1904.12904
2943192824
As neural networks have begun performing increasingly critical tasks for society, ranging from driving cars to identifying candidates for drug development, the value of their ability to perform uncertainty quantification (UQ) in their predictions has risen commensurately. Permanent dropout, a popular method for neural network UQ, involves injecting stochasticity into the inference phase of the model and creating many predictions for each of the test data. This shifts the computational and energy burden of deep neural networks from the training phase to the inference phase. Recent work has demonstrated near-lossless conversion of classical deep neural networks to their spiking counterparts. We use these results to demonstrate the feasibility of conducting the inference phase with permanent dropout on spiking neural networks, mitigating the technique's computational and energy burden, which is essential for its use at scale or on edge platforms. We demonstrate the proposed approach via the Nengo spiking neural simulator on a combination drug therapy dataset for cancer treatment, where UQ is critical. Our results indicate that the spiking approximation gives a predictive distribution practically indistinguishable from that given by the classical network.
Dropout @cite_18 is a regularization method for DNNs. In its simplest form, it randomly turns neurons off during each training minibatch, independently with some probability @math . As originally proposed, the inference phase is unmodified aside from a rescaling of each layer's weights (since more units are active at inference than were during training). The intuition behind the method is that a neuron cannot rely on any particular upstream or downstream neuron to modify its output; it must instead pass on information that is more generally useful, and it is forced to learn redundant representations. As outlined in @cite_18 , dropout may be viewed as approximate model averaging over all networks formed by subsets of the full network architecture. Gal and Ghahramani @cite_3 showed that keeping dropout active during prediction (permadropout) approximates a fully Bayesian treatment, via a connection between neural networks and Gaussian processes. Each forward evaluation gives a random output; many forward evaluations build up an approximate predictive distribution.
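The permadropout procedure can be sketched in a few lines of NumPy. This is a hypothetical toy model (random dropout masks, ReLU hidden layers, a scalar linear readout), not the network used in the paper; it only illustrates how repeated stochastic forward passes build up a predictive distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_with_dropout(x, hidden, readout, p):
    """One stochastic forward pass: dropout stays active at inference."""
    h = x
    for w, b in hidden:
        h = np.maximum(w @ h + b, 0.0)      # ReLU hidden layer
        mask = rng.random(h.shape) >= p     # drop each unit with probability p
        h = h * mask / (1.0 - p)            # inverted-dropout rescaling
    return float(readout @ h)               # linear scalar readout

def predictive_distribution(x, hidden, readout, p=0.5, n=300):
    """Many stochastic passes approximate the Bayesian predictive
    distribution (Gal & Ghahramani); return its mean and spread."""
    samples = np.array([forward_with_dropout(x, hidden, readout, p)
                        for _ in range(n)])
    return samples.mean(), samples.std()
```

The sample standard deviation serves as the uncertainty estimate. Note that each of the n samples is a full network evaluation, which is exactly the inference-phase burden this paper proposes to move onto spiking hardware.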
{ "cite_N": [ "@cite_18", "@cite_3" ], "mid": [ "2095705004", "2964059111" ], "abstract": [ "Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different \"thinned\" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.", "Deep learning tools have gained tremendous attention in applied machine learning. However such tools for regression and classification do not capture model uncertainty. In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost. In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes. A direct result of this theory gives us tools to model uncertainty with dropout NNs - extracting information from existing models that has been thrown away so far. 
This mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy. We perform an extensive study of the properties of dropout's uncertainty. Various network architectures and nonlinearities are assessed on tasks of regression and classification, using MNIST as an example. We show a considerable improvement in predictive log-likelihood and RMSE compared to existing state-of-the-art methods, and finish by using dropout's uncertainty in deep reinforcement learning." ] }
While SNNs are more powerful than DNNs in terms of theoretical computational ability @cite_11 , their often-discontinuous dynamics and computational expense make them harder to train in practice than DNNs, whose training is already a daunting task and the subject of major research. For this reason, the idea of conducting the training phase on a DNN and then finding an SNN with similar behavior is appealing. Several approaches have been suggested for converting a DNN to an SNN while minimizing performance loss. @cite_9 focused on converting DNNs with standard nonlinearities such as the softmax or ReLU functions; @cite_4 expanded on this work to enable conversion of much more general neural architectures.
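The core observation behind rate-based conversion can be illustrated with a minimal integrate-and-fire simulation. This is a sketch with parameters of our own choosing (unit threshold, reset-by-subtraction, non-leaky neuron), not the full method of @cite_9 or @cite_4 : a spiking neuron driven by a constant current fires at a rate that approximates the ReLU of that current, so a trained DNN's activations map onto firing rates.

```python
def if_spike_rate(current, t_sim=1.0, dt=0.001, v_thresh=1.0):
    """Simulate a non-leaky integrate-and-fire neuron driven by a
    constant input current; return its empirical firing rate (Hz)."""
    v, spikes = 0.0, 0
    for _ in range(round(t_sim / dt)):
        v += current * dt          # integrate the input
        if v >= v_thresh:
            v -= v_thresh          # reset by subtraction preserves residual
            spikes += 1
    return spikes / t_sim

# The empirical rate tracks ReLU(current): zero below threshold,
# roughly proportional to the input above it.
rates = {c: if_spike_rate(c) for c in (-0.5, 0.0, 5.0, 20.0)}
```

Longer simulation windows (or a smaller dt) tighten the approximation, which is the latency-accuracy trade-off the conversion literature discusses.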
{ "cite_N": [ "@cite_9", "@cite_4", "@cite_11" ], "mid": [ "1645800954", "2775079417", "2006370340" ], "abstract": [ "Deep neural networks such as Convolutional Networks (ConvNets) and Deep Belief Networks (DBNs) represent the state-of-the-art for many machine learning and computer vision classification problems. To overcome the large computational cost of deep networks, spiking deep networks have recently been proposed, given the specialized hardware now available for spiking neural networks (SNNs). However, this has come at the cost of performance losses due to the conversion from analog neural networks (ANNs) without a notion of time, to sparsely firing, event-driven SNNs. Here we analyze the effects of converting deep ANNs into SNNs with respect to the choice of parameters for spiking neurons such as firing rates and thresholds. We present a set of optimization techniques to minimize performance loss in the conversion process for ConvNets and fully connected deep networks. These techniques yield networks that outperform all previous SNNs on the MNIST database to date, and many networks here are close to maximum performance after only 20 ms of simulated time. The techniques include using rectified linear units (ReLUs) with zero bias during training, and using a new weight normalization method to help regulate firing rates. Our method for converting an ANN into an SNN enables low-latency classification with high accuracies already after the first output spike, and compared with previous SNN approaches it yields improved performance without increased training time. The presented analysis and optimization techniques boost the value of spiking deep networks as an attractive framework for neuromorphic computing platforms aimed at fast and efficient pattern recognition.", "Spiking neural networks (SNNs) can potentially offer an efficient way of doing inference because the neurons in the networks are sparsely activated and computations are event-driven. 
Previous work showed that simple continuous-valued deep Convolutional Neural Networks (CNNs) can be converted into accurate spiking equivalents. These networks did not include certain common operations such as max-pooling, softmax, batch-normalization and Inception-modules. This paper presents spiking equivalents of these operations therefore allowing conversion of nearly arbitrary CNN architectures. We show conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10 and the challenging ImageNet dataset. SNNs can trade off classification error rate against the number of available operations whereas deep continuous-valued neural networks require a fixed number of operations to achieve their classification error rate. From the examples of LeNet for MNIST and BinaryNet for CIFAR-10, we show that with an increase in error rate of a few percentage points, the SNNs can achieve more than 2x reductions in operations compared to the original CNNs. This highlights the potential of SNNs in particular when deployed on power-efficient neuromorphic spiking neuron chips, for use in embedded applications.", "Abstract The computational power of formal models for networks of spiking neurons is compared with that of other neural network models based on McCulloch Pitts neurons (i.e., threshold gates), respectively, sigmoidal gates. In particular it is shown that networks of spiking neurons are, with regard to the number of neurons that are needed, computationally more powerful than these other neural network models. A concrete biologically relevant function is exhibited which can be computed by a single spiking neuron (for biologically reasonable values of its parameters), but which requires hundreds of hidden units on a sigmoidal neural net. 
On the other hand, it is known that any function that can be computed by a small sigmoidal neural net can also be computed by a small network of spiking neurons. This article does not assume prior knowledge about spiking neurons, and it contains an extensive list of references to the currently available literature on computations in networks of spiking neurons and relevant results from neurobiology." ] }
Other work @cite_27 instead tailors the DNN itself so as to optimize the resulting SNN's performance; this is the approach we take in this paper. In particular, we follow the technique outlined in @cite_20 , which requires only the use of a specific activation function, termed the SoftLIF function.
{ "cite_N": [ "@cite_27", "@cite_20" ], "mid": [ "2020676607", "2233731247" ], "abstract": [ "Deep-learning neural networks such as convolutional neural network (CNN) have shown great potential as a solution for difficult vision problems, such as object recognition. Spiking neural networks (SNN)-based architectures have shown great potential as a solution for realizing ultra-low power consumption using spike-based neuromorphic hardware. This work describes a novel approach for converting a deep CNN into a SNN that enables mapping CNN to spike-based hardware architectures. Our approach first tailors the CNN architecture to fit the requirements of SNN, then trains the tailored CNN in the same way as one would with CNN, and finally applies the learned network weights to an SNN architecture derived from the tailored CNN. We evaluate the resulting SNN on publicly available Defense Advanced Research Projects Agency (DARPA) Neovision2 Tower and CIFAR-10 datasets and show similar object recognition accuracy as the original CNN. Our SNN implementation is amenable to direct mapping to spike-based neuromorphic hardware, such as the ones being developed under the DARPA SyNAPSE program. Our hardware mapping analysis suggests that SNN implementation on such spike-based hardware is two orders of magnitude more energy-efficient than the original CNN implementation on off-the-shelf FPGA-based hardware.", "We train spiking deep networks using leaky integrate-and-fire (LIF) neurons, and achieve state-of-the-art results for spiking networks on the CIFAR-10 and MNIST datasets. This demonstrates that biologically-plausible spiking LIF neurons can be integrated into deep networks and perform as well as other spiking models (e.g. integrate-and-fire). We achieved this result by softening the LIF response function, such that its derivative remains bounded, and by training the network with noise to provide robustness against the variability introduced by spikes. 
Our method is general and could be applied to other neuron types, including those used on modern neuromorphic hardware. Our work brings more biological realism into modern image classification models, with the hope that these models can inform how the brain performs this difficult task. It also provides new methods for training deep networks to run on neuromorphic hardware, with the aim of fast, power-efficient image classification for robotics applications." ] }
Given a constant input sufficiently large to trigger an action potential, the firing rate of a linear leaky integrate-and-fire (LIF) neuron with input current @math is given by @cite_23
{ "cite_N": [ "@cite_23" ], "mid": [ "345956458" ], "abstract": [ "1. Introduction Part I. Single Neuron Models: 2. Detailed neuron models 3. Two-dimensional neuron models 4. Formal spiking neuron models 5. Noise in spiking neuron models Part II. Population Models: 6. Population equations 7. Signal transmission and neuronal coding 8. Oscillations and synchrony 9. Spatially structured networks Part III. Models of Synaptic Plasticity: 10. Hebbian models 11. Learning equations 12. Plasticity and coding Bibliography Index." ] }
where @math . Unfortunately, this function is not continuously differentiable, complicating gradient-based optimization methods. To resolve this issue, Hunsberger and Eliasmith @cite_20 suggest replacing @math with a smooth approximation given by
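The construction can be sketched numerically. The time constants and smoothing parameter below are illustrative values of our own choosing, and the formulas paraphrase the idea in @cite_20 (steady-state LIF rate with threshold normalized to 1, with a softplus substituted for the hard rectification), not the paper's exact parameterization:

```python
import numpy as np

def lif_rate(j, tau_ref=0.002, tau_rc=0.02):
    """Steady-state firing rate of a LIF neuron (threshold normalized
    to 1) under constant input current j; zero below threshold.
    Not continuously differentiable at j = 1."""
    j = np.asarray(j, dtype=float)
    out = np.zeros_like(j)
    above = j > 1.0
    out[above] = 1.0 / (tau_ref + tau_rc * np.log1p(1.0 / (j[above] - 1.0)))
    return out

def soft_lif_rate(j, tau_ref=0.002, tau_rc=0.02, gamma=0.02):
    """SoftLIF: replace the hard rectification max(j - 1, 0) with a
    softplus so the rate curve is smooth and gradient-friendly."""
    p = gamma * np.log1p(np.exp((np.asarray(j, dtype=float) - 1.0) / gamma))
    return 1.0 / (tau_ref + tau_rc * np.log1p(1.0 / p))
```

As gamma shrinks, the softplus approaches the rectifier and the two curves coincide; away from the threshold they are already nearly indistinguishable, which is what lets the trained weights transfer to spiking LIF neurons.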
{ "cite_N": [ "@cite_20" ], "mid": [ "2233731247" ], "abstract": [ "We train spiking deep networks using leaky integrate-and-fire (LIF) neurons, and achieve state-of-the-art results for spiking networks on the CIFAR-10 and MNIST datasets. This demonstrates that biologically-plausible spiking LIF neurons can be integrated into deep networks can perform as well as other spiking models (e.g. integrate-and-fire). We achieved this result by softening the LIF response function, such that its derivative remains bounded, and by training the network with noise to provide robustness against the variability introduced by spikes. Our method is general and could be applied to other neuron types, including those used on modern neuromorphic hardware. Our work brings more biological realism into modern image classification models, with the hope that these models can inform how the brain performs this difficult task. It also provides new methods for training deep networks to run on neuromorphic hardware, with the aim of fast, power-efficient image classification for robotics applications." ] }
1904.12760
2942263598
Recently, differentiable search methods have made major progress in reducing the computational costs of neural architecture search. However, these approaches often report lower accuracy in evaluating the searched architecture or transferring it to another dataset. This is arguably due to the large gap between the architecture depths in search and evaluation scenarios. In this paper, we present an efficient algorithm which allows the depth of searched architectures to grow gradually during the training procedure. This brings two issues, namely, heavier computational overheads and weaker search stability, which we solve using search space approximation and regularization, respectively. With a significantly reduced search time (~7 hours on a single GPU), our approach achieves state-of-the-art performance on both the proxy dataset (CIFAR10 or CIFAR100) and the target dataset (ImageNet). Code is available at this https URL.
Image recognition is a fundamental task in computer vision. In recent years, with the development of deep learning, convolutional neural networks (CNNs) have come to dominate image recognition @cite_2 . A number of handcrafted architectures have been proposed, including VGGNet @cite_16 , ResNet @cite_9 , and DenseNet @cite_32 , all of which attest to the importance of human expertise in network design.
{ "cite_N": [ "@cite_9", "@cite_16", "@cite_32", "@cite_2" ], "mid": [ "2949650786", "2962835968", "2511730936", "2163605009" ], "abstract": [ "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. 
These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1) 2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at this https URL .", "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. 
On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry." ] }
Our work belongs to the emerging field of neural architecture search (NAS), the process of automating architecture engineering @cite_23 . Pioneering researchers began exploring the possibility of automatically generating better topologies with evolutionary algorithms in the 2000s @cite_21 . Early NAS work tried to search for a complete network topology @cite_4 @cite_0 , while recent work has focused on finding robust cells @cite_18 @cite_36 @cite_19 . Lately, EA-based @cite_3 and RL-based @cite_5 NAS approaches have achieved state-of-the-art performance in image recognition, with architectures sampled from the search space and evaluated under the guidance of an EA-based or RL-based meta-controller. A notable drawback of these approaches is their expensive computational overhead (3,150 GPU-days for the EA-based AmoebaNet @cite_3 and 1,800 GPU-days for the RL-based NASNet @cite_5 ). PNAS proposed learning a surrogate model to guide the search through the structure space, achieving a 5 @math speedup over NASNet. ENAS @cite_29 proposed sharing parameters among child models to avoid training each candidate architecture from scratch, which significantly reduced the search cost to less than one GPU-day.
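The sample-evaluate-update loop shared by these meta-controllers can be sketched with a toy evolutionary search. Everything here is illustrative: the search space (a list of layer widths), the `proxy_score` stand-in for validation accuracy, and the population settings are our own inventions, not the actual AmoebaNet @cite_3 setup:

```python
import random

random.seed(0)

CHOICES = [16, 32, 64]          # toy search space: per-layer widths

def sample_architecture(depth=3):
    return [random.choice(CHOICES) for _ in range(depth)]

def proxy_score(arch):
    """Stand-in for validation accuracy after training. In a real NAS
    run, evaluating this for each child is the expensive step that
    dominates the thousands of GPU-days quoted above."""
    return sum(arch) - 0.5 * (max(arch) - min(arch))

def mutate(arch):
    child = list(arch)
    child[random.randrange(len(child))] = random.choice(CHOICES)
    return child

def regularized_evolution(population=8, generations=50):
    """Minimal EA meta-controller: tournament-select a parent, append
    a mutated child, age out the oldest member (regularized evolution)."""
    pop = [sample_architecture() for _ in range(population)]
    for _ in range(generations):
        parent = max(random.sample(pop, 3), key=proxy_score)  # tournament
        pop.append(mutate(parent))
        pop.pop(0)                                            # aging
    return max(pop, key=proxy_score)
```

ENAS-style weight sharing attacks exactly the `proxy_score` step: instead of training each child from scratch, children inherit parameters from a shared super-network, which is what collapses the search cost to under a GPU-day.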
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_36", "@cite_29", "@cite_21", "@cite_3", "@cite_0", "@cite_19", "@cite_23", "@cite_5" ], "mid": [ "", "2556833785", "", "2785366763", "2148872333", "2785430118", "", "", "2885311373", "2964081807" ], "abstract": [ "", "At present, designing convolutional neural network (CNN) architectures requires both human expertise and labor. New architectures are handcrafted by careful experimentation or modified from a handful of existing networks. We introduce MetaQNN, a meta-modeling algorithm based on reinforcement learning to automatically generate high-performing CNN architectures for a given learning task. The learning agent is trained to sequentially choose CNN layers using @math -learning with an @math -greedy exploration strategy and experience replay. The agent explores a large but finite space of possible architectures and iteratively discovers designs with improved performance on the learning task. On image classification benchmarks, the agent-designed networks (consisting of only standard convolution, pooling, and fully-connected layers) beat existing networks designed with the same layer types and are competitive against the state-of-the-art methods that use more complex layer types. We also outperform existing meta-modeling approaches for network design on image classification tasks.", "", "We propose Efficient Neural Architecture Search (ENAS), a fast and inexpensive approach for automatic model design. In ENAS, a controller learns to discover neural network architectures by searching for an optimal subgraph within a large computational graph. The controller is trained with policy gradient to select a subgraph that maximizes the expected reward on the validation set. Meanwhile the model corresponding to the selected subgraph is trained to minimize a canonical cross entropy loss. 
Thanks to parameter sharing between child models, ENAS is fast: it delivers strong empirical performances using much fewer GPU-hours than all existing automatic model design approaches, and notably, 1000x less expensive than standard Neural Architecture Search. On the Penn Treebank dataset, ENAS discovers a novel architecture that achieves a test perplexity of 55.8, establishing a new state-of-the-art among all methods without post-training processing. On the CIFAR-10 dataset, ENAS designs novel architectures that achieve a test error of 2.89%, which is on par with NASNet (2018), whose test error is 2.65%.", "", "The effort devoted to hand-crafting image classifiers has motivated the use of architecture search to discover them automatically. Reinforcement learning and evolution have both shown promise for this purpose. This study introduces a regularized version of a popular asynchronous evolutionary algorithm. We rigorously compare it to the non-regularized form and to a highly-successful reinforcement learning baseline. Using the same hardware, compute effort and neural network training code, we conduct repeated experiments side-by-side, exploring different datasets, search spaces and scales. We show regularized evolution consistently produces models with similar or higher accuracy, across a variety of contexts without need for re-tuning parameters. In addition, regularized evolution exhibits considerably better performance than reinforcement learning at early search stages, suggesting it may be the better choice when fewer compute resources are available. This constitutes the first controlled comparison of the two search algorithms in this context. Finally, we present new architectures discovered with regularized evolution that we nickname AmoebaNets. 
These models set a new state of the art for CIFAR-10 (mean test error = 2.13%) and mobile-size ImageNet (top-5 accuracy = 92.1% with 5.06M parameters), and reach the current state of the art for ImageNet (top-5 accuracy = 96.2%).", "", "", "Deep Learning has enabled remarkable progress over the last years on a variety of tasks, such as image recognition, speech recognition, and machine translation. One crucial aspect for this progress are novel neural architectures. Currently employed architectures have mostly been developed manually by human experts, which is a time-consuming and error-prone process. Because of this, there is growing interest in automated neural architecture search methods. We provide an overview of existing work in this field of research and categorize them according to three dimensions: search space, search strategy, and performance estimation strategy.", "Developing neural network image classification models often requires significant architecture engineering. In this paper, we study a method to learn the model architectures directly on the dataset of interest. As this approach is expensive when the dataset is large, we propose to search for an architectural building block on a small dataset and then transfer the block to a larger dataset. The key contribution of this work is the design of a new search space (which we call the \"NASNet search space\") which enables transferability. In our experiments, we search for the best convolutional layer (or \"cell\") on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking together more copies of this cell, each with their own parameters to design a convolutional architecture, which we name a \"NASNet architecture\". We also introduce a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models. On CIFAR-10 itself, a NASNet found by our method achieves a 2.4% error rate, which is state-of-the-art. 
Although the cell is not searched for directly on ImageNet, a NASNet constructed from the best cell achieves, among the published works, state-of-the-art accuracy of 82.7 top-1 and 96.2 top-5 on ImageNet. Our model is 1.2 better in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS - a reduction of 28 in computational demand from the previous state-of-the-art model. When evaluated at different levels of computational cost, accuracies of NASNets exceed those of the state-of-the-art human-designed models. For instance, a small version of NASNet also achieves 74 top-1 accuracy, which is 3.1 better than equivalently-sized, state-of-the-art models for mobile platforms. Finally, the image features learned from image classification are generically useful and can be transferred to other computer vision problems. On the task of object detection, the learned features by NASNet used with the Faster-RCNN framework surpass state-of-the-art by 4.0 achieving 43.1 mAP on the COCO dataset." ] }
1904.12760
2942263598
Recently, differentiable search methods have made major progress in reducing the computational costs of neural architecture search. However, these approaches often report lower accuracy in evaluating the searched architecture or transferring it to another dataset. This is arguably due to the large gap between the architecture depths in search and evaluation scenarios. In this paper, we present an efficient algorithm which allows the depth of searched architectures to grow gradually during the training procedure. This brings two issues, namely, heavier computational overheads and weaker search stability, which we solve using search space approximation and regularization, respectively. With a significantly reduced search time ( 7 hours on a single GPU), our approach achieves state-of-the-art performance on both the proxy dataset (CIFAR10 or CIFAR100) and the target dataset (ImageNet). Code is available at this https URL.
DARTS @cite_1 introduced a differentiable NAS framework, which achieved remarkable improvements in performance and efficiency. Following DARTS, SNAS @cite_38 proposed constraining the architecture parameters to be one-hot, to tackle the inconsistency between the optimization objectives of the search and evaluation scenarios. ProxylessNAS @cite_10 adopted the differentiable framework and proposed searching architectures on the target task directly instead of following the conventional proxy-based framework.
{ "cite_N": [ "@cite_38", "@cite_1", "@cite_10" ], "mid": [ "2905672847", "2951104886", "2902251695" ], "abstract": [ "We propose Stochastic Neural Architecture Search (SNAS), an economical end-to-end solution to Neural Architecture Search (NAS) that trains neural operation parameters and architecture distribution parameters in same round of back-propagation, while maintaining the completeness and differentiability of the NAS pipeline. In this work, NAS is reformulated as an optimization problem on parameters of a joint distribution for the search space in a cell. To leverage the gradient information in generic differentiable loss for architecture search, a novel search gradient is proposed. We prove that this search gradient optimizes the same objective as reinforcement-learning-based NAS, but assigns credits to structural decisions more efficiently. This credit assignment is further augmented with locally decomposable reward to enforce a resource-efficient constraint. In experiments on CIFAR-10, SNAS takes less epochs to find a cell architecture with state-of-the-art accuracy than non-differentiable evolution-based and reinforcement-learning-based NAS, which is also transferable to ImageNet. It is also shown that child networks of SNAS can maintain the validation accuracy in searching, with which attention-based NAS requires parameter retraining to compete, exhibiting potentials to stride towards efficient NAS on big datasets.", "This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner. Unlike conventional approaches of applying evolution or reinforcement learning over a discrete and non-differentiable search space, our method is based on the continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent. 
Extensive experiments on CIFAR-10, ImageNet, Penn Treebank and WikiText-2 show that our algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques. Our implementation has been made publicly available to facilitate further research on efficient architecture search algorithms.", "Neural architecture search (NAS) has a great impact by automatically designing effective neural network architectures. However, the prohibitive computational demand of conventional NAS algorithms (e.g. @math GPU hours) makes it difficult to search the architectures on large-scale tasks (e.g. ImageNet). Differentiable NAS can reduce the cost of GPU hours via a continuous representation of network architecture but suffers from the high GPU memory consumption issue (grow linearly w.r.t. candidate set size). As a result, they need to utilize tasks, such as training on a smaller dataset, or learning with only a few blocks, or training just for a few epochs. These architectures optimized on proxy tasks are not guaranteed to be optimal on the target task. In this paper, we present that can learn the architectures for large-scale target tasks and target hardware platforms. We address the high memory consumption issue of differentiable NAS and reduce the computational cost (GPU hours and GPU memory) to the same level of regular training while still allowing a large candidate set. Experiments on CIFAR-10 and ImageNet demonstrate the effectiveness of directness and specialization. On CIFAR-10, our model achieves 2.08 test error with only 5.7M parameters, better than the previous state-of-the-art architecture AmoebaNet-B, while using 6 @math fewer parameters. On ImageNet, our model achieves 3.1 better top-1 accuracy than MobileNetV2, while being 1.2 @math faster with measured GPU latency. 
We also apply ProxylessNAS to specialize neural architectures for hardware with direct hardware metrics (e.g. latency) and provide insights for efficient CNN architecture design." ] }
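The continuous relaxation that DARTS-style methods use — replacing the discrete choice of operation on an edge by a softmax-weighted mixture, so that architecture parameters become differentiable — can be sketched in a few lines. This is a minimal toy illustration, not DARTS itself: the candidate operations and the `alpha` values below are made-up stand-ins.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Toy stand-ins for an edge's candidate operations (e.g. conv, skip, zero).
ops = [
    lambda x: 0.5 * x,   # stand-in for a parametric op
    lambda x: x,         # identity / skip connection
    lambda x: 0.0 * x,   # "zero" op, which effectively prunes the edge
]

# Architecture parameters for this edge (learned by gradient descent in DARTS;
# hard-coded here for illustration).
alpha = np.array([0.1, 2.0, -1.0])

def mixed_op(x, alpha):
    # The edge's output is the softmax-weighted sum of all candidate ops,
    # which makes the architecture choice differentiable w.r.t. alpha.
    w = softmax(alpha)
    return sum(wi * op(x) for wi, op in zip(w, ops))

x = np.ones(4)
y = mixed_op(x, alpha)
```

After search, the discrete architecture is recovered by keeping the operation with the largest `alpha` on each edge; SNAS instead samples a one-hot choice so that search and evaluation optimize the same objective.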
1904.12816
2940494670
With the advancement of technology and subsequently the age of digital information, online trustworthy identification has become increasingly more important. With respect to the various data breaches and privacy regulations, the current identity solutions are not fully optimized. In this paper, we will take a look at several Self-Sovereign Identity solutions which are already available. Some of them are built upon blockchain technology as this already provides decentralised persistent data and consensus. We will explore the emerging landscape of Self-Sovereign Identity solutions and dissect their implementations under multiple aspect criteria to determine the necessity of blockchain technology in this field. We conclude that blockchain technology is not explicitly required for a Self-Sovereign Identity solution but it is a good foundation to build up on, due to various technical advantages that the blockchain has to offer.
In the work by K. Cameron @cite_1 , seven laws are described that explain the successes and failures of digital identity systems. K. Cameron states that these laws are necessary to avoid undesirable side-effects. The laws and their explanations are extensive and describe the requirements of solutions in detail. However, some of these laws could be more distinctive. The first law, for example, could be split into two separate properties. Some implementations may satisfy one of these properties, but not the other.
{ "cite_N": [ "@cite_1" ], "mid": [ "1523091741" ], "abstract": [ "This paper is about how we can prevent the loss of trust and go forward to give Internet users a deep sense of safety, privacy, and certainty about whom they are relating to in cyberspace. Nothing could be more essential if Web-based services and applications are to continue to move beyond \"cyber publication\" and encompass all kinds of interaction and services. Our approach has been to develop a formal understanding of the dynamics causing digital identity systems to succeed or fail in various contexts, expressed as the Laws of Identity. Taken together, these laws define a unifying identity metasystem that can offer the Internet the identity layer it so obviously requires." ] }
1904.12816
2940494670
With the advancement of technology and subsequently the age of digital information, online trustworthy identification has become increasingly more important. With respect to the various data breaches and privacy regulations, the current identity solutions are not fully optimized. In this paper, we will take a look at several Self-Sovereign Identity solutions which are already available. Some of them are built upon blockchain technology as this already provides decentralised persistent data and consensus. We will explore the emerging landscape of Self-Sovereign Identity solutions and dissect their implementations under multiple aspect criteria to determine the necessity of blockchain technology in this field. We conclude that blockchain technology is not explicitly required for a Self-Sovereign Identity solution but it is a good foundation to build up on, due to various technical advantages that the blockchain has to offer.
In the work by Q. Stokkink and J. Pouwelse @cite_9 , these ten principles of C. Allen are used in assessing the digital identity solution presented in their paper. Q. Stokkink and J. Pouwelse add an extra property to this list, which requires claims to be provable.
{ "cite_N": [ "@cite_9" ], "mid": [ "2807661999" ], "abstract": [ "Digital identity is unsolved: after many years of research there is still no trusted communication over the Internet. To provide identity within the context of mutual distrust, this paper presents a blockchain-based digital identity solution. Without depending upon a single trusted third party, the proposed solution achieves passport-level legally valid identity. This solution for making identities Self-Sovereign, builds on a generic provable claim model for which attestations of truth from third parties need to be collected. The claim model is then shown to be both blockchain structure and proof method agnostic. Four different implementations in support of these two claim model properties are shown to offer sub-second performance for claim creation and claim verification. Through the properties of Self-Sovereign Identity, legally valid status and acceptable performance, our solution is considered to be fit for adoption by the general public." ] }
1904.12816
2940494670
With the advancement of technology and subsequently the age of digital information, online trustworthy identification has become increasingly more important. With respect to the various data breaches and privacy regulations, the current identity solutions are not fully optimized. In this paper, we will take a look at several Self-Sovereign Identity solutions which are already available. Some of them are built upon blockchain technology as this already provides decentralised persistent data and consensus. We will explore the emerging landscape of Self-Sovereign Identity solutions and dissect their implementations under multiple aspect criteria to determine the necessity of blockchain technology in this field. We conclude that blockchain technology is not explicitly required for a Self-Sovereign Identity solution but it is a good foundation to build up on, due to various technical advantages that the blockchain has to offer.
X. Zhu and Y. Badr explored the possibilities of currently available authentication solutions for the Internet of Things @cite_23 . Although attestations about an identity are not required for the Internet of Things, several aspects do overlap, such as scalability, since a large population must be able to use the system, and interoperability, to prevent reliance on a single provider. Their work briefly touches upon multiple implementations used in this survey but does not draw any conclusions.
{ "cite_N": [ "@cite_23" ], "mid": [ "2902354764" ], "abstract": [ "The Internet of Things aims at connecting everything, ranging from individuals, organizations, and companies to things in the physical and virtual world. The digital identity has always been considered as the keystone for all online services and the foundation for building security mechanisms such as authentication and authorization. However, the current literature still lacks a comprehensive study on the digital identity management for the Internet of Things (IoT). In this paper, we firstly identify the requirements of building identity management systems for IoT, which comprises scalability, interoperability, mobility, security and privacy. Then, we trace the identity problem back to the origin in philosophy, analyze the Internet digital identity management solutions in the context of IoT and investigate recent surging blockchain sovereign identity solutions. Finally, we point out the promising future research trends in building IoT identity management systems and elaborate challenges of building a complete identity management system for the IoT, including access control, privacy preserving, trust and performance respectively." ] }
1904.12816
2940494670
With the advancement of technology and subsequently the age of digital information, online trustworthy identification has become increasingly more important. With respect to the various data breaches and privacy regulations, the current identity solutions are not fully optimized. In this paper, we will take a look at several Self-Sovereign Identity solutions which are already available. Some of them are built upon blockchain technology as this already provides decentralised persistent data and consensus. We will explore the emerging landscape of Self-Sovereign Identity solutions and dissect their implementations under multiple aspect criteria to determine the necessity of blockchain technology in this field. We conclude that blockchain technology is not explicitly required for a Self-Sovereign Identity solution but it is a good foundation to build up on, due to various technical advantages that the blockchain has to offer.
In @cite_20 a comparison between Sovrin, uPort, and ShoCard with respect to K. Cameron's seven laws of identity is made. Sovrin and uPort are Self-Sovereign Identity solutions, whereas ShoCard is called a decentralized trusted identity. With ShoCard, identity proofing of users based on existing trusted credentials is stored on a blockchain. It is concluded that distributed ledger technology is not a silver bullet and that the usability aspect in particular has to improve.
{ "cite_N": [ "@cite_20" ], "mid": [ "2964062734" ], "abstract": [ "The emergence of distributed ledger technology (DLT) based on a blockchain data structure has given rise to new approaches to identity management that aim to upend dominant approaches to providing and consuming digital identities. These new approaches to identity management (IdM) propose to enhance de-centralization, transparency, and user control in transactions that involve identity information; however, given the historical challenge to design IdM, can these new DLT-based schemes deliver on their lofty goals? We introduce the emerging landscape of DLT-based IdM and evaluate three representative proposals—uPort, ShoCard, and Sovrin—using the analytic lens of a seminal framework that characterizes the nature of successful IdM schemes." ] }
1904.12691
2942034515
We reformulate the option framework as two parallel augmented MDPs. Under this novel formulation, all policy optimization algorithms can be used off the shelf to learn intra-option policies, option termination conditions, and a master policy over options. We apply an actor-critic algorithm on each augmented MDP, yielding the Double Actor-Critic (DAC) architecture. Furthermore, we show that, when state-value functions are used as critics, one critic can be expressed in terms of the other, and hence only one critic is necessary. Our experiments on challenging robot simulation tasks demonstrate that DAC outperforms previous gradient-based option learning algorithms by a large margin and significantly outperforms its hierarchy-free counterparts in a transfer learning setting.
Many components of @math are not new. The idea of an augmented MDP is suggested by @cite_9 in AHP. The augmented state spaces @math and @math are also used by @cite_2 to simplify the derivation. Applying vanilla policy gradient to @math and @math leads immediately to the Intra-Option Policy Gradient Theorem. The augmented policy @math is also used by @cite_10 to simplify the derivation. However, neither OC nor IOPG works on the augmented state space directly. To the best of our knowledge, @math is the first explicit formulation of the two augmented MDPs. It is this explicit formulation that allows the off-the-shelf application of all state-of-the-art policy optimization algorithms and combines the advantages of both OC and AHP, yielding a significant empirical performance boost. Furthermore, it is this explicit formulation that generates a family of policy-based intra-option algorithms for master policy learning.
{ "cite_N": [ "@cite_9", "@cite_10", "@cite_2" ], "mid": [ "2097828232", "2785940258", "2523728418" ], "abstract": [ "Temporally extended actions (or macro-actions) have proven useful for speeding up planning and learning, adding robustness, and building prior knowledge into AI systems. The options framework, as introduced in Sutton, Precup and Singh (1999), provides a natural way to incorporate macro-actions into reinforcement learning. In the subgoals approach, learning is divided into two phases, first learning each option with a prescribed subgoal, and then learning to compose the learned options together. In this paper we offer a unified framework for concurrent inter- and intra-options learning. To that end, we propose a modular parameterization of intra-option policies together with option termination conditions and the option selection policy (inter options), and show that these three decision components may be viewed as a unified policy over an augmented state-action space, to which standard policy gradient algorithms may be applied. We identify the basis functions that apply to each of these decision components, and show that they possess a useful orthogonality property that allows to compute the natural gradient independently for each component. We further outline the extension of the suggested framework to several levels of options hierarchy, and conclude with a brief illustrative example.", "In the pursuit of increasingly intelligent learning systems, abstraction plays a vital role in enabling sophisticated decisions to be made in complex environments. The options framework provides formalism for such abstraction over sequences of decisions. However most models require that options be given a priori, presumably specified by hand, which is neither efficient, nor scalable. Indeed, it is preferable to learn options directly from interaction with the environment. 
Despite several efforts, this remains a difficult problem: many approaches require access to a model of the environmental dynamics, and inferred options are often not interpretable, which limits our ability to explain the system behavior for verification or debugging purposes. In this work we develop a novel policy gradient method for the automatic learning of policies with options. This algorithm uses inference methods to simultaneously improve all of the options available to an agent, and thus can be employed in an off-policy manner, without observing option labels. Experimental results show that the options learned can be interpreted. Further, we find that the method presented here is more sample efficient than existing methods, leading to faster and more stable learning of policies with options.", "Temporal abstraction is key to scaling up learning and planning in reinforcement learning. While planning with temporally extended actions is well understood, creating such abstractions autonomously from data has remained challenging. We tackle this problem in the framework of options [Sutton, Precup & Singh, 1999; Precup, 2000]. We derive policy gradient theorems for options and propose a new option-critic architecture capable of learning both the internal policies and the termination conditions of options, in tandem with the policy over options, and without the need to provide any additional rewards or subgoals. Experimental results in both discrete and continuous environments showcase the flexibility and efficiency of the framework." ] }
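The two-augmented-MDP view described above — a "high" MDP whose state is the environment state paired with the previous option and whose action is the next option, and a "low" MDP whose state is paired with the current option and whose action is a primitive action — can be sketched in tabular form. This is an illustrative sketch only: the termination, master, and intra-option parameters below are random stand-ins, not learned values, and the variable names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_options, n_actions = 4, 2, 3

# Random stand-ins for the learned option components.
beta = rng.random((n_options, n_states))                 # termination probabilities
mu = rng.random((n_states, n_options))                   # master policy over options
mu /= mu.sum(axis=1, keepdims=True)
pi = rng.random((n_options, n_states, n_actions))        # intra-option policies
pi /= pi.sum(axis=2, keepdims=True)

def high_policy(s, o_prev):
    """Policy of the high augmented MDP: state (s, o_prev), action o_t.

    With probability beta the previous option terminates and the master
    policy picks a fresh option; otherwise the previous option continues.
    """
    p = beta[o_prev, s] * mu[s]
    p[o_prev] += 1.0 - beta[o_prev, s]
    return p

def low_policy(s, o):
    """Policy of the low augmented MDP: state (s, o_t), action a_t."""
    return pi[o, s]

p_o = high_policy(s=1, o_prev=0)          # distribution over the next option
p_a = low_policy(s=1, o=int(p_o.argmax()))  # distribution over primitive actions
```

Because each augmented MDP exposes an ordinary flat policy over its own action space, any standard policy optimization algorithm (e.g. an actor-critic) can be run on each of them unmodified, which is the core of the DAC architecture.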
1904.12535
2951383997
In this paper, we consider the problem of open information extraction (OIE) for extracting entity and relation level intermediate structures from sentences in open-domain. We focus on four types of valuable intermediate structures (Relation, Attribute, Description, and Concept), and propose a unified knowledge expression form, SAOKE, to express them. We publicly release a data set which contains more than forty thousand sentences and the corresponding facts in the SAOKE format labeled by crowd-sourcing. To our knowledge, this is the largest publicly available human labeled data set for open information extraction tasks. Using this labeled SAOKE data set, we train an end-to-end neural model using the sequenceto-sequence paradigm, called Logician, to transform sentences into facts. For each sentence, different to existing algorithms which generally focus on extracting each single fact without concerning other possible facts, Logician performs a global optimization over all possible involved facts, in which facts not only compete with each other to attract the attention of words, but also cooperate to share words. An experimental study on various types of open domain relation extraction tasks reveals the consistent superiority of Logician to other states-of-the-art algorithms. The experiments verify the reasonableness of SAOKE format, the valuableness of SAOKE data set, the effectiveness of the proposed Logician model, and the feasibility of the methodology to apply end-to-end learning paradigm on supervised data sets for the challenging tasks of open information extraction.
Tuples are the most common knowledge expression format for OIE systems, expressing n-ary relations between a subject and objects. Beyond such information, ClausIE @cite_51 extracts extra information in the tuples: a complement and one or more adverbials, while OLLIE @cite_36 extracts additional context information. SAOKE is able to express n-ary relations and can be easily extended to support the knowledge extracted by ClausIE, but it would need to be redesigned to support context information, which we leave to future work.
{ "cite_N": [ "@cite_36", "@cite_51" ], "mid": [ "2129842875", "1529731474" ], "abstract": [ "Open Information Extraction (IE) systems extract relational tuples from text, without requiring a pre-specified vocabulary, by identifying relation phrases and associated arguments in arbitrary sentences. However, state-of-the-art Open IE systems such as ReVerb and woe share two important weaknesses -- (1) they extract only relations that are mediated by verbs, and (2) they ignore context, thus extracting tuples that are not asserted as factual. This paper presents ollie, a substantially improved Open IE system that addresses both these limitations. First, ollie achieves high yield by extracting relations mediated by nouns, adjectives, and more. Second, a context-analysis step increases precision by including contextual information from the sentence in the extractions. ollie obtains 2.7 times the area under precision-yield curve (AUC) compared to ReVerb and 1.9 times the AUC of woeparse.", "We propose ClausIE, a novel, clause-based approach to open information extraction, which extracts relations and their arguments from natural language text. ClausIE fundamentally differs from previous approaches in that it separates the detection of useful'' pieces of information expressed in a sentence from their representation in terms of extractions. In more detail, ClausIE exploits linguistic knowledge about the grammar of the English language to first detect clauses in an input sentence and to subsequently identify the type of each clause according to the grammatical function of its constituents. Based on this information, ClausIE is able to generate high-precision extractions; the representation of these extractions can be flexibly customized to the underlying application. ClausIE is based on dependency parsing and a small set of domain-independent lexica, operates sentence by sentence without any post-processing, and requires no training data (whether labeled or unlabeled). 
Our experimental study on various real-world datasets suggests that ClausIE obtains higher recall and higher precision than existing approaches, both on high-quality text as well as on noisy text as found in the web." ] }
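The n-ary tuples discussed above — one subject, one predicate, and a variable number of objects — can be captured by a simple record type. The class below is a hypothetical illustration of this generic shape, not the actual SAOKE schema, and the example fact is invented.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Fact:
    """A generic n-ary OIE tuple: (subject, predicate, object_1, ..., object_n)."""
    subject: str
    predicate: str
    objects: List[str] = field(default_factory=list)

    def arity(self) -> int:
        # Number of objects attached to the predicate.
        return len(self.objects)

# An invented example fact in this generic tuple shape.
f = Fact(subject="ClausIE", predicate="extracts", objects=["a complement", "adverbials"])
```

Extending such a record with optional slots (e.g. a complement field, as ClausIE produces) is straightforward, whereas attaching per-fact context information would require a deeper redesign of the expression format.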
1904.12535
2951383997
In this paper, we consider the problem of open information extraction (OIE) for extracting entity and relation level intermediate structures from sentences in open-domain. We focus on four types of valuable intermediate structures (Relation, Attribute, Description, and Concept), and propose a unified knowledge expression form, SAOKE, to express them. We publicly release a data set which contains more than forty thousand sentences and the corresponding facts in the SAOKE format labeled by crowd-sourcing. To our knowledge, this is the largest publicly available human labeled data set for open information extraction tasks. Using this labeled SAOKE data set, we train an end-to-end neural model using the sequenceto-sequence paradigm, called Logician, to transform sentences into facts. For each sentence, different to existing algorithms which generally focus on extracting each single fact without concerning other possible facts, Logician performs a global optimization over all possible involved facts, in which facts not only compete with each other to attract the attention of words, but also cooperate to share words. An experimental study on various types of open domain relation extraction tasks reveals the consistent superiority of Logician to other states-of-the-art algorithms. The experiments verify the reasonableness of SAOKE format, the valuableness of SAOKE data set, the effectiveness of the proposed Logician model, and the feasibility of the methodology to apply end-to-end learning paradigm on supervised data sets for the challenging tasks of open information extraction.
However, Logician is not limited to the relation extraction task. First, Logician extracts more information beyond relations. Second, Logician focuses on examining how natural languages express facts @cite_2 and on producing helpful intermediate structures for high-level tasks.
{ "cite_N": [ "@cite_2" ], "mid": [ "2471366537" ], "abstract": [ "How do we scale information extraction to the massive size and unprecedented heterogeneity of the Web corpus? Beginning in 2003, our KnowItAll project has sought to extract high-quality knowledge from the Web. In 2007, we introduced the Open Information Extraction (Open IE) paradigm which eschews hand-labeled training examples, and avoids domain-specific verbs and nouns, to develop unlexicalized, domain-independent extractors that scale to the Web corpus. Open IE systems have extracted billions of assertions as the basis for both common-sense knowledge and novel question-answering systems. This paper describes the second generation of Open IE systems, which rely on a novel model of how relations and their arguments are expressed in English sentences to double precision recall compared with previous systems such as TEXTRUNNER and WOE." ] }
1904.12535
2951383997
In this paper, we consider the problem of open information extraction (OIE) for extracting entity and relation level intermediate structures from sentences in open-domain. We focus on four types of valuable intermediate structures (Relation, Attribute, Description, and Concept), and propose a unified knowledge expression form, SAOKE, to express them. We publicly release a data set which contains more than forty thousand sentences and the corresponding facts in the SAOKE format labeled by crowd-sourcing. To our knowledge, this is the largest publicly available human labeled data set for open information extraction tasks. Using this labeled SAOKE data set, we train an end-to-end neural model using the sequenceto-sequence paradigm, called Logician, to transform sentences into facts. For each sentence, different to existing algorithms which generally focus on extracting each single fact without concerning other possible facts, Logician performs a global optimization over all possible involved facts, in which facts not only compete with each other to attract the attention of words, but also cooperate to share words. An experimental study on various types of open domain relation extraction tasks reveals the consistent superiority of Logician to other states-of-the-art algorithms. The experiments verify the reasonableness of SAOKE format, the valuableness of SAOKE data set, the effectiveness of the proposed Logician model, and the feasibility of the methodology to apply end-to-end learning paradigm on supervised data sets for the challenging tasks of open information extraction.
Efforts have been made to map natural language sentences into logical forms. Some approaches, such as @cite_47 @cite_24 @cite_11 @cite_48 , learn the mapping under the supervision of manually labeled logical forms, while others @cite_12 @cite_29 are indirectly supervised by distant information, system rewards, etc. However, all previous works rely on a pre-defined, domain-specific logical system, which limits their ability to learn facts outside that pre-defined system.
{ "cite_N": [ "@cite_48", "@cite_29", "@cite_24", "@cite_47", "@cite_12", "@cite_11" ], "mid": [ "2963794306", "2251673953", "1496189301", "2107618763", "2163561827", "2295953541" ], "abstract": [ "Semantic parsing aims at mapping natural language to machine interpretable meaning representations. Traditional approaches rely on high-quality lexicons, manually-built templates, and linguistic features which are either domainor representation-specific. In this paper we present a general method based on an attention-enhanced encoder-decoder model. We encode input utterances into vector representations, and generate their logical forms by conditioning the output sequences or trees on the encoding vectors. Experimental results on four datasets show that our approach performs competitively without using hand-engineered features and is easy to adapt across domains and meaning representations.", "We consider the challenge of learning semantic parsers that scale to large, open-domain problems, such as question answering with Freebase. In such settings, the sentences cover a wide variety of topics and include many phrases whose meaning is difficult to represent in a fixed target ontology. For example, even simple phrases such as ‘daughter’ and ‘number of people living in’ cannot be directly represented in Freebase, whose ontology instead encodes facts about gender, parenthood, and population. In this paper, we introduce a new semantic parsing approach that learns to resolve such ontological mismatches. The parser is learned from question-answer pairs, uses a probabilistic CCG to build linguistically motivated logicalform meaning representations, and includes an ontology matching model that adapts the output logical forms for each target ontology. 
Experiments demonstrate state-of-the-art performance on two benchmark semantic parsing datasets, including a nine point accuracy improvement on a recent Freebase QA corpus.", "This paper addresses the problem of mapping natural language sentences to lambda–calculus encodings of their meaning. We describe a learning algorithm that takes as input a training set of sentences labeled with expressions in the lambda calculus. The algorithm induces a grammar for the problem, along with a log-linear model that represents a distribution over syntactic and semantic analyses conditioned on the input sentence. We apply the method to the task of learning natural language interfaces to databases and show that the learned parsers outperform previous methods in two benchmark database domains.", "This paper presents a method for inducing transformation rules that map natural-language sentences into a formal query or command language. The approach assumes a formal grammar for the target representation language and learns transformation rules that exploit the non-terminal symbols in this grammar. The learned transformation rules incrementally map a natural-language sentence or its syntactic parse tree into a parse-tree for the target formal language. Experimental results are presented for two corpora: one which maps English instructions into an existing formal coaching language for simulated RoboCup soccer agents, and another which maps English U.S.-geography questions into a database query language. We show that our method performs overall better and faster than previous approaches in both domains.", "Supervised training procedures for semantic parsers produce high-quality semantic parsers, but they have difficulty scaling to large databases because of the sheer number of logical constants for which they must see labeled training data. 
We present a technique for developing semantic parsers for large databases based on a reduction to standard supervised training algorithms, schema matching, and pattern learning. Leveraging techniques from each of these areas, we develop a semantic parser for Freebase that is capable of parsing questions with an F1 that improves by 0.42 over a purely-supervised learning algorithm.", "" ] }
1904.12257
2941689626
Video deblurring is a challenging task due to the spatially variant blur caused by camera shake, object motions, and depth variations, etc. Existing methods usually estimate optical flow in the blurry video to align consecutive frames or approximate blur kernels. However, they tend to generate artifacts or cannot effectively remove blur when the estimated optical flow is not accurate. To overcome the limitation of separate optical flow estimation, we propose a Spatio-Temporal Filter Adaptive Network (STFAN) for the alignment and deblurring in a unified framework. The proposed STFAN takes both blurry and restored images of the previous frame as well as the blurry image of the current frame as input, and dynamically generates the spatially adaptive filters for the alignment and deblurring. We then propose a new Filter Adaptive Convolutional (FAC) layer to align the deblurred features of the previous frame with the current frame and remove the spatially variant blur from the features of the current frame. Finally, we develop a reconstruction network which takes the fusion of two transformed features to restore the clear frames. Both quantitative and qualitative evaluation results on the benchmark datasets and real-world videos demonstrate that the proposed algorithm performs favorably against state-of-the-art methods in terms of accuracy, speed, and model size.
With the development of deep learning, many CNN-based methods have been proposed to solve dynamic scene deblurring. The methods of @cite_43 and @cite_22 utilize CNNs to estimate non-uniform blur kernels. However, the predicted kernels are line-shaped, which is inaccurate in some scenarios, and a time-consuming conventional non-blind deblurring step @cite_3 is generally required to restore the sharp image. More recently, many end-to-end CNN models @cite_18 @cite_35 @cite_26 @cite_55 @cite_56 have also been proposed for image deblurring. To obtain a large receptive field for handling large blur, the multi-scale strategy is used in @cite_18 @cite_44 . To deal with dynamic scene blur, Zhang @cite_0 use spatially variant RNNs @cite_19 to remove blur in feature space, with the RNN weights generated by a neural network. However, compared with video-based methods, the accuracy of the RNN weights is severely limited by having only a single blurry image as input. To reduce the difficulty of restoration and ensure color consistency, Noroozi @cite_28 build skip connections between the input and output. The adversarial loss is used in @cite_44 @cite_26 to generate sharper images with more details.
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_26", "@cite_22", "@cite_28", "@cite_55", "@cite_3", "@cite_56", "@cite_44", "@cite_43", "@cite_0", "@cite_19" ], "mid": [ "", "2786875726", "", "2564023417", "2579111433", "", "2172275395", "", "2560533888", "1916935112", "2798735168", "2519373937" ], "abstract": [ "", "In single image deblurring, the \"coarse-to-fine\" scheme, i.e. gradually restoring the sharp image on different resolutions in a pyramid, is very successful in both traditional optimization-based methods and recent neural-network-based approaches. In this paper, we investigate this strategy and propose a Scale-recurrent Network (SRN-DeblurNet) for this deblurring task. Compared with the many recent learning-based approaches in [25], it has a simpler network structure, a smaller number of parameters and is easier to train. We evaluate our method on large-scale deblurring datasets with complex motion. Results show that our method can produce better quality results than state-of-the-arts, both quantitatively and qualitatively.", "", "Removing pixel-wise heterogeneous motion blur is challenging due to the ill-posed nature of the problem. The predominant solution is to estimate the blur kernel by adding a prior, but extensive literature on the subject indicates the difficulty in identifying a prior which is suitably informative, and general. Rather than imposing a prior based on theory, we propose instead to learn one from the data. Learning a prior over the latent image would require modeling all possible image content. The critical observation underpinning our approach, however, is that learning the motion flow instead allows the model to focus on the cause of the blur, irrespective of the image content. This is a much easier learning task, but it also avoids the iterative process through which latent image priors are typically applied. 
Our approach directly estimates the motion flow from the blurred image through a fully-convolutional deep neural network (FCN) and recovers the unblurred image from the estimated motion flow. Our FCN is the first universal end-to-end mapping from the blurred image to the dense motion flow. To train the FCN, we simulate motion flows to generate synthetic blurred-image-motion-flow pairs thus avoiding the need for human labeling. Extensive experiments on challenging realistic blurred images demonstrate that the proposed method outperforms the state-of-the-art.", "We propose a deep learning approach to remove motion blur from a single image captured in the wild, i.e., in an uncontrolled setting. Thus, we consider motion blur degradations that are due to both camera and object motion, and by occlusion and coming into view of objects. In this scenario, a model-based approach would require a very large set of parameters, whose fitting is a challenge on its own. Hence, we take a data-driven approach and design both a novel convolutional neural network architecture and a dataset for blurry images with ground truth. The network produces directly the sharp image as output and is built into three pyramid stages, which allow to remove blur gradually from a small amount, at the lowest scale, to the full amount, at the scale of the input image. To obtain corresponding blurry and sharp image pairs, we use videos from a high frame-rate video camera. For each small video clip we select the central frame as the sharp image and use the frame average as the corresponding blurred image. Finally, to ensure that the averaging process is a sufficient approximation to real blurry images we estimate optical flow and select frames with pixel displacements smaller than a pixel. 
We demonstrate state of the art performance on datasets with both synthetic and real images.", "", "Learning good image priors is of utmost importance for the study of vision, computer vision and image processing applications. Learning priors and optimizing over whole images can lead to tremendous computational challenges. In contrast, when we work with small image patches, it is possible to learn priors and perform patch restoration very efficiently. This raises three questions - do priors that give high likelihood to the data also lead to good performance in restoration? Can we use such patch based priors to restore a full image? Can we learn better patch priors? In this work we answer these questions. We compare the likelihood of several patch models and show that priors that give high likelihood to data perform better in patch restoration. Motivated by this result, we propose a generic framework which allows for whole image restoration using any patch based prior for which a MAP (or approximate MAP) estimate can be calculated. We show how to derive an appropriate cost function, how to optimize it and how to use it to restore whole images. Finally, we present a generic, surprisingly simple Gaussian Mixture prior, learned from a set of natural images. When used with the proposed framework, this Gaussian Mixture Model outperforms all other generic prior methods for image denoising, deblurring and inpainting.", "", "Non-uniform blind deblurring for general dynamic scenes is a challenging computer vision problem as blurs arise not only from multiple object motions but also from camera shake, scene depth variation. To remove these complicated motion blurs, conventional energy optimization based methods rely on simple assumptions such that blur kernel is partially uniform or locally linear. Moreover, recent machine learning based methods also depend on synthetic blur datasets generated under these assumptions. 
This makes conventional deblurring methods fail to remove blurs where blur kernel is difficult to approximate or parameterize (e.g. object motion boundaries). In this work, we propose a multi-scale convolutional neural network that restores sharp images in an end-to-end manner where blur is caused by various sources. Together, we present multi-scale loss function that mimics conventional coarse-to-fine approaches. Furthermore, we propose a new large-scale dataset that provides pairs of realistic blurry image and the corresponding ground truth sharp image that are obtained by a high-speed camera. With the proposed model trained on this dataset, we demonstrate empirically that our method achieves the state-of-the-art performance in dynamic scene deblurring not only qualitatively, but also quantitatively.", "In this paper, we address the problem of estimating and removing non-uniform motion blur from a single blurry image. We propose a deep learning approach to predicting the probabilistic distribution of motion blur at the patch level using a convolutional neural network (CNN). We further extend the candidate set of motion kernels predicted by the CNN using carefully designed image rotations. A Markov random field model is then used to infer a dense non-uniform motion blur field enforcing motion smoothness. Finally, motion blur is removed by a non-uniform deblurring model using patch-level image prior. Experimental evaluations show that our approach can effectively estimate and remove complex non-uniform motion blur that is not handled well by previous approaches.", "Due to the spatially variant blur caused by camera shake and object motions under different scene depths, deblurring images captured from dynamic scenes is challenging. Although recent works based on deep neural networks have shown great progress on this problem, their models are usually large and computationally expensive. 
In this paper, we propose a novel spatially variant neural network to address the problem. The proposed network is composed of three deep convolutional neural networks (CNNs) and a recurrent neural network (RNN). RNN is used as a deconvolution operator performed on feature maps extracted from the input image by one of the CNNs. Another CNN is used to learn the weights for the RNN at every location. As a result, the RNN is spatially variant and could implicitly model the deblurring process with spatially variant kernels. The third CNN is used to reconstruct the final deblurred feature maps into restored image. The whole network is end-to-end trainable. Our analysis shows that the proposed network has a large receptive field even with a small model size. Quantitative and qualitative evaluations on public datasets demonstrate that the proposed method performs favorably against state-of-the-art algorithms in terms of accuracy, speed, and model size.", "In this paper, we consider numerous low-level vision problems (e.g., edge-preserving filtering and denoising) as recursive image filtering via a hybrid neural network. The network contains several spatially variant recurrent neural networks (RNN) as equivalents of a group of distinct recursive filters for each pixel, and a deep convolutional neural network (CNN) that learns the weights of RNNs. The deep CNN can learn regulations of recurrent propagation for various tasks and effectively guides recurrent propagation over an entire image. The proposed model does not need a large number of convolutional channels nor big kernels to learn features for low-level vision filters. It is significantly smaller and faster in comparison with a deep CNN based image filter. Experimental results show that many low-level vision tasks can be effectively learned and carried out in real-time by the proposed algorithm." ] }
1904.12257
2941689626
Video deblurring is a challenging task due to the spatially variant blur caused by camera shake, object motions, and depth variations, etc. Existing methods usually estimate optical flow in the blurry video to align consecutive frames or approximate blur kernels. However, they tend to generate artifacts or cannot effectively remove blur when the estimated optical flow is not accurate. To overcome the limitation of separate optical flow estimation, we propose a Spatio-Temporal Filter Adaptive Network (STFAN) for the alignment and deblurring in a unified framework. The proposed STFAN takes both blurry and restored images of the previous frame as well as the blurry image of the current frame as input, and dynamically generates the spatially adaptive filters for the alignment and deblurring. We then propose a new Filter Adaptive Convolutional (FAC) layer to align the deblurred features of the previous frame with the current frame and remove the spatially variant blur from the features of the current frame. Finally, we develop a reconstruction network which takes the fusion of two transformed features to restore the clear frames. Both quantitative and qualitative evaluation results on the benchmark datasets and real-world videos demonstrate that the proposed algorithm performs favorably against state-of-the-art methods in terms of accuracy, speed, and model size.
The kernel (filter) prediction network (KPN) has recently witnessed rapid progress in low-level vision tasks. Jia @cite_48 first propose the dynamic filter network, which consists of a filter prediction network that predicts kernels conditioned on an input image, and a dynamic filtering layer that applies the generated kernels to another input. Their method shows its effectiveness on video and stereo prediction tasks. Niklaus @cite_16 apply a kernel prediction network to video frame interpolation, merging optical flow estimation and frame synthesis into a unified framework. To alleviate the demand for memory, they subsequently propose separable convolution @cite_13 , which estimates two separable 1D kernels to approximate 2D kernels. In @cite_8 , they utilize a KPN for both burst frame alignment and denoising, using the same predicted kernels. @cite_34 reconstructs high-resolution images from low-resolution inputs using generated dynamic upsampling filters. However, all the above methods directly apply the predicted kernels (filters) in the image domain. In addition, Wang @cite_41 propose a spatial feature transform (SFT) layer for image super-resolution. It generates transformation parameters for pixel-wise feature modulation, which can be considered a KPN with a kernel size of @math in the feature domain.
{ "cite_N": [ "@cite_8", "@cite_41", "@cite_48", "@cite_34", "@cite_16", "@cite_13" ], "mid": [ "2963200935", "2795824235", "2414711238", "2798664922", "2604329646", "2742605348" ], "abstract": [ "We present a technique for jointly denoising bursts of images taken from a handheld camera. In particular, we propose a convolutional neural network architecture for predicting spatially varying kernels that can both align and denoise frames, a synthetic data generation approach based on a realistic noise formation model, and an optimization guided by an annealed loss function to avoid undesirable local minima. Our model matches or outperforms the state-of-the-art across a wide range of noise levels on both real and synthetic data.", "Despite that convolutional neural networks (CNN) have recently demonstrated high-quality reconstruction for single-image super-resolution (SR), recovering natural and realistic texture remains a challenging problem. In this paper, we show that it is possible to recover textures faithful to semantic classes. In particular, we only need to modulate features of a few intermediate layers in a single network conditioned on semantic segmentation probability maps. This is made possible through a novel Spatial Feature Transform (SFT) layer that generates affine transformation parameters for spatial-wise feature modulation. SFT layers can be trained end-to-end together with the SR network using the same loss function. During testing, it accepts an input image of arbitrary size and generates a high-resolution image with just a single forward pass conditioned on the categorical priors. Our final results show that an SR network equipped with SFT can generate more realistic and visually pleasing textures in comparison to state-of-the-art SRGAN and EnhanceNet.", "In a traditional convolutional layer, the learned filters stay fixed after training. 
In contrast, we introduce a new framework, the Dynamic Filter Network, where filters are generated dynamically conditioned on an input. We show that this architecture is a powerful one, with increased flexibility thanks to its adaptive nature, yet without an excessive increase in the number of model parameters. A wide variety of filtering operations can be learned this way, including local spatial transformations, but also others like selective (de)blurring or adaptive feature extraction. Moreover, multiple such layers can be combined, e.g. in a recurrent architecture. We demonstrate the effectiveness of the dynamic filter network on the tasks of video and stereo prediction, and reach state-of-the-art performance on the moving MNIST dataset with a much smaller model. By visualizing the learned filters, we illustrate that the network has picked up flow information by only looking at unlabelled training data. This suggests that the network can be used to pretrain networks for various supervised tasks in an unsupervised way, like optical flow and depth estimation.", "Video super-resolution (VSR) has become even more important recently to provide high resolution (HR) contents for ultra high definition displays. While many deep learning based VSR methods have been proposed, most of them rely heavily on the accuracy of motion estimation and compensation. We introduce a fundamentally different framework for VSR in this paper. We propose a novel end-to-end deep neural network that generates dynamic upsampling filters and a residual image, which are computed depending on the local spatio-temporal neighborhood of each pixel to avoid explicit motion compensation. With our approach, an HR image is reconstructed directly from the input image using the dynamic upsampling filters, and the fine details are added through the computed residual. 
Our network with the help of a new data augmentation technique can generate much sharper HR videos with temporal consistency, compared with the previous methods. We also provide analysis of our network through extensive experiments to show how the network deals with motions implicitly.", "Video frame interpolation typically involves two steps: motion estimation and pixel synthesis. Such a two-step approach heavily depends on the quality of motion estimation. This paper presents a robust video frame interpolation method that combines these two steps into a single process. Specifically, our method considers pixel synthesis for the interpolated frame as local convolution over two input frames. The convolution kernel captures both the local motion between the input frames and the coefficients for pixel synthesis. Our method employs a deep fully convolutional neural network to estimate a spatially-adaptive convolution kernel for each pixel. This deep neural network can be directly trained end to end using widely available video data without any difficult-to-obtain ground-truth data like optical flow. Our experiments show that the formulation of video interpolation as a single convolution process allows our method to gracefully handle challenges like occlusion, blur, and abrupt brightness change and enables high-quality video frame interpolation.", "Standard video frame interpolation methods first estimate optical flow between input frames and then synthesize an intermediate frame guided by motion. Recent approaches merge these two steps into a single convolution process by convolving input frames with spatially adaptive kernels that account for motion and re-sampling simultaneously. These methods require large kernels to handle large motion, which limits the number of pixels whose kernels can be estimated at once due to the large memory demand. 
To address this problem, this paper formulates frame interpolation as local separable convolution over input frames using pairs of 1D kernels. Compared to regular 2D kernels, the 1D kernels require significantly fewer parameters to be estimated. Our method develops a deep fully convolutional neural network that takes two input frames and estimates pairs of 1D kernels for all pixels simultaneously. Since our method is able to estimate kernels and synthesizes the whole video frame at once, it allows for the incorporation of perceptual loss to train the neural network to produce visually pleasing frames. This deep neural network is trained end-to-end using widely available video data without any human annotation. Both qualitative and quantitative experiments show that our method provides a practical solution to high-quality video frame interpolation." ] }
1904.12527
2941609697
In this work we study the semi-supervised framework of confidence set classification with controlled expected size in minimax settings. We obtain semi-supervised minimax rates of convergence under the margin assumption and a Hölder condition on the regression function. Besides, we show that if no further assumptions are made, there is no supervised method that outperforms the semi-supervised estimator proposed in this work. We establish that the best achievable rate for any supervised method is n^{-1/2}, even if the margin assumption is extremely favorable. On the contrary, semi-supervised estimators can achieve faster rates of convergence provided that sufficiently many unlabeled samples are available. We additionally perform numerical evaluation of the proposed algorithms empirically confirming our theoretical findings.
The confidence set approach to classification was pioneered by @cite_9 @cite_13 @cite_8 by means of conformal prediction theory. They rely on non-conformity measures based on pattern recognition methods, and develop an asymptotic theory. In this work, we take a statistical perspective on confidence set classification and focus on non-asymptotic minimax theory.
{ "cite_N": [ "@cite_9", "@cite_13", "@cite_8" ], "mid": [ "1718891981", "2130104753", "1553101044" ], "abstract": [ "Transductive Confidence Machine (TCM) is a way of converting standard machine-learning algorithms into algorithms that output predictive regions rather than point predictions. It has been shown recently that TCM is well-calibrated when used in the on-line mode: at any confidence level 1 - δ, the long-run relative frequency of errors is guaranteed not to exceed δ provided the examples are generated independently from the same probability distribution P. Therefore, the number of \"uncertain\" predictive regions (i.e., those containing more than one label) becomes the sole measure of performance. The main result of this paper is that for any probability distribution P (assumed to generate the examples), it is possible to construct a TCM (guaranteed to be well-calibrated even if the assumption is wrong) that performs asymptotically as well as the best region predictor under P.", "Transductive Confidence Machine (TCM) and its computationally efficient modification, inductive confidence machine (ICM), are ways of complementing machine-learning algorithms with practically useful measures of confidence. We show that when TCM and ICM are used in the on-line mode, their confidence measures are well-calibrated, in the sense that predictive regions at confidence level 1-δ will be wrong with relative frequency at most δ (approaching δ in the case of randomised TCM and ICM) in the long run. This is not just an asymptotic phenomenon: actually the error probability of randomised TCM and ICM is δ at every trial and errors happen independently at different trials.", "Algorithmic Learning in a Random World describes recent theoretical and experimental developments in building computable approximations to Kolmogorov's algorithmic notion of randomness. 
Based on these approximations, a new set of machine learning algorithms have been developed that can be used to make predictions and to estimate their confidence and credibility in high-dimensional spaces under the usual assumption that the data are independent and identically distributed (assumption of randomness). Another aim of this unique monograph is to outline some limits of predictions: The approach based on algorithmic theory of randomness allows for the proof of impossibility of prediction in certain situations. The book describes how several important machine learning problems, such as density estimation in high-dimensional spaces, cannot be solved if the only assumption is randomness." ] }
1904.12527
2941609697
In this work we study the semi-supervised framework of confidence set classification with controlled expected size in minimax settings. We obtain semi-supervised minimax rates of convergence under the margin assumption and a Hölder condition on the regression function. Besides, we show that if no further assumptions are made, there is no supervised method that outperforms the semi-supervised estimator proposed in this work. We establish that the best achievable rate for any supervised method is n^{-1/2}, even if the margin assumption is extremely favorable. On the contrary, semi-supervised estimators can achieve faster rates of convergence provided that sufficiently many unlabeled samples are available. We additionally perform numerical evaluation of the proposed algorithms empirically confirming our theoretical findings.
Instead of considering a fixed cost for rejection, which might be too restrictive, one may define two quantities: the probability of rejection and the probability of misclassification. In the spirit of conformal prediction, @cite_11 aims at minimizing the probability of rejection given a fixed upper bound on the probability of misclassification. In contrast, @cite_10 consider the reversed problem of minimizing the probability of misclassification given a fixed upper bound on the probability of rejection.
{ "cite_N": [ "@cite_10", "@cite_11" ], "mid": [ "2207876304", "1986593852" ], "abstract": [ "Confident prediction is highly relevant in machine learning; for example, in applications such as medical diagnoses, wrong prediction can be fatal. For classification, there already exist procedures that allow to not classify data when the confidence in their prediction is weak. This approach is known as classification with reject option. In the present paper, we provide new methodology for this approach. Predicting a new instance via a confidence set, we ensure an exact control of the probability of classification. Moreover, we show that this methodology is easily implementable and entails attractive theoretical and numerical properties.", "A framework for classification is developed with a notion of confidence. In this framework, a classifier consists of two tolerance regions in the predictor space, with a specified coverage level for each class. The classifier also produces an ambiguous region where the classification needs further investigation. Theoretical analysis reveals interesting structures of the confidence-ambiguity trade-off, and the optimal solution is characterized by extending the Neyman–Pearson lemma. We provide general estimating procedures, along with rates of convergence, based on estimates of the conditional probabilities. The method can be easily implemented with good robustness, as illustrated through theory, simulation and a data example." ] }
1904.12527
2941609697
In this work we study the semi-supervised framework of confidence set classification with controlled expected size in minimax settings. We obtain semi-supervised minimax rates of convergence under the margin assumption and a Hölder condition on the regression function. Besides, we show that if no further assumptions are made, there is no supervised method that outperforms the semi-supervised estimator proposed in this work. We establish that the best achievable rate for any supervised method is n^{-1/2}, even if the margin assumption is extremely favorable. On the contrary, semi-supervised estimators can achieve faster rates of convergence provided that sufficiently many unlabeled samples are available. We additionally perform numerical evaluation of the proposed algorithms empirically confirming our theoretical findings.
Interestingly, both approaches can be encompassed in the constrained estimation framework, where one would like to construct an estimator with some prescribed properties. These properties are typically reflected by the form of the risk, which in our case is the discrepancy measure, that is, the sum of the error and information discrepancies. Thus, both frameworks of @cite_14 @cite_3 can be seen as an extension of constrained estimation to classification problems. From the modeling point of view, we believe that the two frameworks can co-exist nicely and a particular choice depends on the considered application. The major difference between the present work and those by @cite_7 and @cite_14 is the minimax analysis which we provide here and our treatment of semi-supervised techniques.
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_3" ], "mid": [ "2514278201", "2963185791", "" ], "abstract": [ "In most classification tasks, there are observations that are ambiguous and therefore difficult to correctly label. Set-valued classifiers output sets of plausible labels rather than a single label, thereby giving a more appropriate and informative treatment to the labeling of ambiguous instances. We introduce a framework for multiclass set-valued classification, where the classifiers guarantee user-defined levels of coverage or confidence (the probability that the true label is contained in the set) while minimizing the ambiguity (the expected size of the output). We first derive oracle classifiers assuming the true distribution to be known. We show that the oracle classifiers are obtained from level sets of the functions that define the conditional probability of each class. Then we develop estimators with good asymptotic and finite sample properties. The proposed estimators build on existing single-label classifiers. The optimal classifier can sometimes output the empty set, but we provide two ...", "Multiclass classification problems such as image annotation can involve a large number of classes. In this context, confusion between classes can occur, and single label classification may be misleading. We provide in the present paper a general device that, given an unlabeled dataset and a score function defined as the minimizer of some empirical and convex risk, outputs a set of class labels, instead of a single one. Interestingly, this procedure does not require that the unlabeled dataset explores the whole classes. Even more, the method is calibrated to control the expected size of the output set while minimizing the classification risk. We show the statistical optimality of the procedure and establish rates of convergence under the Tsybakov margin condition. It turns out that these rates are linear on the number of labels. 
We apply our methodology to convex aggregation of confidence sets based on the V-fold cross validation principle also known as the superlearning principle. We illustrate the numerical performance of the procedure on real data and demonstrate in particular that with moderate expected size, w.r.t. the number of labels, the procedure provides significant improvement of the classification risk.", "" ] }
1904.12527
2941609697
In this work we study the semi-supervised framework of confidence set classification with controlled expected size in minimax settings. We obtain semi-supervised minimax rates of convergence under the margin assumption and a Hölder condition on the regression function. Besides, we show that if no further assumptions are made, there is no supervised method that outperforms the semi-supervised estimator proposed in this work. We establish that the best achievable rate for any supervised method is n^{-1/2}, even if the margin assumption is extremely favorable. On the contrary, semi-supervised estimators can achieve faster rates of convergence provided that sufficiently many unlabeled samples are available. We additionally perform a numerical evaluation of the proposed algorithms, empirically confirming our theoretical findings.
On the other hand, the confidence set estimation problem is directly related to standard classification settings. This problem has been widely studied from a theoretical point of view in the binary classification framework. @cite_1 study the statistical performance of plug-in classification rules under assumptions involving the smoothness of the regression function and the margin condition. In particular, they derive fast rates of convergence for plug-in classifiers based on local polynomial estimators and show their optimality in the minimax sense. One of the aims of the present work is to extend these results to the confidence set classification framework.
{ "cite_N": [ "@cite_1" ], "mid": [ "1996437515" ], "abstract": [ "It has been recently shown that, under the margin (or low noise) assumption, there exist classifiers attaining fast rates of convergence of the excess Bayes risk, that is, rates faster than n^{-1/2}. The work on this subject has suggested the following two conjectures: (i) the best achievable fast rate is of the order n^{-1}, and (ii) the plug-in classifiers generally converge more slowly than the classifiers based on empirical risk minimization. We show that both conjectures are not correct. In particular, we construct plug-in classifiers that can achieve not only fast, but also super-fast rates, that is, rates faster than n^{-1}. We establish minimax lower bounds showing that the obtained rates cannot be improved." ] }
1904.12527
2941609697
In this work we study the semi-supervised framework of confidence set classification with controlled expected size in minimax settings. We obtain semi-supervised minimax rates of convergence under the margin assumption and a Hölder condition on the regression function. Besides, we show that if no further assumptions are made, there is no supervised method that outperforms the semi-supervised estimator proposed in this work. We establish that the best achievable rate for any supervised method is n^{-1/2}, even if the margin assumption is extremely favorable. On the contrary, semi-supervised estimators can achieve faster rates of convergence provided that sufficiently many unlabeled samples are available. We additionally perform a numerical evaluation of the proposed algorithms, empirically confirming our theoretical findings.
Another part of our work is to provide a comparison between supervised and semi-supervised procedures. Semi-supervised methods are studied in several papers and the references therein. A simple intuition can be provided on whether or not one should expect superior performance from the semi-supervised approach. Imagine a situation where the unlabeled sample @math is so large that one can approximate @math up to any desired precision; then, if the optimal decision is independent of @math, semi-supervised estimators are not to be considered superior to supervised estimation. This is the case in a lot of classical problems of statistics, where the inference is solely governed by the behavior of the conditional distribution @math (for instance, regression or binary classification). The situation might be different once the optimal decision relies on the marginal distribution @math. In this case, as suggested by our findings, the semi-supervised approach might or might not outperform the supervised one, even in the context of the same problem. Similar conclusions were stated by @cite_15 in the context of learning under the cluster assumption.
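To make the role of the marginal distribution concrete, the following toy numpy sketch (the function names and the size-calibration rule are illustrative assumptions, not the estimator analyzed in this work) shows a plug-in set-valued classifier whose score threshold is calibrated on unlabeled data alone: controlling the expected set size only involves scores computed on unlabeled draws from the marginal of X.

```python
import numpy as np

def confidence_sets(scores, t):
    """Set-valued prediction: include every class whose score reaches t."""
    return [set(np.where(s >= t)[0]) for s in scores]

def calibrate_threshold(unlabeled_scores, target_size):
    """Choose the largest threshold whose average set size on *unlabeled*
    data meets the user's budget; no labels are needed for this step."""
    for t in sorted(np.unique(unlabeled_scores), reverse=True):
        if (unlabeled_scores >= t).sum(axis=1).mean() >= target_size:
            return t
    return unlabeled_scores.min()
```

Labeled data is still needed to fit the score function itself; the point of the sketch is that the size-control step, which drives the set-valued guarantee, can consume arbitrarily many unlabeled samples.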
{ "cite_N": [ "@cite_15" ], "mid": [ "2114718442" ], "abstract": [ "Empirical evidence shows that in favorable situations semi-supervised learning (SSL) algorithms can capitalize on the abundance of unlabeled training data to improve the performance of a learning task, in the sense that fewer labeled training data are needed to achieve a target error bound. However, in other situations unlabeled data do not seem to help. Recent attempts at theoretically characterizing SSL gains only provide partial and sometimes apparently conflicting explanations of whether, and to what extent, unlabeled data can help. In this paper, we attempt to bridge the gap between the practice and theory of semi-supervised learning. We develop a finite sample analysis that characterizes the value of unlabeled data and quantifies the performance improvement of SSL compared to supervised learning. We show that there are large classes of problems for which SSL can significantly outperform supervised learning, in finite sample regimes and sometimes also in terms of error convergence rates." ] }
1904.12306
2940557525
Whole-body Control (WBC) has emerged as an important framework in locomotion control for legged robots. However, most WBC frameworks fail to generalize beyond rigid terrains. Legged locomotion over soft terrain is difficult due to the presence of unmodeled contact dynamics that WBCs do not account for. This introduces uncertainty in locomotion and affects the stability and performance of the system. In this paper, we propose a novel soft terrain adaptation algorithm called STANCE: Soft Terrain Adaptation and Compliance Estimation. STANCE consists of a WBC that exploits the knowledge of the terrain to generate an optimal solution that is contact consistent, and an online terrain compliance estimator that provides the WBC with terrain knowledge. We validated STANCE both in simulation and experiment on the Hydraulically actuated Quadruped (HyQ) robot, and we compared it against the state-of-the-art WBC. We demonstrated the capabilities of STANCE with multiple terrains of different compliances, aggressive maneuvers, different forward velocities, and external disturbances. STANCE allowed HyQ to adapt online to terrains with different compliances (rigid and soft) without pre-tuning. HyQ was able to successfully deal with the transition between different terrains and showed the ability to differentiate between compliances under each foot.
In the same context, other works explicitly adapt to soft terrain by incorporating terrain knowledge (i.e., a contact model) into their balancing controllers. For example, Azad et al. @cite_18 proposed a momentum-based controller for balancing on soft terrain that relies on a nonlinear soft contact model. Vasilopoulos et al. @cite_32 proposed a similar hopping controller that models the terrain using a viscoplastic contact model. However, these approaches were only tested in simulation and for monopods.
{ "cite_N": [ "@cite_18", "@cite_32" ], "mid": [ "1622891528", "2793939639" ], "abstract": [ "This paper proposes a momentum-based balancing controller for robots which have non-rigid contacts with their environments. This controller regulates both linear momentum and angular momentum about the center of mass of the robot by controlling the contact forces. Compliant contact models are used to determine the contact forces at the contact points. Simulation results show the performance of the controller on a four-link planar robot standing on various compliant surfaces while unknown external forces in different directions are acting on the center of mass of the robot.", "Abstract One of the most intriguing research challenges in legged locomotion is robot performance on compliant terrains. The foot-terrain interaction is usually tackled by disregarding some of the effects of ground deformation, like permanent deformation and compaction; however this approach restricts their application to stiff environments. In this work, the foot-terrain interaction is studied, and used in developing a controller immune to terrain compliance. An impact dynamics model is developed, employing a viscoplastic extension of viscoelastic impact models, and used to study the performance of a monopod robot. To include the effects of compliance, a model of the robot that incorporates the description of the foot-terrain interaction is presented. A novel monopod controller immune to ground energy dissipation is developed, which does not require knowledge of ground parameters. The controller adapts to terrain changes quickly, successfully tackles the effects of slip during touchdown, and copes with the problems, which arise during hard impacts, as the terrain becomes stiffer. Simulation results demonstrate the validity of the developed analysis." ] }
1904.12306
2940557525
Whole-body Control (WBC) has emerged as an important framework in locomotion control for legged robots. However, most WBC frameworks fail to generalize beyond rigid terrains. Legged locomotion over soft terrain is difficult due to the presence of unmodeled contact dynamics that WBCs do not account for. This introduces uncertainty in locomotion and affects the stability and performance of the system. In this paper, we propose a novel soft terrain adaptation algorithm called STANCE: Soft Terrain Adaptation and Compliance Estimation. STANCE consists of a WBC that exploits the knowledge of the terrain to generate an optimal solution that is contact consistent, and an online terrain compliance estimator that provides the WBC with terrain knowledge. We validated STANCE both in simulation and experiment on the Hydraulically actuated Quadruped (HyQ) robot, and we compared it against the state-of-the-art WBC. We demonstrated the capabilities of STANCE with multiple terrains of different compliances, aggressive maneuvers, different forward velocities, and external disturbances. STANCE allowed HyQ to adapt online to terrains with different compliances (rigid and soft) without pre-tuning. HyQ was able to successfully deal with the transition between different terrains and showed the ability to differentiate between compliances under each foot.
In the context of locomotion planning, Grandia et al. @cite_7 indirectly adapted to soft terrain by frequency shaping the cost function of their MPC formulation. By penalizing high frequencies, they generated optimal motion plans that respect the bandwidth limitations imposed by soft terrain. This approach was tested over three types of terrain compliance; however, it was not tested during transitions from one terrain to another. It showed an improvement in the performance of the quadruped robot in both simulation and experiment, but the authors did not offer the possibility of changing the tuning parameters online, and thus could not adapt the locomotion strategy to the compliance of the terrain.
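The idea of penalizing high-frequency content in a control cost can be illustrated with a minimal numpy sketch (the cutoff, weight, and FFT-mask construction are illustrative assumptions, not the actual frequency-shaped formulation of @cite_7, which acts inside an optimal-control solver):

```python
import numpy as np

def freq_shaped_cost(u, dt, cutoff_hz, weight):
    """Quadratic control cost plus an extra penalty on the energy of u
    above cutoff_hz, obtained by high-pass filtering via an FFT mask."""
    spectrum = np.fft.rfft(u)
    freqs = np.fft.rfftfreq(len(u), dt)
    u_hi = np.fft.irfft(np.where(freqs > cutoff_hz, spectrum, 0.0), n=len(u))
    return np.sum(u**2) + weight * np.sum(u_hi**2)
```

Two input sequences with the same energy but different spectra then receive different costs: the one with content above the cutoff is penalized more, which is the mechanism that steers the optimizer toward bandwidth-respecting plans.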
{ "cite_N": [ "@cite_7" ], "mid": [ "2890276699" ], "abstract": [ "Transferring solutions found by trajectory optimization to robotic hardware remains a challenging task. When the optimization fully exploits the provided model to perform dynamic tasks, the presence of unmodeled dynamics renders the motion infeasible on the real system. Model errors cannot be only a result of model simplifications, but also naturally arise when deploying the robot in unstructured and nondeterministic environments. Predominantly, compliant contacts and actuator dynamics lead to bandwidth limitations. While classical control methods provide tools to synthesize controllers that are robust to a class of model errors, such a notion is missing in modern trajectory optimization, which is solved in the time domain. We propose frequency-shaped cost functions to achieve robust solutions in the context of optimal control for legged robots. Through simulation and hardware experiments we show that motion plans can be made compatible with bandwidth limits set by actuators and contact dynamics. The smoothness of the model predictive solutions can be continuously tuned without compromising the feasibility of the problem. Experiments with the quadrupedal robot ANYmal, which is driven by highly compliant series elastic actuators, showed significantly improved tracking performance of the planned motion, torque, and force trajectories and enabled the machine to walk robustly on terrain with unmodeled compliance." ] }
1904.12306
2940557525
Whole-body Control (WBC) has emerged as an important framework in locomotion control for legged robots. However, most WBC frameworks fail to generalize beyond rigid terrains. Legged locomotion over soft terrain is difficult due to the presence of unmodeled contact dynamics that WBCs do not account for. This introduces uncertainty in locomotion and affects the stability and performance of the system. In this paper, we propose a novel soft terrain adaptation algorithm called STANCE: Soft Terrain Adaptation and Compliance Estimation. STANCE consists of a WBC that exploits the knowledge of the terrain to generate an optimal solution that is contact consistent, and an online terrain compliance estimator that provides the WBC with terrain knowledge. We validated STANCE both in simulation and experiment on the Hydraulically actuated Quadruped (HyQ) robot, and we compared it against the state-of-the-art WBC. We demonstrated the capabilities of STANCE with multiple terrains of different compliances, aggressive maneuvers, different forward velocities, and external disturbances. STANCE allowed HyQ to adapt online to terrains with different compliances (rigid and soft) without pre-tuning. HyQ was able to successfully deal with the transition between different terrains and showed the ability to differentiate between compliances under each foot.
In contrast to the aforementioned work, other approaches relax the rigid ground assumption (hard contact constraint), but not for soft terrain adaptation purposes. For instance, Kim et al. @cite_5 implemented an approach to handle sudden changes in the rigid contact interaction. This approach relaxed the hard contact assumption in their WBC formulation by penalizing the contact interaction in the cost function rather than incorporating it as a hard constraint. For computational purposes, Neunert et al. @cite_26 and Doshi et al. @cite_14 proposed relaxing the rigid ground assumption. Neunert et al. used a soft contact model in their nonlinear MPC formulation to provide smooth gradients of the contact dynamics that can be handled more efficiently by their gradient-based solver; the soft contact model did not have a physical meaning, and the contact parameters were empirically chosen. Doshi et al. proposed a similar approach that incorporates a slack variable to expand the feasibility region of the hard constraint.
{ "cite_N": [ "@cite_5", "@cite_14", "@cite_26" ], "mid": [ "2914653242", "2951303896", "2771691050" ], "abstract": [ "Whole-body control (WBC) is a generic task-oriented control method for feedback control of loco-manipulation behaviors in humanoid robots. The combination of WBC and model-based walking controllers has been widely utilized in various humanoid robots. However, to date, the WBC method has not been employed for unsupported passive-ankle dynamic locomotion. As such, in this paper, we devise a new WBC, dubbed whole-body locomotion controller (WBLC), that can achieve experimental dynamic walking on unsupported passive-ankle biped robots. A key aspect of WBLC is the relaxation of contact constraints such that the control commands produce reduced jerk when switching foot contacts. To achieve robust dynamic locomotion, we conduct an in-depth analysis of uncertainty for our dynamic walking algorithm called time-to-velocity-reversal (TVR) planner. The uncertainty study is fundamental as it allows us to improve the control algorithms and mechanical structure of our robot to fulfill the tolerated uncertainty. In addition, we conduct extensive experimentation for: 1) unsupported dynamic balancing (i.e. in-place stepping) with a six degree-of-freedom (DoF) biped, Mercury; 2) unsupported directional walking with Mercury; 3) walking over an irregular and slippery terrain with Mercury; and 4) in-place walking with our newly designed ten-DoF viscoelastic liquid-cooled biped, DRACO. 
Overall, the main contributions of this work are on: a) achieving various modalities of unsupported dynamic locomotion of passive-ankle bipeds using a WBLC controller and a TVR planner, b) conducting an uncertainty analysis to improve the mechanical structure and the controllers of Mercury, and c) devising a whole-body control strategy that reduces movement jerk during walking.", "Planning locomotion trajectories for legged microrobots is challenging because of their complex morphology, high frequency passive dynamics, and discontinuous contact interactions with their environment. Consequently, such research is often driven by time-consuming experimental methods. As an alternative, we present a framework for systematically modeling, planning, and controlling legged microrobots. We develop a three-dimensional dynamic model of a 1.5 gram quadrupedal microrobot with complexity (e.g., number of degrees of freedom) similar to larger-scale legged robots. We then adapt a recently developed variational contact-implicit trajectory optimization method to generate feasible whole-body locomotion plans for this microrobot, and we demonstrate that these plans can be tracked with simple joint-space controllers. We plan and execute periodic gaits at multiple stride frequencies and on various surfaces. These gaits achieve high per-cycle velocities, including a maximum of 10.87 mm/cycle, which is 15% faster than previously measured velocities for this microrobot. Furthermore, we plan and execute a vertical jump of 9.96 mm, which is 78% of the microrobot's center-of-mass height. To the best of our knowledge, this is the first end-to-end demonstration of planning and tracking whole-body dynamic locomotion on a millimeter-scale legged microrobot.", "In this letter, we present a whole-body nonlinear model predictive control approach for rigid body systems subject to contacts. We use a full-dynamic system model which also includes explicit contact dynamics. 
Therefore, contact locations, sequences, and timings are not prespecified but optimized by the solver. Yet, using numerical and software engineering allows for running the nonlinear Optimal Control solver at rates up to 190 Hz on a quadruped for a time horizon of half a second. This outperforms the state-of-the-art by at least one order of magnitude. Hardware experiments in the form of periodic and nonperiodic tasks are applied to two quadrupeds with different actuation systems. The obtained results underline the performance, transferability, and robustness of the approach." ] }
1904.12306
2940557525
Whole-body Control (WBC) has emerged as an important framework in locomotion control for legged robots. However, most WBC frameworks fail to generalize beyond rigid terrains. Legged locomotion over soft terrain is difficult due to the presence of unmodeled contact dynamics that WBCs do not account for. This introduces uncertainty in locomotion and affects the stability and performance of the system. In this paper, we propose a novel soft terrain adaptation algorithm called STANCE: Soft Terrain Adaptation and Compliance Estimation. STANCE consists of a WBC that exploits the knowledge of the terrain to generate an optimal solution that is contact consistent, and an online terrain compliance estimator that provides the WBC with terrain knowledge. We validated STANCE both in simulation and experiment on the Hydraulically actuated Quadruped (HyQ) robot, and we compared it against the state-of-the-art WBC. We demonstrated the capabilities of STANCE with multiple terrains of different compliances, aggressive maneuvers, different forward velocities, and external disturbances. STANCE allowed HyQ to adapt online to terrains with different compliances (rigid and soft) without pre-tuning. HyQ was able to successfully deal with the transition between different terrains and showed the ability to differentiate between compliances under each foot.
Despite the improvement in the performance of legged robots over soft terrain in the aforementioned works, none of them offered the possibility to adapt to the terrain online. Most of the aforementioned works lack a general approach that can deal with multiple terrain compliances or with transitions between them. Perhaps the one notable work (to date) in online soft terrain adaptation was proposed by Chang et al. @cite_4, where an iterative soft terrain adaptation approach was presented. The approach relies on a non-parametric contact model that is updated simultaneously alongside an optimization-based hopping controller, and it was capable of iteratively learning the terrain interaction and supplying that knowledge to the optimal controller. However, because the learning module exploits Gaussian process regression, which is computationally expensive, the approach did not reach real-time performance and was only tested in simulation, for one leg, under one experimental condition (one terrain).
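The core ingredient of that approach, GP regression of ground forcing from state, can be sketched in a few lines of numpy (the kernel choice, length scale, and noise level are illustrative assumptions; the pipeline of @cite_4 additionally wraps such a regressor in an optimize-execute-refit loop):

```python
import numpy as np

def rbf_kernel(a, b, length_scale=0.1):
    """Squared-exponential kernel between two 1-D state arrays."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_predict(x_train, f_train, x_query, noise=1e-6):
    """Posterior mean of a zero-mean GP fit to (state, ground force) pairs."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    alpha = np.linalg.solve(K, f_train)
    return rbf_kernel(x_query, x_train) @ alpha
```

Each iteration of such a scheme would append the newly measured (state, force) pairs to the training set and refit, which is exactly where the cubic cost of the linear solve makes real-time use difficult.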
{ "cite_N": [ "@cite_4" ], "mid": [ "2737345556" ], "abstract": [ "The varied and complex dynamics of deformable terrain are significant impediments toward real-world viability of locomotive robotics, particularly for legged machines. We explore vertical jumping on granular media (GM) as a model task for legged locomotion on uncharacterized deformable terrain. By integrating Gaussian process (GP)-based regression and evaluation to estimate ground forcing as a function of state, a one-dimensional jumper acquires the ability to learn forcing profiles exerted by its environment in tandem to achieving its control objective. The GP-based dynamical model initially assumes a baseline rigid, non-compliant surface. As part of an iterative procedure, the optimizer employing this model generates an optimal control to achieve a target jump height while respecting known hardware limitations of the robot model. Trajectory and forcing data recovered from evaluation on the true GM surface model simulation is applied to train the GP, and in turn, provide the optimizer a more richly informed dynamical model of the environment. After three iterations, predicted optimal control trajectories coincide with execution results, within 1.2% jumping height error, as the GP-based approximation converges to the true GM model." ] }
1904.12306
2940557525
Whole-body Control (WBC) has emerged as an important framework in locomotion control for legged robots. However, most WBC frameworks fail to generalize beyond rigid terrains. Legged locomotion over soft terrain is difficult due to the presence of unmodeled contact dynamics that WBCs do not account for. This introduces uncertainty in locomotion and affects the stability and performance of the system. In this paper, we propose a novel soft terrain adaptation algorithm called STANCE: Soft Terrain Adaptation and Compliance Estimation. STANCE consists of a WBC that exploits the knowledge of the terrain to generate an optimal solution that is contact consistent, and an online terrain compliance estimator that provides the WBC with terrain knowledge. We validated STANCE both in simulation and experiment on the Hydraulically actuated Quadruped (HyQ) robot, and we compared it against the state-of-the-art WBC. We demonstrated the capabilities of STANCE with multiple terrains of different compliances, aggressive maneuvers, different forward velocities, and external disturbances. STANCE allowed HyQ to adapt online to terrains with different compliances (rigid and soft) without pre-tuning. HyQ was able to successfully deal with the transition between different terrains and showed the ability to differentiate between compliances under each foot.
For contact compliance estimation, we need to accurately model the contact dynamics and estimate the contact parameters online. In contact modeling, Alves et al. @cite_13 presented a detailed overview of the types of parametric soft contact models used in the literature. In compliance estimation, Schindeler et al. @cite_3 used a two-stage polynomial identification approach to estimate the parameters of the Hunt-Crossley (HC) contact model online. Differently, Azad et al. @cite_11 used a least-squares (LS) based estimation algorithm and compared multiple contact models (including the Kelvin-Voigt (KV) and the HC models). Other approaches that are not based on soft contact models use force observers @cite_31 or neural networks @cite_9. The aforementioned approaches in compliance estimation were designed for robotic manipulation tasks.
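As an illustration of the LS route, a linear Kelvin-Voigt model F = k x + c x' is linear in its parameters and can be fit in closed form (a deliberately simplified stand-in of our own: the Hunt-Crossley model is nonlinear in its parameters and needs the log-linearization or two-stage polynomial schemes cited above):

```python
import numpy as np

def estimate_kv(x, xdot, f):
    """Least-squares fit of F = k*x + c*xdot from penetration x,
    penetration rate xdot, and measured normal force f."""
    A = np.column_stack([x, xdot])          # regressor matrix, linear in (k, c)
    (k, c), *_ = np.linalg.lstsq(A, f, rcond=None)
    return k, c
```

Because the problem stays linear, the same fit can be run recursively at each contact sample, which is what makes this family of estimators attractive for online use.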
{ "cite_N": [ "@cite_9", "@cite_3", "@cite_31", "@cite_13", "@cite_11" ], "mid": [ "2023291718", "2793379827", "2076234838", "2166185271", "2535684174" ], "abstract": [ "In many constrained robotic tasks, accurate identification of contact parameters by online estimation algorithms is beneficial, for purposes such as control law adaptation or environment mapping. Stiffness estimation is a key problem for tasks with low dynamics, as incorrect stiffness parameterization in control laws can induce significant deviations from the desired behavior, up to the point of instability. Sensorial requirements for traditional stiffness identification methods include precise knowledge of object positions w.r.t. the robot, which can be difficult to obtain in practice. Accurate dynamic models of the manipulator are also often necessary. This motivates the search for alternative approaches. In this paper, we propose ANNE (Artificial Neural Network Estimator), that addresses the stiffness identification problem without the need for explicit modeling and using only force based inputs. ANNE is validated in teleoperation experiments involving WAM robot interactions with real and virtual objects.", "Online environment dynamic estimates are often used for the control of robots, telerobots, and haptic systems. The nonlinear Hunt–Crossley (HC) model, which is physically consistent with the behavior of soft objects with limited deformation at a single point of contact, is being increasingly used in robotic control systems. The HC model can be identified online using a single-stage log linearization technique; however, the accuracy and applicability of the existing method is limited. We propose a two-stage polynomial identification method, which uses a quadratic approximation in the first stage to generate a linearly parameterized model of the HC dynamics (Quad-Poly). 
The coefficients of the Quad-Poly model are then used in the second stage to extract the HC parameters using a lookup table and recursive least squares parameter estimation. The proposed method is experimentally assessed against a previous natural logarithm linearization method, and further tested for time-varying environment dynamics and human-generated trajectories and for robustness against uncertainties in the measured data and system parameters.", "Abstract In this paper we propose an online stiffness estimation technique for robotic tasks based only on force data, therefore, not requiring contact position information. This allows estimations to be obtained in robotic tasks involving interactions with unstructured and unknown environments where geometrical data is unavailable or unreliable. Our technique – the Candidate Observer Based Algorithm (COBA) – uses two force observers, configured with different candidate stiffnesses, to estimate online the actual target object stiffness. COBA is embedded in a force control architecture with computed torque in the task space. The theoretical presentation of the algorithm, as well as simulation tests and experimental results with a lightweight robot arm are also presented.", "Abstract The nature of the constitutive contact force law utilized to describe contact–impact events in solid contact interfaces plays a key role in predicting the response of multibody mechanical systems and in the simulation of engineering applications. The goal of this work is to present a comparative study on the most relevant existing viscoelastic contact force models. In the sequel of this process, their fundamental characteristics are examined and their performances evaluated. 
Models developed based on the Hertz contact theory and augmented with a damping term to accommodate the dissipation of energy during the impact process, which typically is a function of the coefficient of restitution between the contacting solids, are considered in this study. In particular, the identified contact force models are compared in the present study for simple solid impact problems with the sole purpose of comparing the performance of the various models and examining the corresponding system behavior. The outcomes indicate that the prediction of the dynamic behavior of contacting solids strongly depends on the selection of the contact force model.", "This paper proposes a method to realize desired contact normal forces between humanoids and their compliant environment. By using contact models, desired contact forces are converted to desired deformations of compliant surfaces. To achieve desired forces, deformations are controlled by controlling the contact point positions. Parameters of contact models are assumed to be known or estimated using the approach described in this paper. The proposed methods for estimating the contact parameters and controlling the contact normal force are implemented on a LWR KUKA IV arm. To verify both methods, experiments are performed with the KUKA arm while its end-effector is in contact with two different soft objects." ] }
1904.12306
2940557525
Whole-body Control (WBC) has emerged as an important framework in locomotion control for legged robots. However, most WBC frameworks fail to generalize beyond rigid terrains. Legged locomotion over soft terrain is difficult due to the presence of unmodeled contact dynamics that WBCs do not account for. This introduces uncertainty in locomotion and affects the stability and performance of the system. In this paper, we propose a novel soft terrain adaptation algorithm called STANCE: Soft Terrain Adaptation and Compliance Estimation. STANCE consists of a WBC that exploits the knowledge of the terrain to generate an optimal solution that is contact consistent, and an online terrain compliance estimator that provides the WBC with terrain knowledge. We validated STANCE both in simulation and experiment on the Hydraulically actuated Quadruped (HyQ) robot, and we compared it against the state-of-the-art WBC. We demonstrated the capabilities of STANCE with multiple terrains of different compliances, aggressive maneuvers, different forward velocities, and external disturbances. STANCE allowed HyQ to adapt online to terrains with different compliances (rigid and soft) without pre-tuning. HyQ was able to successfully deal with the transition between different terrains and showed the ability to differentiate between compliances under each foot.
To date, the only work on compliance estimation in legged locomotion is that by Bosworth et al. @cite_1. The authors presented two online (in-situ) approaches to estimate ground properties (stiffness and friction). The results were promising, and the approaches were validated on a quadruped robot while hopping over rigid and soft terrain. However, the estimated stiffness showed the right trend but was not accurate: the lab measurements of the terrain stiffness did not match the in-situ ones. Moreover, although the estimation algorithms could run online, the robot had to stop what it was doing to perform the estimation.
{ "cite_N": [ "@cite_1" ], "mid": [ "2411824964" ], "abstract": [ "Dynamic behavior of legged robots is strongly affected by ground impedance. Empirical observation of robot hardware is needed because ground impedance and foot-ground interaction is challenging to predict in simulation. This paper presents experimental data of the MIT Super Mini Cheetah robot hopping on hard and soft ground. We show that controllers tuned for each surface perform better for each specific surface type, assessing performance using measurements of 1.) stability of the robot in response to self-disturbances applied by the robot onto itself and 2.) the peak accelerations of the robot that occur during ground impact, which should be minimized to reduce mechanical stress. To aid in controller selection on different ground types, we show that the robot can measure ground stiffness and friction in-situ by measuring its own interaction with the ground. To motivate future work in variable-terrain control and in-situ ground measurement, we show preliminary results of running gaits that transition between hard and soft ground." ] }
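The in-situ stiffness estimation discussed above can be illustrated with a minimal sketch: a least-squares fit of a linear spring-damper contact model to logged foot-contact data. The model, parameter values, and noise level below are illustrative assumptions, not the estimator actually used in @cite_1 :

```python
import numpy as np

# Toy in-situ compliance estimation: assume a linear spring-damper
# contact model  F = K * p + D * pdot, where p is foot penetration.
# Given logged samples of (p, pdot, F), recover K and D by least squares.

rng = np.random.default_rng(0)
K_true, D_true = 8000.0, 120.0           # ground stiffness [N/m], damping [N*s/m]
p    = rng.uniform(0.001, 0.02, 200)     # penetration depth samples [m]
pdot = rng.uniform(-0.5, 0.5, 200)       # penetration rate samples [m/s]
F    = K_true * p + D_true * pdot + rng.normal(0.0, 1.0, 200)  # noisy force [N]

A = np.column_stack([p, pdot])           # regressor matrix
(K_est, D_est), *_ = np.linalg.lstsq(A, F, rcond=None)
print(K_est, D_est)
```

Because the model is linear in the unknowns, such a fit can run online from a short window of contact samples, which is why in-situ estimation of this kind is attractive for legged robots.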
1904.12589
2940497977
In this paper, we propose a novel deep learning architecture for joint classification and localization of abnormalities in mammograms. We first assume a weakly supervised setting and present a new approach with data driven decisions. This novel network combines two learning branches with region-level classification and region ranking. The network provides a global classification of the image into multiple classes, such as malignant, benign or normal. Our method further enables the localization of abnormalities as global class discriminative regions in full mammogram resolution. Next, we extend this method to a semi-supervised setting that engages a small set of local annotations, using a novel architecture, and a multi-task objective function. We present the impact of the local annotations on several performance measures, including localization, to evaluate the cost effectiveness of lesion annotation effort. Our evaluation is made over a large multi-center mammography dataset of @math 3,000 mammograms with various findings. Experimental results demonstrate the capabilities and advantages of the proposed method over previous weakly-supervised strategies, and the impact of semi-supervised learning. We show that targeting the annotation of only 5 of the images can significantly boost performance.
Hwang et al. @cite_29 also took an image-based approach, using a CNN with two whole-image classification branches that shared convolution layers. One branch used fully connected layers, and the second used @math convolution layers, producing one map per class followed by global max pooling on each map. Their method yielded a low AUROC of 0.65 over 332 MIAS mammograms.
{ "cite_N": [ "@cite_29" ], "mid": [ "2524608787" ], "abstract": [ "Recent advances of deep learning have achieved remarkable performances in various computer vision tasks including weakly supervised object localization. Weakly supervised object localization is practically useful since it does not require fine-grained annotations. Current approaches overcome the difficulties of weak supervision via transfer learning from pre-trained models on large-scale general images such as ImageNet. However, they cannot be utilized for medical image domain in which do not exist such priors. In this work, we present a novel weakly supervised learning framework for lesion localization named as self-transfer learning (STL). STL jointly optimizes both classification and localization networks to help the localization network focus on correct lesions without any types of priors. We evaluate STL framework over chest X-rays and mammograms, and achieve significantly better localization performance compared to previous weakly supervised localization approaches." ] }
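The class-map branch described above can be sketched in NumPy: a 1x1 convolution is a per-pixel linear map over channels producing one score map per class, and global max pooling reduces each map to a class logit. All shapes and weights below are hypothetical stand-ins, not the network of @cite_29 :

```python
import numpy as np

# Sketch of a class-map branch: 1x1 conv -> one map per class -> global max pool.
rng = np.random.default_rng(1)
C, H, W, n_classes = 16, 8, 8, 3
features = rng.normal(size=(C, H, W))    # backbone feature maps
w = rng.normal(size=(n_classes, C))      # 1x1 conv weights
b = np.zeros(n_classes)                  # 1x1 conv biases

# A 1x1 convolution is a per-pixel linear map over the channel axis
class_maps = np.einsum('kc,chw->khw', w, features) + b[:, None, None]

# Global max pooling: each class logit is the max response over space
logits = class_maps.reshape(n_classes, -1).max(axis=1)

# The argmax location of each map doubles as a coarse localization cue
peak_idx = class_maps.reshape(n_classes, -1).argmax(axis=1)
print(logits.shape, peak_idx.shape)
```

The appeal of this design for weak supervision is that the same map that yields the image-level logit also indicates where in the image the evidence for that class is strongest.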
1904.12589
2940497977
In this paper, we propose a novel deep learning architecture for joint classification and localization of abnormalities in mammograms. We first assume a weakly supervised setting and present a new approach with data driven decisions. This novel network combines two learning branches with region-level classification and region ranking. The network provides a global classification of the image into multiple classes, such as malignant, benign or normal. Our method further enables the localization of abnormalities as global class discriminative regions in full mammogram resolution. Next, we extend this method to a semi-supervised setting that engages a small set of local annotations, using a novel architecture, and a multi-task objective function. We present the impact of the local annotations on several performance measures, including localization, to evaluate the cost effectiveness of lesion annotation effort. Our evaluation is made over a large multi-center mammography dataset of @math 3,000 mammograms with various findings. Experimental results demonstrate the capabilities and advantages of the proposed method over previous weakly-supervised strategies, and the impact of semi-supervised learning. We show that targeting the annotation of only 5 of the images can significantly boost performance.
These methods deal with the fusion of weak labels and a subset of data with local annotations, namely fully labeled (also known as strongly labeled) data. @cite_30 recently proposed a method for training Fast RCNN @cite_35 via Expectation-Maximization (EM). Focusing on the detection problem, they treated instance-level labels as missing data for weakly annotated images. Their method alternated between two steps: 1) E-step: estimating a probability distribution over all possible latent locations, and 2) M-step: updating a CNN using the locations estimated in the last E-step. In the M-step, they optimize the sum of a region-level likelihood function over the fully supervised images and the estimate from the E-step. Their method was applied to non-medical (natural) images, and in practice the quality of the final solution depended heavily on initialization by another method ( @cite_24 , which we compare our method with). Furthermore, their approach required thousands of Fast RCNN training iterations at each M-step, which is computationally expensive, particularly for large images such as mammograms.
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_24" ], "mid": [ "2592384071", "", "2963603913" ], "abstract": [ "Object detection when provided image-level labels instead of instance-level labels (i.e., bounding boxes) during training is an important problem in computer vision, since large scale image datasets with instance-level labels are extremely costly to obtain. In this paper, we address this challenging problem by developing an Expectation-Maximization (EM) based object detection method using deep convolutional neural networks (CNNs). Our method is applicable to both the weakly-supervised and semi-supervised settings. Extensive experiments on PASCAL VOC 2007 benchmark show that (1) in the weakly supervised setting, our method provides significant detection performance improvement over current state-of-the-art methods, (2) having access to a small number of strongly (instance-level) annotated images, our method can almost match the performace of the fully supervised Fast RCNN. We share our source code at this https URL", "", "Weakly supervised learning of object detection is an important problem in image understanding that still does not have a satisfactory solution. In this paper, we address this problem by exploiting the power of deep convolutional neural networks pre-trained on large-scale image-level classification tasks. We propose a weakly supervised deep detection architecture that modifies one such network to operate at the level of image regions, performing simultaneously region selection and classification. Trained as an image classifier, the architecture implicitly learns object detectors that are better than alternative weakly supervised detection systems on the PASCAL VOC data. The model, which is a simple and elegant end-to-end architecture, outperforms standard data augmentation and fine-tuning techniques for the task of image-level classification as well." ] }
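The E-step/M-step alternation described above can be illustrated with a toy hard-EM loop on synthetic region features. This is a deliberate simplification: a linear region scorer stands in for the Fast RCNN of the cited work, and a crude mean-difference initialization stands in for the external initialization method; all data are synthetic.

```python
import numpy as np

# Toy hard-EM for weakly supervised localization. Positive images contain
# an object in exactly one of several candidate regions (the latent
# variable); negative images contain none.
rng = np.random.default_rng(2)
n_pos, n_neg, n_regions, d = 30, 30, 5, 4
pos = rng.normal(size=(n_pos, n_regions, d))
pos[:, 0, :] += 2.0                        # region 0 holds the object
neg = rng.normal(size=(n_neg, n_regions, d))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

# Initialization (stands in for "initialization by another method")
w = pos.mean(axis=(0, 1)) - neg.mean(axis=(0, 1))

for _ in range(20):
    # E-step: pick the highest-scoring region in each positive image
    sel = (pos @ w).argmax(axis=1)
    x_pos = pos[np.arange(n_pos), sel]     # selected positive regions
    x_neg = neg.reshape(-1, d)             # all negative regions
    # M-step: one logistic-regression gradient ascent step
    grad = ((1 - sigmoid(x_pos @ w)) @ x_pos) / n_pos \
         - (sigmoid(x_neg @ w) @ x_neg) / len(x_neg)
    w += 0.5 * grad

acc = float(((pos @ w).argmax(axis=1) == 0).mean())
print(acc)
```

Even this toy version shows the dependence on initialization noted in the paragraph: with a poor starting scorer, the E-step locks onto wrong regions and the M-step reinforces them.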
1904.12589
2940497977
In this paper, we propose a novel deep learning architecture for joint classification and localization of abnormalities in mammograms. We first assume a weakly supervised setting and present a new approach with data driven decisions. This novel network combines two learning branches with region-level classification and region ranking. The network provides a global classification of the image into multiple classes, such as malignant, benign or normal. Our method further enables the localization of abnormalities as global class discriminative regions in full mammogram resolution. Next, we extend this method to a semi-supervised setting that engages a small set of local annotations, using a novel architecture, and a multi-task objective function. We present the impact of the local annotations on several performance measures, including localization, to evaluate the cost effectiveness of lesion annotation effort. Our evaluation is made over a large multi-center mammography dataset of @math 3,000 mammograms with various findings. Experimental results demonstrate the capabilities and advantages of the proposed method over previous weakly-supervised strategies, and the impact of semi-supervised learning. We show that targeting the annotation of only 5 of the images can significantly boost performance.
In @cite_31 , Cinbis et al. suggested a multiple-instance learning (MIL) approach for weakly supervised detection. They proposed to extend their method to a semi-supervised setting by replacing the top regions selected by MIL with the ground-truth regions when training on fully supervised images.
{ "cite_N": [ "@cite_31" ], "mid": [ "2133324800" ], "abstract": [ "Object category localization is a challenging problem in computer vision. Standard supervised training requires bounding box annotations of object instances. This time-consuming annotation process is sidestepped in weakly supervised learning. In this case, the supervised information is restricted to binary labels that indicate the absence presence of object instances in the image, without their locations. We follow a multiple-instance learning approach that iteratively trains the detector and infers the object locations in the positive training images. Our main contribution is a multi-fold multiple instance learning procedure, which prevents training from prematurely locking onto erroneous object locations. This procedure is particularly important when using high-dimensional representations, such as Fisher vectors and convolutional neural network features. We also propose a window refinement method, which improves the localization accuracy by incorporating an objectness prior. We present a detailed experimental evaluation using the PASCAL VOC 2007 dataset, which verifies the effectiveness of our approach." ] }
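The substitution rule described above is simple enough to state as a few lines of code; the function name, scores, and indices below are illustrative, not from @cite_31 :

```python
import numpy as np

# Semi-supervised region selection: the top-scoring region from MIL is
# used for weakly labeled images, while the ground-truth region overrides
# it for fully labeled ones.

def training_region(scores, gt_region=None):
    """scores: per-region detector scores for one positive image.
    gt_region: index of the annotated region if this image is fully
    labeled; None for weakly labeled images."""
    if gt_region is not None:        # fully supervised image
        return gt_region
    return int(np.argmax(scores))    # weakly supervised: MIL selection

scores = np.array([0.2, 0.9, 0.1])
print(training_region(scores))                 # -> 1 (MIL top region)
print(training_region(scores, gt_region=2))    # -> 2 (ground truth wins)
```

The design choice is that strong labels, when available, simply pin down the latent variable that MIL would otherwise have to infer.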
1904.12589
2940497977
In this paper, we propose a novel deep learning architecture for joint classification and localization of abnormalities in mammograms. We first assume a weakly supervised setting and present a new approach with data driven decisions. This novel network combines two learning branches with region-level classification and region ranking. The network provides a global classification of the image into multiple classes, such as malignant, benign or normal. Our method further enables the localization of abnormalities as global class discriminative regions in full mammogram resolution. Next, we extend this method to a semi-supervised setting that engages a small set of local annotations, using a novel architecture, and a multi-task objective function. We present the impact of the local annotations on several performance measures, including localization, to evaluate the cost effectiveness of lesion annotation effort. Our evaluation is made over a large multi-center mammography dataset of @math 3,000 mammograms with various findings. Experimental results demonstrate the capabilities and advantages of the proposed method over previous weakly-supervised strategies, and the impact of semi-supervised learning. We show that targeting the annotation of only 5 of the images can significantly boost performance.
Another line of studies first uses a large data set of fully labeled data with lesion annotations to train a region-based classifier. At a subsequent stage, the model is modified for whole-image input (usually decomposed into regions) and fine-tuned on the weakly labeled data to create a weakly labeled classifier @cite_18 @cite_14 @cite_1 . However, these methods rely strongly on local annotations and need a sufficiently large fully labeled data set to initialize the model. They are unable to train purely on weakly labeled mammograms and often lack detection capability (except @cite_18 , whose detection is based on instance labels). In our approach the local annotations are used as auxiliary data, and our model can be trained with a small fully annotated data set, relying mostly on weak labels. Given the annotation cost in many medical domains, we believe this approach offers a competitive edge.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_1" ], "mid": [ "2964189045", "2751658105", "2736374171" ], "abstract": [ "In the last two decades, Computer Aided Detection (CAD) systems were developed to help radiologists analyse screening mammograms, however benefits of current CAD technologies appear to be contradictory, therefore they should be improved to be ultimately considered useful. Since 2012, deep convolutional neural networks (CNN) have been a tremendous success in image recognition, reaching human performance. These methods have greatly surpassed the traditional approaches, which are similar to currently used CAD solutions. Deep CNN-s have the potential to revolutionize medical image analysis. We propose a CAD system based on one of the most successful object detection frameworks, Faster R-CNN. The system detects and classifies malignant or benign lesions on a mammogram without any human intervention. The proposed method sets the state of the art classification performance on the public INbreast database, AUC = 0.95. The approach described here has achieved 2nd place in the Digital Mammography DREAM Challenge with AUC = 0.85. When used as a detector, the system reaches high sensitivity with very few false positive marks per image on the INbreast dataset. Source code, the trained model and an OsiriX plugin are published online at https: github.com riblidezso frcnn_cad .", "We develop an end-to-end training algorithm for whole-image breast cancer diagnosis based on mammograms. It requires lesion annotations only at the first stage of training. After that, a whole image classifier can be trained using only image level labels. This greatly reduced the reliance on lesion annotations. Our approach is implemented using an all convolutional design that is simple yet provides superior performance in comparison with the previous methods. On DDSM, our best single-model achieves a per-image AUC score of 0.88 and three-model averaging increases the score to 0.91. On INbreast, our best single-model achieves a per-image AUC score of 0.96. Using DDSM as benchmark, our models compare favorably with the current state-of-the-art. We also demonstrate that a whole image model trained on DDSM can be easily transferred to INbreast without using its lesion annotations and using only a small amount of training data. Code availability: this https URL", "Screening mammography is an important front-line tool for the early detection of breast cancer, and some 39 million exams are conducted each year in the United States alone. Here, we describe a multi-scale convolutional neural network (CNN) trained with a curriculum learning strategy that achieves high levels of accuracy in classifying mammograms. Specifically, we first train CNN-based patch classifiers on segmentation masks of lesions in mammograms, and then use the learned features to initialize a scanning-based model that renders a decision on the whole image, trained end-to-end on outcome data. We demonstrate that our approach effectively handles the “needle in a haystack” nature of full-image mammogram classification, achieving 0.92 AUROC on the DDSM dataset." ] }
1904.12274
2901373116
Subspace clustering is a problem of exploring the low-dimensional subspaces of high-dimensional data. State-of-the-art approaches are designed by following the model of spectral clustering-based method. These methods pay much attention to learn the representation matrix to construct a suitable similarity matrix and overlook the influence of the noise term on subspace clustering. However, the real data are always contaminated by the noise and the noise usually has a complicated statistical distribution. To alleviate this problem, in this paper, we propose a subspace clustering method based on Cauchy loss function (CLF). Particularly, it uses CLF to penalize the noise term for suppressing the large noise mixed in the real data. This is due to that the CLF’s influence function has an upper bound that can alleviate the influence of a single sample, especially the sample with a large noise, on estimating the residuals. Furthermore, we theoretically prove the grouping effect of our proposed method, which means that highly correlated data can be grouped together. Finally, experimental results on five real data sets reveal that our proposed method outperforms several representative clustering methods.
Sparse Subspace Clustering (SSC) @cite_39 , the first proposed spectral-clustering-based method, aims to find the sparsest representation of each point in terms of all the other points in a union of subspaces by solving the following problem, where @math is a weighting factor that balances the two terms. @math is used to prevent the solution @math from being an identity matrix, i.e., a point cannot be reconstructed from itself. Solving for such a sparse representation is an NP-hard problem, so SSC uses the @math norm to approximate the @math norm, yielding the final objective function. SSC assumes that a point can be reconstructed using only a few points from the same subspace. When the data are drawn from independent subspaces, SSC can divide the points into their subspaces. But for real data, the representation matrix of SSC may be too sparse to capture the relationships between points in the same subspace. Based on SSC, Wang and Xu @cite_44 proposed a modified version, named Noisy Sparse Subspace Clustering (NSSC), to deal with noisy data.
{ "cite_N": [ "@cite_44", "@cite_39" ], "mid": [ "2615383372", "2003217181" ], "abstract": [ "This paper considers the problem of subspace clustering under noise. Specifically, we study the behavior of Sparse Subspace Clustering (SSC) when either adversarial or random noise is added to the unlabeled input data points, which are assumed to be in a union of low-dimensional subspaces. We show that a modified version of SSC is provably effective in correctly identifying the underlying subspaces, even with noisy data. This extends theoretical guarantee of this algorithm to more practical settings and provides justification to the success of SSC in a class of real applications.", "We propose a method based on sparse representation (SR) to cluster data drawn from multiple low-dimensional linear or affine subspaces embedded in a high-dimensional space. Our method is based on the fact that each point in a union of subspaces has a SR with respect to a dictionary formed by all other data points. In general, finding such a SR is NP hard. Our key contribution is to show that, under mild assumptions, the SR can be obtained exactly' by using l1 optimization. The segmentation of the data is obtained by applying spectral clustering to a similarity matrix built from this SR. Our method can handle noise, outliers as well as missing data. We apply our subspace clustering algorithm to the problem of segmenting multiple motions in video. Experiments on 167 video sequences show that our approach significantly outperforms state-of-the-art methods." ] }
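The display equations were stripped from the paragraph above; the standard SSC formulations from the literature, matching the "weighting factor to balance two terms" description, read:

```latex
% Ideal sparse self-expression (NP-hard):
\min_{C}\ \|C\|_{0}
\quad \text{s.t.} \quad X = XC,\; \operatorname{diag}(C) = 0
% Convex relaxation solved by SSC (noisy data, \lambda balances the terms):
\min_{C}\ \|C\|_{1} + \frac{\lambda}{2}\,\|X - XC\|_{F}^{2}
\quad \text{s.t.} \quad \operatorname{diag}(C) = 0
```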
1904.12274
2901373116
Subspace clustering is a problem of exploring the low-dimensional subspaces of high-dimensional data. State-of-the-art approaches are designed by following the model of spectral clustering-based method. These methods pay much attention to learn the representation matrix to construct a suitable similarity matrix and overlook the influence of the noise term on subspace clustering. However, the real data are always contaminated by the noise and the noise usually has a complicated statistical distribution. To alleviate this problem, in this paper, we propose a subspace clustering method based on Cauchy loss function (CLF). Particularly, it uses CLF to penalize the noise term for suppressing the large noise mixed in the real data. This is due to that the CLF’s influence function has an upper bound that can alleviate the influence of a single sample, especially the sample with a large noise, on estimating the residuals. Furthermore, we theoretically prove the grouping effect of our proposed method, which means that highly correlated data can be grouped together. Finally, experimental results on five real data sets reveal that our proposed method outperforms several representative clustering methods.
Low-Rank Representation (LRR) @cite_7 was proposed to capture the correlation structure of the data by finding a low-rank representation of the samples instead of a sparse one. The original LRR problem minimizes the rank of the representation matrix, which is hard to solve due to the discrete nature of the rank function, so LRR adopts the nuclear norm as a surrogate of the rank. Furthermore, LRR applies the @math norm to the noise term to improve robustness to noise and outliers. However, there is no theoretical analysis of why the low-rank property of the representation matrix @math matters for subspace clustering. Besides, the solution @math may be very dense and far from block-diagonal.
{ "cite_N": [ "@cite_7" ], "mid": [ "1997201895" ], "abstract": [ "In this paper, we address the subspace clustering problem. Given a set of data samples (vectors) approximately drawn from a union of multiple subspaces, our goal is to cluster the samples into their respective subspaces and remove possible outliers as well. To this end, we propose a novel objective function named Low-Rank Representation (LRR), which seeks the lowest rank representation among all the candidates that can represent the data samples as linear combinations of the bases in a given dictionary. It is shown that the convex program associated with LRR solves the subspace clustering problem in the following sense: When the data is clean, we prove that LRR exactly recovers the true subspace structures; when the data are contaminated by outliers, we prove that under certain conditions LRR can exactly recover the row space of the original data and detect the outlier as well; for data corrupted by arbitrary sparse errors, LRR can also approximately recover the row space with theoretical guarantees. Since the subspace membership is provably determined by the row space, these further imply that LRR can perform robust subspace clustering and error correction in an efficient and effective way." ] }
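The stripped LRR equations correspond to the standard formulations from the literature:

```latex
% Original (discrete) LRR problem:
\min_{Z}\ \operatorname{rank}(Z) \quad \text{s.t.} \quad X = XZ
% Convex surrogate with an l_{2,1}-penalized noise term:
\min_{Z,E}\ \|Z\|_{*} + \lambda\,\|E\|_{2,1} \quad \text{s.t.} \quad X = XZ + E
```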
1904.12274
2901373116
Subspace clustering is a problem of exploring the low-dimensional subspaces of high-dimensional data. State-of-the-art approaches are designed by following the model of spectral clustering-based method. These methods pay much attention to learn the representation matrix to construct a suitable similarity matrix and overlook the influence of the noise term on subspace clustering. However, the real data are always contaminated by the noise and the noise usually has a complicated statistical distribution. To alleviate this problem, in this paper, we propose a subspace clustering method based on Cauchy loss function (CLF). Particularly, it uses CLF to penalize the noise term for suppressing the large noise mixed in the real data. This is due to that the CLF’s influence function has an upper bound that can alleviate the influence of a single sample, especially the sample with a large noise, on estimating the residuals. Furthermore, we theoretically prove the grouping effect of our proposed method, which means that highly correlated data can be grouped together. Finally, experimental results on five real data sets reveal that our proposed method outperforms several representative clustering methods.
Least Squares Regression (LSR) @cite_15 employs the Frobenius norm to handle the representation matrix and the noise matrix simultaneously; the resulting optimization problem can be solved efficiently in closed form. The main contribution of LSR is that it encourages a grouping effect, which groups highly correlated data together.
{ "cite_N": [ "@cite_15" ], "mid": [ "1600471557" ], "abstract": [ "This paper studies the subspace segmentation problem which aims to segment data drawn from a union of multiple linear subspaces. Recent works by using sparse representation, low rank representation and their extensions attract much attention. If the subspaces from which the data drawn are independent or orthogonal, they are able to obtain a block diagonal affinity matrix, which usually leads to a correct segmentation. The main differences among them are their objective functions. We theoretically show that if the objective function satisfies some conditions, and the data are sufficiently drawn from independent subspaces, the obtained affinity matrix is always block diagonal. Furthermore, the data sampling can be insufficient if the subspaces are orthogonal. Some existing methods are all special cases. Then we present the Least Squares Regression (LSR) method for subspace segmentation. It takes advantage of data correlation, which is common in real data. LSR encourages a grouping effect which tends to group highly correlated data together. Experimental results on the Hopkins 155 database and Extended Yale Database B show that our method significantly outperforms state-of-the-art methods. Beyond segmentation accuracy, all experiments demonstrate that LSR is much more efficient." ] }
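LSR's efficiency comes from its closed-form solution. A minimal NumPy sketch of the standard formulation, min over Z of ||X - XZ||_F^2 + lam * ||Z||_F^2 with solution Z* = (X^T X + lam * I)^{-1} X^T X, also demonstrates the grouping effect; the data and lam below are synthetic and illustrative:

```python
import numpy as np

# Closed-form LSR: Z* = (X^T X + lam*I)^{-1} X^T X.
rng = np.random.default_rng(3)
n, d, lam = 40, 10, 0.1
X = rng.normal(size=(d, n))                      # columns are data points
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=d)    # two nearly identical points

G = X.T @ X                                      # (n, n) Gram matrix
Z = np.linalg.solve(G + lam * np.eye(n), G)      # closed-form representation

# Grouping effect: highly correlated points get nearly identical
# coefficient vectors (columns of Z)
diff = float(np.linalg.norm(Z[:, 0] - Z[:, 1]))
print(Z.shape, diff)
```

A single linear solve replaces the iterative optimization that SSC and LRR require, which is the practical advantage claimed for LSR.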
1904.12274
2901373116
Subspace clustering is a problem of exploring the low-dimensional subspaces of high-dimensional data. State-of-the-art approaches are designed by following the model of spectral clustering-based method. These methods pay much attention to learn the representation matrix to construct a suitable similarity matrix and overlook the influence of the noise term on subspace clustering. However, the real data are always contaminated by the noise and the noise usually has a complicated statistical distribution. To alleviate this problem, in this paper, we propose a subspace clustering method based on Cauchy loss function (CLF). Particularly, it uses CLF to penalize the noise term for suppressing the large noise mixed in the real data. This is due to that the CLF’s influence function has an upper bound that can alleviate the influence of a single sample, especially the sample with a large noise, on estimating the residuals. Furthermore, we theoretically prove the grouping effect of our proposed method, which means that highly correlated data can be grouped together. Finally, experimental results on five real data sets reveal that our proposed method outperforms several representative clustering methods.
To balance the sparsity and the low-rank property of the representation matrix, Correlation Adaptive Subspace Segmentation (CASS) @cite_47 was proposed, which regularizes the representation with the trace lasso @math (its definition can be found in @cite_47 ). Because it takes data correlation into account, it adaptively interpolates between SSC and LSR.
{ "cite_N": [ "@cite_47" ], "mid": [ "2160915541" ], "abstract": [ "This paper studies the subspace segmentation problem. Given a set of data points drawn from a union of subspaces, the goal is to partition them into their underlying subspaces they were drawn from. The spectral clustering method is used as the framework. It requires to find an affinity matrix which is close to block diagonal, with nonzero entries corresponding to the data point pairs from the same subspace. In this work, we argue that both sparsity and the grouping effect are important for subspace segmentation. A sparse affinity matrix tends to be block diagonal, with less connections between data points from different subspaces. The grouping effect ensures that the highly corrected data which are usually from the same subspace can be grouped together. Sparse Subspace Clustering (SSC), by using l1-minimization, encourages sparsity for data selection, but it lacks of the grouping effect. On the contrary, Low-Rank Representation (LRR), by rank minimization, and Least Squares Regression (LSR), by l2-regularization, exhibit strong grouping effect, but they are short in subset selection. Thus the obtained affinity matrix is usually very sparse by SSC, yet very dense by LRR and LSR. In this work, we propose the Correlation Adaptive Subspace Segmentation (CASS) method by using trace Lasso. CASS is a data correlation dependent method which simultaneously performs automatic data selection and groups correlated data together. It can be regarded as a method which adaptively balances SSC and LSR. Both theoretical and experimental results show the effectiveness of CASS." ] }
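The trace lasso and the (noisy-form) CASS objective, as stated in the trace lasso literature, are given below; the interpolation property assumes unit-norm data columns, under which the trace lasso reduces to the l1 norm when the columns are orthogonal and to the l2 norm when they are all identical:

```latex
% Trace lasso of a coefficient vector c with respect to the data X:
\Omega(c) = \bigl\| X \operatorname{Diag}(c) \bigr\|_{*}
% CASS objective for each point x_i (noisy form):
\min_{c_i}\ \tfrac{1}{2}\,\| x_i - X c_i \|_{2}^{2}
  + \lambda\,\bigl\| X \operatorname{Diag}(c_i) \bigr\|_{*}
```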
1904.12274
2901373116
Subspace clustering is a problem of exploring the low-dimensional subspaces of high-dimensional data. State-of-the-art approaches are designed by following the model of spectral clustering-based method. These methods pay much attention to learn the representation matrix to construct a suitable similarity matrix and overlook the influence of the noise term on subspace clustering. However, the real data are always contaminated by the noise and the noise usually has a complicated statistical distribution. To alleviate this problem, in this paper, we propose a subspace clustering method based on Cauchy loss function (CLF). Particularly, it uses CLF to penalize the noise term for suppressing the large noise mixed in the real data. This is due to that the CLF’s influence function has an upper bound that can alleviate the influence of a single sample, especially the sample with a large noise, on estimating the residuals. Furthermore, we theoretically prove the grouping effect of our proposed method, which means that highly correlated data can be grouped together. Finally, experimental results on five real data sets reveal that our proposed method outperforms several representative clustering methods.
Mixture of Gaussian Regression (MoG Regression) @cite_40 , the method most closely related to our work, uses a mixture of Gaussians to model the noise term, where @math is the mixing weight, @math is the mean vector, @math is the covariance matrix, and @math denotes the number of Gaussian components. Although MoG Regression performs better than a single-Gaussian model, it is only an extension of the single Gaussian and is sensitive to the number of components. Additionally, solving the resulting problem incurs a high computational cost.
{ "cite_N": [ "@cite_40" ], "mid": [ "1908089688" ], "abstract": [ "Subspace clustering is a problem of finding a multi-subspace representation that best fits sample points drawn from a high-dimensional space. The existing clustering models generally adopt different norms to describe noise, which is equivalent to assuming that the data are corrupted by specific types of noise. In practice, however, noise is much more complex. So it is inappropriate to simply use a certain norm to model noise. Therefore, we propose Mixture of Gaussian Regression (MoG Regression) for subspace clustering by modeling noise as a Mixture of Gaussians (MoG). The MoG Regression provides an effective way to model a much broader range of noise distributions. As a result, the obtained affinity matrix is better at characterizing the structure of data in real applications. Experimental results on multiple datasets demonstrate that MoG Regression significantly outperforms state-of-the-art subspace clustering methods." ] }
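A schematic form of the MoG-regression objective is sketched below; the exact regularizer on the representation matrix varies in the literature, so it is left generic as Omega(Z):

```latex
% Schematic MoG-regression objective (regularizer left generic):
\min_{Z}\ -\sum_{i=1}^{n} \log \sum_{k=1}^{K} \pi_k\,
  \mathcal{N}\!\bigl(x_i - X z_i \,\big|\, \mu_k, \Sigma_k\bigr)
  \;+\; \lambda\,\Omega(Z),
\qquad \sum_{k=1}^{K} \pi_k = 1,\ \ \pi_k \ge 0
```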
1904.12348
2940876386
In this paper, we propose a time-efficient approach to generating safe, smooth, and dynamically feasible trajectories for quadrotors in obstacle-cluttered environments. By using a uniform B-spline to represent trajectories, we transform trajectory planning into a graph search over B-spline control points in a discretized space. A highly strict convex hull property of B-splines is derived to guarantee the dynamical feasibility of the entire trajectory. A novel non-uniform kinodynamic search strategy is adopted, in which the step length is dynamically adjusted during the search according to the Euclidean signed distance field (ESDF), so the trajectory achieves a reasonable time allocation and stays away from obstacles. Non-static initial and goal states are allowed; therefore, the method can be used for online local replanning as well as global planning. Extensive simulation and hardware experiments show that our method achieves higher performance compared with the state-of-the-art method.
Trajectory generation for a quadrotor can be reduced to the generation of a time-parameterized curve, owing to the quadrotor's differential flatness @cite_5 . When collision avoidance is considered in cluttered environments, curves such as B-splines @cite_6 @cite_11 , Bézier curves @cite_1 , and piecewise polynomials @cite_13 are used to represent shape-constrained quadrotor trajectories.
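As a concrete illustration of the B-spline representation (a generic textbook formula, not code from any cited work), one segment of a uniform cubic B-spline is a fixed blend of four consecutive control points. The blending weights are non-negative and sum to one, so the curve stays inside the convex hull of its control points, which is the property later used to bound trajectory derivatives:

```python
def cubic_bspline_point(p0, p1, p2, p3, u):
    """Evaluate one segment of a uniform cubic B-spline at u in [0, 1].
    The four basis weights are non-negative and sum to 1, so the point
    lies inside the convex hull of the four control points."""
    b0 = (1 - u) ** 3 / 6.0
    b1 = (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0
    b2 = (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0
    b3 = u ** 3 / 6.0
    return tuple(b0 * a + b1 * b + b2 * c + b3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

ctrl = [(0.0, 0.0), (1.0, 2.0), (2.0, 2.0), (3.0, 0.0)]
print(cubic_bspline_point(*ctrl, 0.5))   # x = 1.5 by symmetry of the control polygon
```

In a planner, the search operates directly on the control points while the curve itself inherits smoothness and containment from the basis.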
{ "cite_N": [ "@cite_1", "@cite_6", "@cite_5", "@cite_13", "@cite_11" ], "mid": [ "2891491652", "2156852101", "2162991084", "2482392012", "2897423332" ], "abstract": [ "In this paper, we propose a framework for online quadrotor motion planning for autonomous navigation in unknown environments. Based on the onboard state estimation and environment perception, we adopt a fast marching-based path searching method to find a path on a velocity field induced by the Euclidean signed distance field (ESDF) of the map, to achieve better time allocation. We generate a flight corridor for the quadrotor to travel through by inflating the path against the environment. We represent the trajectory as piecewise Bezier curves by using Bernstein polynomial basis and formulate the trajectory generation problem as typical convex programs. By using Bezier curves, we are able to bound positions and higher order dynamics of the trajectory entirely within safe regions. The proposed motion planning method is integrated into a customized light-weight quadrotor platform and is validated by presenting fully autonomous navigation in unknown cluttered indoor and outdoor environments. We also release our code for trajectory generation as an open-source package.", "This paper presents a strategy for improving motion planning of an unmanned helicopter flying in a dense and complex city-like environment. Although Sampling Based Motion planning algorithms have shown success in many robotic problems, problems that exhibit “narrow passage” properties involving kinodynamic planning of high dimensional vehicles like aerial vehicles still present computational challenges. In this work, to solve the kinodynamic motion planning problem of an unmanned helicopter, we suggest a two step planner. In the first step, the planner explores the environment through a randomized reachability tree search using an approximate line segment model.
The resulting connecting path is converted into flight way points through a line-of-sight segmentation. In the second step, every consecutive way points are connected with B-Spline curves and these curves are repaired probabilistically to obtain a dynamically feasible path. Numerical simulations in 3D indicate the ability of the method to provide real-time solutions in dense and complex environments.", "We address the controller design and the trajectory generation for a quadrotor maneuvering in three dimensions in a tightly constrained setting typical of indoor environments. In such settings, it is necessary to allow for significant excursions of the attitude from the hover state and small angle approximations cannot be justified for the roll and pitch. We develop an algorithm that enables the real-time generation of optimal trajectories through a sequence of 3-D positions and yaw angles, while ensuring safe passage through specified corridors and satisfying constraints on velocities, accelerations and inputs. A nonlinear controller ensures the faithful tracking of these trajectories. Experimental results illustrate the application of the method to fast motion (5–10 body lengths per second) in three-dimensional slalom courses.", "We explore the challenges of planning trajectories for quadrotors through cluttered indoor environments. We extend the existing work on polynomial trajectory generation by presenting a method of jointly optimizing polynomial path segments in an unconstrained quadratic program that is numerically stable for high-order polynomials and large numbers of segments, and is easily formulated for efficient sparse computation. We also present a technique for automatically selecting the amount of time allocated to each segment, and hence the quadrotor speeds along the path, as a function of a single parameter determining aggressiveness, subject to actuator constraints.
The use of polynomial trajectories, coupled with the differentially flat representation of the quadrotor, eliminates the need for computationally intensive sampling and simulation in the high dimensional state space of the vehicle during motion planning. Our approach generates high-quality trajectories much faster than purely sampling-based optimal kinodynamic planning methods, but sacrifices the guarantee of asymptotic convergence to the global optimum that those methods provide. We demonstrate the performance of our algorithm by efficiently generating trajectories through challenging indoor spaces and successfully traversing them at speeds up to 8 m/s. A demonstration of our algorithm and flight performance is available at: http://groups.csail.mit.edu/rrg/quad_polynomial_trajectory_planning.", "In this paper, we introduce an algorithm for generating collision-free two-lane paths for unmanned vehicles used at mining sites by using quartic (degree 4) B-spline curves. Given the boundary geometry of the haul road area and the positions and orientations of the two-dimensional vehicle at the start and goal points, the algorithm automatically generates a collision-free two-lane path that satisfies the minimum turning radius constraint. Moreover, the resulting path shares the same third derivative at knots (joints), i.e., @math continuity, which guarantees a continuous rate of change of the curvature along the entire path. Examples are provided to demonstrate the effectiveness of the proposed algorithm." ] }
Many existing planning methods adopt a two-step pipeline: a collision-free path is planned first, and then the smoothness and time allocation of the resulting trajectory are optimized based on the path's shape. At the front end, sampling-based @cite_16 and search-based @cite_4 @cite_8 methods are used to plan a collision-free path. At the back end, gradient-based methods @cite_15 and several others @cite_1 @cite_12 @cite_3 are employed to guarantee smoothness and dynamical feasibility. However, these methods separate the trajectory's shape from its parameterization and are therefore susceptible to certain problems. For example, the globally optimal trajectory, or even any feasible trajectory, may lie outside the homology class of the path generated by the front-end method, which does not consider dynamics.
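The front end of such a pipeline can be as simple as a grid search. In the sketch below (our own illustration; plain breadth-first search stands in for the cited A*/ARA* searches, and the grid, start, and goal are made up), the output is a collision-free cell path that a back-end optimizer would then smooth and time-parameterize:

```python
from collections import deque

def grid_path(grid, start, goal):
    """Front-end step of the two-step pipeline: BFS over free cells
    (grid value 0) returning a collision-free path of cells, or None
    if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    q = deque([start])
    while q:
        cell = q.popleft()
        if cell == goal:                    # reconstruct path by backtracking
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                q.append((nr, nc))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],     # a wall forces a detour
        [0, 0, 0]]
path = grid_path(grid, (0, 0), (2, 0))
print(path)
```

The homology-class pitfall mentioned above corresponds to this search committing to one side of an obstacle: the back-end optimizer can only deform the path within that choice.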
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_1", "@cite_3", "@cite_15", "@cite_16", "@cite_12" ], "mid": [ "2161076907", "2583422539", "2891491652", "2587415290", "2099893201", "1971086298", "2414314951" ], "abstract": [ "In real world planning problems, time for deliberation is often limited. Anytime planners are well suited for these problems: they find a feasible solution quickly and then continually work on improving it until time runs out. In this paper we propose an anytime heuristic search, ARA*, which tunes its performance bound based on available search time. It starts by finding a suboptimal solution quickly using a loose bound, then tightens the bound progressively as time allows. Given enough time it finds a provably optimal solution. While improving its bound, ARA* reuses previous search efforts and, as a result, is significantly more efficient than other anytime search methods. In addition to our theoretical analysis, we demonstrate the practical utility of ARA* with experiments on a simulated robot kinematic arm and a dynamic path planning problem for an outdoor rover.", "", "In this paper, we propose a framework for online quadrotor motion planning for autonomous navigation in unknown environments. Based on the onboard state estimation and environment perception, we adopt a fast marching-based path searching method to find a path on a velocity field induced by the Euclidean signed distance field (ESDF) of the map, to achieve better time allocation. We generate a flight corridor for the quadrotor to travel through by inflating the path against the environment. We represent the trajectory as piecewise Bezier curves by using Bernstein polynomial basis and formulate the trajectory generation problem as typical convex programs. By using Bezier curves, we are able to bound positions and higher order dynamics of the trajectory entirely within safe regions. 
The proposed motion planning method is integrated into a customized light-weight quadrotor platform and is validated by presenting fully autonomous navigation in unknown cluttered indoor and outdoor environments. We also release our code for trajectory generation as an open-source package.", "There is extensive literature on using convex optimization to derive piece-wise polynomial trajectories for controlling differential flat systems with applications to three-dimensional flight for Micro Aerial Vehicles. In this work, we propose a method to formulate trajectory generation as a quadratic program (QP) using the concept of a Safe Flight Corridor (SFC). The SFC is a collection of convex overlapping polyhedra that models free space and provides a connected path from the robot to the goal position. We derive an efficient convex decomposition method that builds the SFC from a piece-wise linear skeleton obtained using a fast graph search technique. The SFC provides a set of linear inequality constraints in the QP allowing real-time motion planning. Because the range and field of view of the robot's sensors are limited, we develop a framework of Receding Horizon Planning , which plans trajectories within a finite footprint in the local map, continuously updating the trajectory through a re-planning process. The re-planning process takes between 50 to 300 ms for a large and cluttered map. We show the feasibility of our approach, its completeness and performance, with applications to high-speed flight in both simulated and physical experiments using quadrotors.", "Existing high-dimensional motion planning algorithms are simultaneously overpowered and underpowered. In domains sparsely populated by obstacles, the heuristics used by sampling-based planners to navigate “narrow passages” can be needlessly complex; furthermore, additional post-processing is required to remove the jerky or extraneous motions from the paths that such planners generate. 
In this paper, we present CHOMP, a novel method for continuous path refinement that uses covariant gradient techniques to improve the quality of sampled trajectories. Our optimization technique both optimizes higher-order dynamics and is able to converge over a wider range of input paths relative to previous path optimization strategies. In particular, we relax the collision-free feasibility prerequisite on input paths required by those strategies. As a result, CHOMP can be used as a standalone motion planner in many real-world planning queries. We demonstrate the effectiveness of our proposed method in manipulation planning for a 6-DOF robotic arm as well as in trajectory generation for a walking quadruped robot.", "During the last decade, sampling-based path planning algorithms, such as probabilistic roadmaps (PRM) and rapidly exploring random trees (RRT), have been shown to work well in practice and possess theoretical guarantees such as probabilistic completeness. However, little effort has been devoted to the formal analysis of the quality of the solution returned by such algorithms, e.g. as a function of the number of samples. The purpose of this paper is to fill this gap, by rigorously analyzing the asymptotic behavior of the cost of the solution returned by stochastic sampling-based algorithms as the number of samples increases. A number of negative results are provided, characterizing existing algorithms, e.g. showing that, under mild technical conditions, the cost of the solution returned by broadly used sampling-based algorithms converges almost surely to a non-optimal value. The main contribution of the paper is the introduction of new algorithms, namely, PRM* and RRT*, which are provably asymptotically optimal, i.e. such that the cost of the returned solution converges almost surely to the optimum. 
Moreover, it is shown that the computational complexity of the new algorithms is within a constant factor of that of their probabilistically complete (but not asymptotically optimal) counterparts. The analysis in this paper hinges on novel connections between stochastic sampling-based path planning algorithms and the theory of random geometric graphs.", "We present an online method for generating collision-free trajectories for autonomous quadrotor flight through cluttered environments. We consider the real-world scenario that the quadrotor aerial robot is equipped with limited sensing and operates in initially unknown environments. During flight, an octree-based environment representation is incrementally built using onboard sensors. Utilizing efficient operations in the octree data structure, we are able to generate free-space flight corridors consisting of large overlapping 3-D grids in an online fashion. A novel optimization-based method then generates smooth trajectories that both are bounded entirely within the safe flight corridor and satisfy higher order dynamical constraints. Our method computes valid trajectories within fractions of a second on a moderately fast computer, thus permitting online re-generation of trajectories for reaction to new obstacles. We build a complete quadrotor testbed with onboard sensing, state estimation, mapping, and control, and integrate the proposed method to show online navigation through complex unknown environments." ] }
Some methods consider the trajectory shape and the dynamics jointly. @cite_2 used an optimization-based method that integrated smoothness, dynamic constraints, and obstacle avoidance into the optimization cost function, but suffered from a low success rate. @cite_19 utilized a search-based approach that represented the trajectory with a B-spline and performed dynamics checks during the search process. @cite_9 proposed a primitive-based search method that guarantees dynamical feasibility, but it becomes time-inefficient when high-order control inputs are required.
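The dynamics check performed during such searches can exploit the convex hull property of uniform B-splines: the derivative of a uniform B-spline is again a B-spline whose control points are scaled differences of the original ones, so bounding those difference points bounds the entire velocity and acceleration curves. A 1-D sketch of the idea (our simplification; `dt`, `v_max`, and `a_max` are made-up values, and real planners apply this per axis in 3-D):

```python
def dynamically_feasible(ctrl_pts, dt, v_max, a_max):
    """Conservative feasibility check for a 1-D uniform B-spline: the
    velocity (acceleration) control points are scaled differences of
    the position (velocity) control points, and by the convex hull
    property bounding them bounds the whole derivative curve."""
    vel = [(b - a) / dt for a, b in zip(ctrl_pts, ctrl_pts[1:])]
    acc = [(b - a) / dt for a, b in zip(vel, vel[1:])]
    return all(abs(v) <= v_max for v in vel) and \
           all(abs(a) <= a_max for a in acc)

pts = [0.0, 0.5, 1.2, 2.0, 2.5]          # 1-D position control points
print(dynamically_feasible(pts, dt=0.5, v_max=2.0, a_max=2.0))   # True
print(dynamically_feasible(pts, dt=0.1, v_max=2.0, a_max=2.0))   # False: same shape, too fast
```

Because the check touches only control points, it is cheap enough to run at every node expansion of a kinodynamic search.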
{ "cite_N": [ "@cite_19", "@cite_9", "@cite_2" ], "mid": [ "2890926967", "2963497136", "" ], "abstract": [ "We focus on a replanning scenario for quadrotors where considering time efficiency, non-static initial state and dynamical feasibility is of great significance. We propose a real-time B-spline based kinodynamic (RBK) search algorithm, which transforms a position-only shortest path search (such as A * and Dijkstra) into an efficient kinodynamic search, by exploring the properties of B-spline parameterization. The RBK search is greedy and produces a dynamically feasible time-parameterized trajectory efficiently, which facilitates non-static initial state of the quadrotor. To cope with the limitation of the greedy search and the discretization induced by a grid structure, we adopt an elastic optimization (EO) approach as a post-optimization process, to refine the control point placement provided by the RBK search. The EO approach finds the optimal control point placement inside an expanded elastic tube which represents the free space, by solving a Quadratically Constrained Quadratic Programming (QCQP) problem. We design a receding horizon replanner based on the local control property of B-spline. A systematic comparison of our method against two state-of-the-art methods is provided. We integrate our replanning system with a monocular vision-based quadrotor and validate our performance onboard.", "In this work, we propose a search-based planning method to compute dynamically feasible trajectories for a quadrotor flying in an obstacle-cluttered environment. Our approach searches for smooth, minimum-time trajectories by exploring the map using a set of short-duration motion primitives. The primitives are generated by solving an optimal control problem and induce a finite lattice discretization on the state space which can be explored using a graph-search algorithm. 
The proposed approach is able to generate resolution-complete (i.e., optimal in the discretized space), safe, dynamically feasibility trajectories efficiently by exploiting the explicit solution of a Linear Quadratic Minimum Time problem. It does not assume a hovering initial condition and, hence, is suitable for fast online re-planning while the robot is moving. Quadrotor navigation with online re-planning is demonstrated using the proposed approach in simulation and physical experiments and comparisons with trajectory generation based on state-of-art quadratic programming are presented.", "" ] }
Time allocation is another significant factor that affects the quality of trajectories. @cite_19 refined the control points of the B-spline while keeping the total duration fixed. @cite_14 sampled segment durations until the related quadratic program was solved, but this may not scale to trajectories with many segments. Heuristic strategies can be reasonable and effective @cite_0 . For example, @cite_1 generated a time-indexed path according to obstacle density, but the final trajectories do not strictly follow the heuristic after optimization. In this paper, we handle the kinodynamic search and the time allocation of the trajectory simultaneously, which shows noticeable benefits.
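A common baseline heuristic for time allocation (our own illustration, not the ESDF-adaptive scheme proposed in this paper; the velocity and acceleration limits are made-up values) gives each segment the duration of a rest-to-rest trapezoidal velocity profile over its length:

```python
import math

def segment_time(d, v_max, a_max):
    """Duration of a rest-to-rest trapezoidal velocity profile over a
    segment of length d: accelerate, (possibly) cruise, decelerate."""
    if d >= v_max * v_max / a_max:          # long enough to reach cruise speed
        return d / v_max + v_max / a_max
    return 2.0 * math.sqrt(d / a_max)       # short segment: triangular profile

def allocate_times(waypoints, v_max=2.0, a_max=2.0):
    """Baseline heuristic time allocation for a 1-D waypoint list."""
    return [segment_time(abs(b - a), v_max, a_max)
            for a, b in zip(waypoints, waypoints[1:])]

print(allocate_times([0.0, 1.0, 5.0]))   # short triangular segment, then a cruising one
```

Heuristics like this fix the timing before optimization; the point made above is that coupling time allocation with the kinodynamic search avoids committing to such a guess.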
{ "cite_N": [ "@cite_0", "@cite_19", "@cite_14", "@cite_1" ], "mid": [ "2030005314", "2890926967", "2789286310", "2891491652" ], "abstract": [ "", "We focus on a replanning scenario for quadrotors where considering time efficiency, non-static initial state and dynamical feasibility is of great significance. We propose a real-time B-spline based kinodynamic (RBK) search algorithm, which transforms a position-only shortest path search (such as A * and Dijkstra) into an efficient kinodynamic search, by exploring the properties of B-spline parameterization. The RBK search is greedy and produces a dynamically feasible time-parameterized trajectory efficiently, which facilitates non-static initial state of the quadrotor. To cope with the limitation of the greedy search and the discretization induced by a grid structure, we adopt an elastic optimization (EO) approach as a post-optimization process, to refine the control point placement provided by the RBK search. The EO approach finds the optimal control point placement inside an expanded elastic tube which represents the free space, by solving a Quadratically Constrained Quadratic Programming (QCQP) problem. We design a receding horizon replanner based on the local control property of B-spline. A systematic comparison of our method against two state-of-the-art methods is provided. We integrate our replanning system with a monocular vision-based quadrotor and validate our performance onboard.", "We tackle the transition feasibility problem, that is the issue of determining whether there exists a feasible motion connecting two configurations of a legged robot. To achieve this we introduce CROC, a novel method for computing centroidal dynamics trajectories in multi-contact planning contexts. Our approach is based on a conservative and convex reformulation of the problem, where we represent the center of mass trajectory as a Bezier curve comprising a single free control point as a variable. 
Under this formulation, the transition problem is solved efficiently with a Linear Program (LP)of low dimension. We use this LP as a feasibility criterion, incorporated in a sampling-based contact planner, to discard efficiently unfeasible contact plans. We are thus able to produce robust contact sequences, likely to define feasible motion synthesis problems. We illustrate this application on various multi-contact scenarios featuring HRP2 and HyQ. We also show that we can use CROC to compute valuable initial guesses, used to warm-start non-linear solvers for motion generation methods. This method could also be used for the 0 and 1-Step capturability problem. The source code of CROC is available under an open source BSD-2 License.", "In this paper, we propose a framework for online quadrotor motion planning for autonomous navigation in unknown environments. Based on the onboard state estimation and environment perception, we adopt a fast marching-based path searching method to find a path on a velocity field induced by the Euclidean signed distance field (ESDF) of the map, to achieve better time allocation. We generate a flight corridor for the quadrotor to travel through by inflating the path against the environment. We represent the trajectory as piecewise Bezier curves by using Bernstein polynomial basis and formulate the trajectory generation problem as typical convex programs. By using Bezier curves, we are able to bound positions and higher order dynamics of the trajectory entirely within safe regions. The proposed motion planning method is integrated into a customized light-weight quadrotor platform and is validated by presenting fully autonomous navigation in unknown cluttered indoor and outdoor environments. We also release our code for trajectory generation as an open-source package." ] }
1904.12605
2950001907
Recommender systems are becoming more and more important in our daily lives. However, traditional recommendation methods are challenged by data sparsity and efficiency, as the numbers of users, items, and interactions between the two in many real-world applications increase rapidly. In this work, we propose a novel clustering recommender system based on node2vec technology and a rich information network, namely N2VSCDNNR, to solve these challenges. In particular, we use a bipartite network to construct the user-item network, and represent the interactions among users (or items) by the corresponding one-mode projection network. To alleviate the data sparsity problem, we enrich the network structure according to user and item categories, and construct the one-mode projection category network. Then, considering the data sparsity problem in the network, we employ node2vec to capture the complex latent relationships among users (or items) from the corresponding one-mode projection category network. Moreover, considering the dependency on parameter settings and the information loss problem in clustering methods, we use a novel spectral clustering method, based on dynamic nearest neighbors (DNN) and a novel method for automatically determining the cluster number (ADCN) that determines the cluster centers based on the normal distribution method, to cluster the users and items separately. After clustering, we propose a two-phase personalized recommendation to realize personalized item recommendation for each user. A series of experiments validates the outstanding performance of our N2VSCDNNR over several advanced embedding- and side-information-based recommendation algorithms. Meanwhile, N2VSCDNNR appears to have lower time complexity than the baseline methods in online recommendation, indicating its potential for wide application in large-scale systems.
Most clustering-based recommendation methods compute similarity from rating data and then employ a basic clustering algorithm, such as K-means, to generate groups of users (items). @cite_42 proposed the bisecting K-means clustering algorithm to divide users into multiple clusters; the nearest neighbors of a target user are then selected from the partition that the user belongs to. Puntheeranurak and Tsuji @cite_24 proposed a hybrid recommender system in which users are clustered with a fuzzy K-means algorithm, and the recommendation results for the original and the clustered data are combined to improve traditional collaborative filtering (CF) algorithms. Rana proposed a dynamic recommender system (DRS) that clusters users via an evolutionary algorithm. Wang clustered users using the K-means algorithm and then estimated the missing ratings in the user-item matrix to predict the preferences of a target user. @cite_39 paid more attention to discovering the implicit similarity among users and items: the authors first clustered the user (or item) latent factor vectors into cluster-level factor vectors, and then compressed the original approximation into a cluster-level rating pattern based on those cluster-level factor vectors.
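The partitioning step shared by these methods can be sketched with plain K-means on user rating vectors (our own toy illustration; the bisecting and fuzzy variants cited above refine this step, and the ratings below are made up). Neighbors of a target user are then searched only within the user's own cluster:

```python
import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    return tuple(sum(col) / len(pts) for col in zip(*pts))

def kmeans(points, k=2, iters=20, seed=0):
    """Plain K-means: alternate assigning points to the nearest center
    and recomputing each center as its cluster mean."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: dist2(p, centers[j]))
                  for p in points]
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centers[j] = mean(members)
    return labels, centers

# Toy user-rating vectors over two items: two users like item A, two like item B.
ratings = [(5.0, 1.0), (4.0, 1.0), (1.0, 5.0), (1.0, 4.0)]
labels, centers = kmeans(ratings)
print(labels)   # users 0,1 share one cluster; users 2,3 share the other
```

Restricting the neighbor search to one cluster is what makes these methods cheaper than scanning all users, at the cost of the cluster-boundary effects the later methods try to mitigate.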
{ "cite_N": [ "@cite_24", "@cite_42", "@cite_39" ], "mid": [ "2133790168", "", "1228575997" ], "abstract": [ "Recommender systems have become an important research area because they have been a kind of Web intelligence techniques to search through the enormous volume of information available on the Internet. Collaborative filtering and content-based methods are two most commonly used approaches in most recommender systems. Although each of them has both advantages and disadvantages in providing high quality recommendations, a hybrid recommendation mechanism incorporating components from both of the methods would yield satisfactory results in many situations. In this paper, we present an elegant and effective framework for combining content and collaboration. Our approach uses a content-based predictor to enhance existing user data and item data, and then provides personalized suggestions through user-based collaborative filtering and item-based collaborative filtering. The proposed system clusters on content-based approach and collaborative approach then it contribute to the improvement of prediction quality of a hybrid recommender system.", "", "Matrix approximation is a common model-based approach to collaborative filtering in recommender systems. Many relevant algorithms that fuse social contextual information have been proposed. They mainly focus on using the latent factor vectors of users and items to construct a low-rank approximation of the user-item rating matrix. However, due to data sparsity, it is difficult for current approaches to accurately learn every user item vector, which may cause low-quality recommendations for some users or items. But we implicitly detect some similar users or items based on the distance among the vectors. In this paper, we take advantage of the implicit similarity to improve matrix approximation. 
Borrowing parts of ideas from CodeBook Transfer, we propose a reconstructive method that compresses low-rank approximation into a cluster-level rating-pattern referred to as a codebook, and then constructs an improved approximation by expending the codebook. Experiments on real life datasets demonstrate our method improves the prediction accuracy of the state-of-the-art matrix factorization and social recommendation models." ] }
In network science, an important question is how to properly represent network information. Network representation learning, which learns low-dimensional representations for nodes or links in a network, can benefit a wide range of real-world applications, such as recommender systems @cite_1 @cite_18 @cite_33 @cite_22 @cite_23 @cite_14 @cite_16 .
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_33", "@cite_22", "@cite_1", "@cite_23", "@cite_16" ], "mid": [ "2745413481", "2736151280", "2743104969", "2798557793", "2799012401", "2735656899", "2887853521" ], "abstract": [ "Knowledge Graphs have proven to be extremely valuable to recommender systems, as they enable hybrid graph-based recommendation models encompassing both collaborative and content information. Leveraging this wealth of heterogeneous information for top-N item recommendation is a challenging task, as it requires the ability of effectively encoding a diversity of semantic relations and connectivity patterns. In this work, we propose entity2rec, a novel approach to learning user-item relatedness from knowledge graphs for top-N item recommendation. We start from a knowledge graph modeling user-item and item-item relations and we learn property-specific vector representations of users and items applying neural language models on the network. These representations are used to create property-specific user-item relatedness features, which are in turn fed into learning to rank algorithms to learn a global relatedness model that optimizes top-N item recommendations. We evaluate the proposed approach in terms of ranking quality on the MovieLens 1M dataset, outperforming a number of state-of-the-art recommender systems, and we assess the importance of property-specific relatedness scores on the overall ranking quality.", "Traditional music recommendation techniques suffer from limited performance due to the sparsity of user-music interaction data, which is addressed by incorporating auxiliary information. In this paper, we study the problem of personalized music recommendation that takes different kinds of auxiliary information into consideration. 
To achieve this goal, a Heterogeneous Information Graph (HIG) is first constructed to encode different kinds of heterogeneous information, including the interactions between users and music pieces, music playing sequences, and the metadata of music pieces. Based on HIG, a Heterogeneous Information Graph Embedding method (HIGE) is proposed to learn the latent low-dimensional representations of music pieces. Then, we further develop a context-aware music recommendation method. Extensive experiments have been conducted on real-world datasets to compare the proposed method with other state-of-the-art recommendation methods. The results demonstrate that the proposed method significantly outperforms those baselines, especially on sparse datasets.", "We study the problem of representation learning in heterogeneous networks. Its unique challenges come from the existence of multiple types of nodes and links, which limit the feasibility of the conventional network embedding techniques. We develop two scalable representation learning models, namely metapath2vec and metapath2vec++. The metapath2vec model formalizes meta-path-based random walks to construct the heterogeneous neighborhood of a node and then leverages a heterogeneous skip-gram model to perform node embeddings. The metapath2vec++ model further enables the simultaneous modeling of structural and semantic correlations in heterogeneous networks. Extensive experiments show that metapath2vec and metapath2vec++ are able to not only outperform state-of-the-art embedding models in various heterogeneous network mining tasks, such as node classification, clustering, and similarity search, but also discern the structural and semantic correlations between diverse network objects.", "With the advent of online social networks, the use of information hidden in social networks for recommendation has been extensively studied. 
Unlike previous work regarded social influence as regularization terms, we take advantage of network embedding techniques and propose an embedding based recommendation method. Specifically, we first pre-train a network embedding model on the users' social network to map each user into a low dimensional space, and then incorporate them into a matrix factorization model, which combines both latent and pre-learned features for recommendation. The experimental results on two real-world datasets indicate that our proposed model is more effective and can reach better performance than other related methods.", "This work develops a representation learning method for bipartite networks. While existing works have developed various embedding methods for network data, they have primarily focused on homogeneous networks in general and overlooked the special properties of bipartite networks. As such, these methods can be suboptimal for embedding bipartite networks. In this paper, we propose a new method named BiNE, short for Bipartite Network Embedding, to learn the vertex representations for bipartite networks. By performing biased random walks purposefully, we generate vertex sequences that can well preserve the long-tail distribution of vertices in the original bipartite network. We then propose a novel optimization framework by accounting for both the explicit relations (i.e., observed links) and implicit relations (i.e., unobserved but transitive links) in learning the vertex representations. We conduct extensive experiments on several real datasets covering the tasks of link prediction (classification), recommendation (personalized ranking), and visualization. Both quantitative results and qualitative analysis verify the effectiveness and rationality of our BiNE method.", "This paper presents a novel, graph embedding based recommendation technique. 
The method operates on the knowledge graph, an information representation technique alloying content-based and collaborative information. To generate recommendations, a two dimensional embedding is developed for the knowledge graph. As the embedding maps the users and the items to the same vector space, the recommendations are then calculated on a spatial basis. Regarding to the number of cold start cases, precision, recall, normalized Cumulative Discounted Gain and computational resource need, the evaluation shows that the introduced technique delivers a higher performance compared to collaborative filtering on top-n recommendation lists. Our further finding is that graph embedding based methods show a more stable performance in the case of an increasing amount of user preference information compared to the benchmark method.", "In the past years, knowledge graphs have proven to be beneficial for recommender systems, efficiently addressing paramount issues such as new items and data sparsity. Graph embeddings algorithms have shown to be able to automatically learn high quality feature vectors from graph structures, enabling vector-based measures of node relatedness. In this paper, we show how node2vec can be used to generate item recommendations by learning knowledge graph embeddings. We apply node2vec on a knowledge graph built from the MovieLens 1M dataset and DBpedia and use the node relatedness to generate item recommendations. The results show that node2vec consistently outperforms a set of collaborative filtering baselines on an array of relevant metrics." ] }
1904.12605
2950001907
Recommender systems are becoming more and more important in our daily lives. However, traditional recommendation methods are challenged by data sparsity and efficiency, as the numbers of users, items, and interactions between the two in many real-world applications increase rapidly. In this work, we propose N2VSCDNNR, a novel clustering-based recommender system built on node2vec and a rich information network, to address these challenges. In particular, we use a bipartite network to model user-item interactions, and represent the relationships among users (or items) by the corresponding one-mode projection network. To alleviate the data sparsity problem, we enrich the network structure according to user and item categories, and construct the one-mode projection category network. Then, we employ node2vec to capture the complex latent relationships among users (or items) from the corresponding one-mode projection category network. Moreover, to reduce the dependency on parameter settings and the information loss common in clustering methods, we use a novel spectral clustering method based on dynamic nearest-neighbors (DNN), together with a method for automatically determining the cluster number (ADCN) that locates cluster centers using the normal distribution, to cluster the users and items separately. After clustering, we propose a two-phase personalized recommendation procedure to recommend items to each user. A series of experiments validates the outstanding performance of N2VSCDNNR over several advanced embedding-based and side-information-based recommendation algorithms. Meanwhile, N2VSCDNNR appears to have lower time complexity than the baseline methods in online recommendation, indicating its potential to be widely applied in large-scale systems.
Recently, the DeepWalk algorithm @cite_3 was proposed to automatically transform each node in a network into a vector, taking full advantage of the information in random-walk sequences over the network. Another network representation learning algorithm based on a simple neural network is LINE @cite_10 , which can be applied to large-scale directed, weighted networks. Moreover, Grover and Leskovec @cite_17 suggested that increasing the flexibility in searching for adjacent nodes is the key to enhancing network feature learning. They thus proposed the node2vec algorithm, which learns low-dimensional representations for nodes by optimizing a neighborhood-preserving objective. It designs a flexible neighborhood sampling strategy, a biased random-walk procedure that can explore neighborhoods through breadth-first sampling (BFS) @cite_43 or depth-first sampling (DFS) @cite_11 . It defines a second-order random walk guided by two parameters: one controls how likely the walk is to revisit a node, and the other controls how likely it is to leave the neighborhood of the starting node. These two parameters allow the search to interpolate between BFS and DFS and thereby to reflect different notions of node equivalence.
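The biased second-order walk described above can be sketched as follows. This is a minimal illustration of node2vec's transition rule only; the function name, graph representation, and defaults are our assumptions, not the reference implementation:

```python
import random


def node2vec_walk(adj, start, length, p=1.0, q=1.0, rng=random):
    """One biased second-order random walk in the spirit of node2vec.

    adj: dict mapping node -> set of neighbor nodes (unweighted graph).
    p:   return parameter -- larger p makes revisiting the previous node less likely.
    q:   in-out parameter -- q > 1 biases toward BFS-like (local) moves,
         q < 1 toward DFS-like (outward) moves.
    """
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        neighbors = sorted(adj[cur])
        if not neighbors:
            break  # dead end: stop the walk early
        if len(walk) == 1:
            # First step has no "previous" node, so it is an unbiased choice.
            walk.append(rng.choice(neighbors))
            continue
        prev = walk[-2]
        # Unnormalized transition weights alpha_pq(prev, x):
        #   1/p if x == prev (return), 1 if x is adjacent to prev (stay close),
        #   1/q otherwise (move outward).
        weights = []
        for x in neighbors:
            if x == prev:
                weights.append(1.0 / p)
            elif x in adj[prev]:
                weights.append(1.0)
            else:
                weights.append(1.0 / q)
        walk.append(rng.choices(neighbors, weights=weights, k=1)[0])
    return walk
```

In the full algorithm, many such walks per node are fed to a skip-gram model to learn the embeddings; the sketch covers only the sampling step.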
{ "cite_N": [ "@cite_17", "@cite_3", "@cite_43", "@cite_10", "@cite_11" ], "mid": [ "2962756421", "2154851992", "2060616833", "1888005072", "2057685268" ], "abstract": [ "Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.", "We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. 
DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10 higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60 less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection.", "Networks provide a powerful way to study complex systems of interacting objects. Detecting network communities—groups of objects that often correspond to functional modules—is crucial to understanding social, technological, and biological systems. Revealing communities allows for analysis of system properties that are invisible when considering only individual objects or the entire system, such as the identification of module boundaries and relationships or the classification of objects according to their functional roles. However, in networks where objects can simultaneously belong to multiple modules at once, the decomposition of a network into overlapping communities remains a challenge. Here we present a new paradigm for uncovering the modular structure of complex networks, based on a decomposition of a network into any combination of overlapping, nonoverlapping, and hierarchically organized communities. 
We demonstrate on a diverse set of networks coming from a wide range of domains that our approach leads to more accurate communities and improved identification of community boundaries. We also unify two fundamental organizing principles of complex networks: the modularity of communities and the commonly observed core–periphery structure. We show that dense network cores form as an intersection of many overlapping communities. We discover that communities in social, information, and food web networks have a single central dominant core while communities in protein–protein interaction (PPI) as well as product copurchasing networks have small overlaps and form many local cores.", "This paper studies the problem of embedding very large information networks into low-dimensional vector spaces, which is useful in many tasks such as visualization, node classification, and link prediction. Most existing graph embedding methods do not scale for real world information networks which usually contain millions of nodes. In this paper, we propose a novel network embedding method called the LINE,'' which is suitable for arbitrary types of information networks: undirected, directed, and or weighted. The method optimizes a carefully designed objective function that preserves both the local and global network structures. An edge-sampling algorithm is proposed that addresses the limitation of the classical stochastic gradient descent and improves both the effectiveness and the efficiency of the inference. Empirical experiments prove the effectiveness of the LINE on a variety of real-world information networks, including language networks, social networks, and citation networks. The algorithm is very efficient, which is able to learn the embedding of a network with millions of vertices and billions of edges in a few hours on a typical single machine. 
The source code of the LINE is available online https: github.com tangjianpku LINE .", "Given a network, intuitively two nodes belong to the same role if they have similar structural behavior. Roles should be automatically determined from the data, and could be, for example, \"clique-members,\" \"periphery-nodes,\" etc. Roles enable numerous novel and useful network-mining tasks, such as sense-making, searching for similar nodes, and node classification. This paper addresses the question: Given a graph, how can we automatically discover roles for nodes? We propose RolX (Role eXtraction), a scalable (linear in the number of edges), unsupervised learning approach for automatically extracting structural roles from general network data. We demonstrate the effectiveness of RolX on several network-mining tasks: from exploratory data analysis to network transfer learning. Moreover, we compare network role discovery with network community discovery. We highlight fundamental differences between the two (e.g., roles generalize across disconnected networks, communities do not); and show that the two approaches are complimentary in nature." ] }
1904.12605
2950001907
Recommender systems are becoming more and more important in our daily lives. However, traditional recommendation methods are challenged by data sparsity and efficiency, as the numbers of users, items, and interactions between the two in many real-world applications increase rapidly. In this work, we propose N2VSCDNNR, a novel clustering-based recommender system built on node2vec and a rich information network, to address these challenges. In particular, we use a bipartite network to model user-item interactions, and represent the relationships among users (or items) by the corresponding one-mode projection network. To alleviate the data sparsity problem, we enrich the network structure according to user and item categories, and construct the one-mode projection category network. Then, we employ node2vec to capture the complex latent relationships among users (or items) from the corresponding one-mode projection category network. Moreover, to reduce the dependency on parameter settings and the information loss common in clustering methods, we use a novel spectral clustering method based on dynamic nearest-neighbors (DNN), together with a method for automatically determining the cluster number (ADCN) that locates cluster centers using the normal distribution, to cluster the users and items separately. After clustering, we propose a two-phase personalized recommendation procedure to recommend items to each user. A series of experiments validates the outstanding performance of N2VSCDNNR over several advanced embedding-based and side-information-based recommendation algorithms. Meanwhile, N2VSCDNNR appears to have lower time complexity than the baseline methods in online recommendation, indicating its potential to be widely applied in large-scale systems.
Then, with the development of network embedding, a number of embedding-based recommender systems have been proposed in recent years. For instance, @cite_18 proposed the entity2rec algorithm to learn user-item relatedness from knowledge graphs for item recommendation. Kiss and Filzmoser @cite_23 proposed a method that maps users and items to the same two-dimensional embedding space to make recommendations. Swami @cite_33 introduced a heterogeneous representation learning model, Metapath2vec++, which uses meta-path-based random walks to construct the heterogeneous neighborhood of a node and then leverages a heterogeneous skip-gram model to perform node embeddings; recommendations are then made based on the learned representations. @cite_1 proposed BiNE, a network embedding method for bipartite networks. By performing biased random walks purposefully, it generates node sequences that preserve the long-tail distribution of nodes in the bipartite network, and the authors make recommendations with the resulting representations. @cite_22 proposed an embedding-based recommendation method in which a network embedding model first maps each user into a low-dimensional space, and the user vectors are then incorporated into a matrix factorization model for recommendation.
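The last idea, pre-training user embeddings on a network and feeding them into matrix factorization, can be sketched as follows. This is a hypothetical minimal version that simply warm-starts the user factors with the pretrained vectors; it is not the exact model of @cite_22, and all names and hyperparameters are our assumptions:

```python
import numpy as np


def mf_with_pretrained_users(ratings, user_emb, n_items,
                             lr=0.01, reg=0.05, epochs=20, seed=0):
    """Minimal SGD matrix factorization whose user factors are
    warm-started from pretrained network embeddings.

    ratings:  list of (user, item, rating) triples.
    user_emb: (n_users, k) array of pretrained user embeddings.
    """
    rng = np.random.default_rng(seed)
    k = user_emb.shape[1]
    U = user_emb.astype(float).copy()            # user factors, warm-started
    V = 0.1 * rng.standard_normal((n_items, k))  # item factors, random init
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - U[u] @ V[i]                # prediction error on this rating
            U[u] += lr * (err * V[i] - reg * U[u])
            V[i] += lr * (err * U[u] - reg * V[i])
    return U, V
```

More faithful variants keep the pretrained vectors as fixed side features alongside free latent factors rather than overwriting the initialization, but the warm-start version already shows how the two components plug together.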
{ "cite_N": [ "@cite_18", "@cite_33", "@cite_22", "@cite_1", "@cite_23" ], "mid": [ "2745413481", "2743104969", "2798557793", "2799012401", "2735656899" ], "abstract": [ "Knowledge Graphs have proven to be extremely valuable to recommender systems, as they enable hybrid graph-based recommendation models encompassing both collaborative and content information. Leveraging this wealth of heterogeneous information for top-N item recommendation is a challenging task, as it requires the ability of effectively encoding a diversity of semantic relations and connectivity patterns. In this work, we propose entity2rec, a novel approach to learning user-item relatedness from knowledge graphs for top-N item recommendation. We start from a knowledge graph modeling user-item and item-item relations and we learn property-specific vector representations of users and items applying neural language models on the network. These representations are used to create property-specific user-item relatedness features, which are in turn fed into learning to rank algorithms to learn a global relatedness model that optimizes top-N item recommendations. We evaluate the proposed approach in terms of ranking quality on the MovieLens 1M dataset, outperforming a number of state-of-the-art recommender systems, and we assess the importance of property-specific relatedness scores on the overall ranking quality.", "We study the problem of representation learning in heterogeneous networks. Its unique challenges come from the existence of multiple types of nodes and links, which limit the feasibility of the conventional network embedding techniques. We develop two scalable representation learning models, namely metapath2vec and metapath2vec++. The metapath2vec model formalizes meta-path-based random walks to construct the heterogeneous neighborhood of a node and then leverages a heterogeneous skip-gram model to perform node embeddings. 
The metapath2vec++ model further enables the simultaneous modeling of structural and semantic correlations in heterogeneous networks. Extensive experiments show that metapath2vec and metapath2vec++ are able to not only outperform state-of-the-art embedding models in various heterogeneous network mining tasks, such as node classification, clustering, and similarity search, but also discern the structural and semantic correlations between diverse network objects.", "With the advent of online social networks, the use of information hidden in social networks for recommendation has been extensively studied. Unlike previous work regarded social influence as regularization terms, we take advantage of network embedding techniques and propose an embedding based recommendation method. Specifically, we first pre-train a network embedding model on the users' social network to map each user into a low dimensional space, and then incorporate them into a matrix factorization model, which combines both latent and pre-learned features for recommendation. The experimental results on two real-world datasets indicate that our proposed model is more effective and can reach better performance than other related methods.", "This work develops a representation learning method for bipartite networks. While existing works have developed various embedding methods for network data, they have primarily focused on homogeneous networks in general and overlooked the special properties of bipartite networks. As such, these methods can be suboptimal for embedding bipartite networks. In this paper, we propose a new method named BiNE, short for Bipartite Network Embedding, to learn the vertex representations for bipartite networks. By performing biased random walks purposefully, we generate vertex sequences that can well preserve the long-tail distribution of vertices in the original bipartite network. 
We then propose a novel optimization framework by accounting for both the explicit relations (i.e., observed links) and implicit relations (i.e., unobserved but transitive links) in learning the vertex representations. We conduct extensive experiments on several real datasets covering the tasks of link prediction (classification), recommendation (personalized ranking), and visualization. Both quantitative results and qualitative analysis verify the effectiveness and rationality of our BiNE method.", "This paper presents a novel, graph embedding based recommendation technique. The method operates on the knowledge graph, an information representation technique alloying content-based and collaborative information. To generate recommendations, a two dimensional embedding is developed for the knowledge graph. As the embedding maps the users and the items to the same vector space, the recommendations are then calculated on a spatial basis. Regarding to the number of cold start cases, precision, recall, normalized Cumulative Discounted Gain and computational resource need, the evaluation shows that the introduced technique delivers a higher performance compared to collaborative filtering on top-n recommendation lists. Our further finding is that graph embedding based methods show a more stable performance in the case of an increasing amount of user preference information compared to the benchmark method." ] }
1904.12268
2940906447
Vygotsky's notions of the Zone of Proximal Development and Dynamic Assessment emphasize the importance of personalized learning that adapts to the needs and abilities of learners and enables more efficient learning. In this work we introduce a novel adaptive learning engine called E-gostky that builds on these concepts to personalize the learning path within an e-learning system. E-gostky uses machine learning techniques to select the next content item that will challenge the student without being overwhelming, keeping students in their Zone of Proximal Development. To evaluate the system, we conducted an experiment in which hundreds of students from several elementary schools used our engine to learn fractions for five months. Our results show that using E-gostky can significantly reduce the time required to reach similar mastery. Specifically, in our experiment, students who used the adaptive learning engine took @math less time to reach a level of mastery similar to that of students who didn't. Moreover, students made greater efforts to find the correct answer rather than guessing, and class teachers reported that even students with learning disabilities showed higher engagement.
The Zone of Proximal Development (ZPD) plays a key role in the method we propose. The ZPD was introduced by Vygotsky as a part of a general analysis of the development of children. Vygotsky proposed a model of child development where each period is characterized by a set of relations among psychological functions (such as perception, speech, thinking), and the transition from one age period to another depends on a qualitative change in that set of relations @cite_33 . In this model, learning is a function not only of the child's qualities but also of her relationships with the environment.
{ "cite_N": [ "@cite_33" ], "mid": [ "2288663400" ], "abstract": [ "What kind of instruction is optimal for a particular child? Without doubt, this question is immediately comprehensible to any committed teacher in virtually any country in the world, and most teachers are likely to want concrete answers to the question, not only as a theoretical puzzle, but in relation to their immediate practices. If one were to look to scientific psychology and educational research for advice in relation to this practical problem, what would the answer(s) look like? This simple question raises several profound problems. Normative and political issues about the goals of instruction and the resources available for realizing these goals must be resolved. A theory of learning that can explain how intellectual capabilities are developed is needed. If instruction is not viewed as an end in itself, then a theory about the relationship between specific subject matter instruction and its consequences for psychological development is also needed. This last problem was the main tension against which Vygotsky developed his well-known concept of zone of proximal development , so that the zone focused on the relation between instruction and development, while being relevant to many of these other problems. Vygotsky's concept of zone of proximal development is more precise and elaborated than its common reception or interpretation. The main purpose of this chapter is to provide a comprehensive introduction to and interpretation of this concept, along with comments about predominant contemporary interpretations. The chapter concludes with some perspectives and implications derived from the interpretation presented here." ] }
1904.12268
2940906447
Vygotsky's notions of the Zone of Proximal Development and Dynamic Assessment emphasize the importance of personalized learning that adapts to the needs and abilities of learners and enables more efficient learning. In this work we introduce a novel adaptive learning engine called E-gostky that builds on these concepts to personalize the learning path within an e-learning system. E-gostky uses machine learning techniques to select the next content item that will challenge the student without being overwhelming, keeping students in their Zone of Proximal Development. To evaluate the system, we conducted an experiment in which hundreds of students from several elementary schools used our engine to learn fractions for five months. Our results show that using E-gostky can significantly reduce the time required to reach similar mastery. Specifically, in our experiment, students who used the adaptive learning engine took @math less time to reach a level of mastery similar to that of students who didn't. Moreover, students made greater efforts to find the correct answer rather than guessing, and class teachers reported that even students with learning disabilities showed higher engagement.
The ZPD in Vygotsky's theory is used to identify which psychological functions (and related social interactions) are needed for transitioning to the next age period, and to assess the current status of the child's maturing functions @cite_33 . Since performance in these psychological functions depends on social interactions, the assessment procedures should have a dynamic, interactive nature. Hence, the ZPD provides a framework for evaluating a learner's abilities, known as Dynamic Assessment (DA) @cite_33 @cite_13 . While traditional approaches to assessment are mainly concerned with currently existing skills, DA focuses on measuring the learning potential for future development @cite_24 . A dynamic test is based on teacher assistance, where guidance, feedback, and adaptive delivery of assistance are embedded in the evaluation procedure itself @cite_33 . Rather than information about past functioning and existing skills, dynamic approaches tend to be equally interested in estimating the learner's cognitive and meta-cognitive strategies, their responsiveness to assistance, and their ability to transfer skills that were learned with assistance to subsequent unassisted situations @cite_28 .
{ "cite_N": [ "@cite_24", "@cite_28", "@cite_13", "@cite_33" ], "mid": [ "2163308793", "2117928866", "", "2288663400" ], "abstract": [ "This article reports the efforts of an elementary school teacher of Spanish as a second language to implement principles of dynamic assessment (DA) in her daily interactions with learners. DA is neither an assessment instrument nor a method of assessing but a framework for conceptualizing teaching and assessment as an integrated activity of understanding learner abilities by actively supporting their development (Poehner, 2008). DA is based on Vygotsky’s (1987) proposal of the zone of proximal development (ZPD), which underscores the developmental importance of providing appropriate support to learners to help them stretch beyond their independent performance. The particular approach to DA that the teacher followed reflected her interpretation of the ZPD as well as her knowledge of her instructional context and was arrived at through consultation with the present authors. In other words, her use of DA represents a unification of theory and practice, as advocated by Vygotsky, whereby theory offers a basis to guide practice but at the same time practice functions to refine and extend theory. Examples of the teacher’s interactions with learners in her classroom are discussed with regard to the opportunities for development they create.", "This article discusses the application of dynamic assessment with gifted students. We define dynamic assessment in terms of its inclusion of intervention and feedback during the course of assessment, as well as the focus of the model on the students' use of cognitive and metacognitive strategies and responsiveness to the interventions provided during the course of the assessment. Dynamic approaches vary on a continuum of flexible clinical applications to more rigorous psychometric approaches. 
They also differ in relation to the nature and timing of the interventions, as well as the content domains tapped. Finally, we review the history of the use of dynamic assessment with gifted students. This use demonstrates the utility of this approach for the purpose of determination of eligibility for gifted programs, which has been particularly relevant for students from culturally diverse backgrounds. Dynamic assessment has been usefully applied for classification purposes in the cases of both students with mental...", "", "What kind of instruction is optimal for a particular child? Without doubt, this question is immediately comprehensible to any committed teacher in virtually any country in the world, and most teachers are likely to want concrete answers to the question, not only as a theoretical puzzle, but in relation to their immediate practices. If one were to look to scientific psychology and educational research for advice in relation to this practical problem, what would the answer(s) look like? This simple question raises several profound problems. Normative and political issues about the goals of instruction and the resources available for realizing these goals must be resolved. A theory of learning that can explain how intellectual capabilities are developed is needed. If instruction is not viewed as an end in itself, then a theory about the relationship between specific subject matter instruction and its consequences for psychological development is also needed. This last problem was the main tension against which Vygotsky developed his well-known concept of zone of proximal development , so that the zone focused on the relation between instruction and development, while being relevant to many of these other problems. Vygotsky's concept of zone of proximal development is more precise and elaborated than its common reception or interpretation. 
The main purpose of this chapter is to provide a comprehensive introduction to and interpretation of this concept, along with comments about predominant contemporary interpretations. The chapter concludes with some perspectives and implications derived from the interpretation presented here." ] }
1904.12268
2940906447
Vygotsky's notions of Zone of Proximal Development and Dynamic Assessment emphasize the importance of personalized learning that adapts to the needs and abilities of the learners and enables more efficient learning. In this work we introduce a novel adaptive learning engine called E-gostky that builds on these concepts to personalize the learning path within an e-learning system. E-gostky uses machine learning techniques to select the next content item that will challenge the student but will not be overwhelming, keeping students in their Zone of Proximal Development. To evaluate the system, we conducted an experiment where hundreds of students from several different elementary schools used our engine to learn fractions for five months. Our results show that using E-gostky can significantly reduce the time required to reach similar mastery. Specifically, in our experiment, it took students who were using the adaptive learning engine @math less time to reach a similar level of mastery than those who didn't. Moreover, students made greater efforts to find the correct answer rather than guessing, and class teachers reported that even students with learning disabilities showed higher engagement.
The problem of sequencing educational content has attracted many researchers, especially in the educational data mining community @cite_3 . For example, Pardos and Heffernan @cite_34 inferred an order over exercises presented to students by predicting their skill levels using Bayesian Knowledge Tracing (BKT) @cite_23 . They showed the efficacy of their approach on simulated data as well as on a test set comprising random sequences of three exercises. The authors of @cite_19 proposed a BKT-based sequencing algorithm that uses knowledge tracing to model students' skill acquisition over time and sequences exercises based on their mastery level and predicted performance. The setting of this work is similar to the setting we study here; however, they use a different sequencing method that does not take the ZPD into account. @cite_10 @cite_7 introduced several sequencing algorithms: the first, EduRank, combines collaborative filtering with social choice theory to produce personalized learning sequences for students; the second, MAPLE, combines difficulty ranking with multi-armed bandits. @cite_26 used experts' knowledge to bootstrap a multi-armed bandit approach with models that rely on empirical estimation of the learning progress.
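The BKT-based sequencing idea above can be sketched with the standard BKT update equations. The parameter values, skill names, and the ZPD-style target-success selection rule below are illustrative assumptions, not the configurations of any of the cited systems.

```python
# Minimal Bayesian Knowledge Tracing (BKT) sketch. Parameter values
# (slip, guess, learn rates) are illustrative assumptions.

def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """Update the mastery estimate p_know after one observed answer."""
    if correct:
        evidence = p_know * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_know) * p_guess)
    else:
        evidence = p_know * p_slip
        posterior = evidence / (evidence + (1 - p_know) * (1 - p_guess))
    # Learning transition: the student may acquire the skill between items.
    return posterior + (1 - posterior) * p_learn

def predicted_success(p_know, p_slip=0.1, p_guess=0.2):
    """Probability the student answers the next item on this skill correctly."""
    return p_know * (1 - p_slip) + (1 - p_know) * p_guess

def pick_next(p_know_by_skill, target=0.6):
    """ZPD-style sequencing: choose the skill whose predicted success
    probability is closest to a moderate target (challenging, not overwhelming)."""
    return min(p_know_by_skill,
               key=lambda s: abs(predicted_success(p_know_by_skill[s]) - target))
```

Here `pick_next` prefers items the student is neither certain to fail nor certain to pass, which is one simple way to operationalize keeping the student inside the ZPD.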
{ "cite_N": [ "@cite_26", "@cite_7", "@cite_3", "@cite_19", "@cite_23", "@cite_34", "@cite_10" ], "mid": [ "2951121747", "", "", "2339576062", "2015040676", "2129373702", "2337351188" ], "abstract": [ "We present an approach to Intelligent Tutoring Systems which adaptively personalizes sequences of learning activities to maximize skills acquired by students, taking into account the limited time and motivational resources. At a given point in time, the system proposes to the students the activity which makes them progress faster. We introduce two algorithms that rely on the empirical estimation of the learning progress, RiARiT that uses information about the difficulty of each exercise and ZPDES that uses much less knowledge about the problem. The system is based on the combination of three approaches. First, it leverages recent models of intrinsically motivated learning by transposing them to active teaching, relying on empirical estimation of learning progress provided by specific activities to particular students. Second, it uses state-of-the-art Multi-Arm Bandit (MAB) techniques to efficiently manage the exploration exploitation challenge of this optimization process. Third, it leverages expert knowledge to constrain and bootstrap initial exploration of the MAB, while requiring only coarse guidance information of the expert and allowing the system to deal with didactic gaps in its knowledge. The system is evaluated in a scenario where 7-8 year old schoolchildren learn how to decompose numbers while manipulating money. Systematic experiments are presented with simulated students, followed by results of a user study across a population of 400 school children.", "", "", "Despite the prevalence of e-learning systems in schools, most of today's systems do not personalize educational data to the individual needs of each student. 
This paper proposes a new algorithm for sequencing questions to students that is empirically shown to lead to better performance and engagement in real schools when compared to a baseline approach. It is based on using knowledge tracing to model students' skill acquisition over time, and to select questions that advance the student's learning within the range of the student's capabilities, as determined by the model. The algorithm is based on a Bayesian Knowledge Tracing (BKT) model that incorporates partial credit scores, reasoning about multiple attempts to solve problems, and integrating item difficulty. This model is shown to outperform other BKT models that do not reason about (or reason about some but not all) of these features. The model was incorporated into a sequencing algorithm and deployed in two classes in different schools where it was compared to a baseline sequencing algorithm that was designed by pedagogical experts. In both classes, students using the BKT sequencing approach solved more difficult questions and attributed higher performance than did students who used the expert-based approach. Students were also more engaged using the BKT approach, as determined by their interaction time and number of log-ins to the system, as well as their reported opinion. We expect our approach to inform the design of better methods for sequencing and personalizing educational content to students that will meet their individual learning needs.", "This paper describes an effort to model students' changing knowledge state during skill acquisition. Students in this research are learning to write short programs with the ACT Programming Tutor (APT). APT is constructed around a production rule cognitive model of programming knowledge, called theideal student model. This model allows the tutor to solve exercises along with the student and provide assistance as necessary. 
As the student works, the tutor also maintains an estimate of the probability that the student has learned each of the rules in the ideal model, in a process calledknowledge tracing. The tutor presents an individualized sequence of exercises to the student based on these probability estimates until the student has ‘mastered’ each rule. The programming tutor, cognitive model and learning and performance assumptions are described. A series of studies is reviewed that examine the empirical validity of knowledge tracing and has led to modifications in the process. Currently the model is quite successful in predicting test performance. Further modifications in the modeling process are discussed that may improve performance levels.", "Researchers who make tutoring systems would like to know which sequences of educational content lead to the most effective learning by their students. The majority of data collected in many ITS systems consist of answers to a group of questions of a given skill often presented in a random sequence. Following work that identifies which items produce the most learning we propose a Bayesian method using similar permutation analysis techniques to determine if item learning is context sensitive and if so which orderings of questions produce the most learning. We confine our analysis to random sequences with three questions. The method identifies question ordering rules such as, question A should go before B, which are statistically reliably beneficial to learning. Real tutor data from five random sequence problem sets were analyzed. Statistically reliable orderings of questions were found in two of the five real data problem sets. A simulation consisting of 140 experiments was run to validate the method's accuracy and test its reliability. The method succeeded in finding 43 of the underlying item order effects with a 6 false positive rate using a p value threshold of <= 0.05. 
Using this method, ITS researchers can gain valuable knowledge about their problem sets and feasibly let the ITS automatically identify item order effects and optimize student learning by restricting assigned sequences to those prescribed as most beneficial to learning.", "The growing prevalence of e-learning systems and on-line courses has made educational material widely accessible to students of varying abilities, backgrounds and styles. There is thus a growing need to accomodate for individual differences in such e-learning systems. This paper presents a new algorithm for personliazing educational content to students that combines collaborative filtering algorithms with social choice theory. The algorithm constructs a “difficulty” ranking over questions for a target student by aggregating the ranking of similar students, as measured by different aspects of their performance on common past questions, such as grades, number of retries, and time spent solving questions. It infers a difficulty ranking directly over the questions for a target student, rather than ordering them according to predicted performance, which is prone to error. The algorithm was tested on two large real world data sets containing tens of thousands of students and a million records. Its performance was compared to a variety of personalization methods as well as a non-personalized method that relied on a domain expert. It was able to significantly outperform all of these approaches according to standard information retrieval metrics. Our approach can potentially be used to support teachers in tailoring problem sets and exams to individual students and students in informing them about areas they may need to strengthen." ] }
1904.11739
2940898985
The task of recognizing goals and plans from missing and full observations can be done efficiently by using automated planning techniques. In many applications, it is important to recognize goals and plans not only accurately, but also quickly. To address this challenge, we develop novel goal recognition approaches based on planning techniques that rely on planning landmarks. In automated planning, landmarks are properties (or actions) that cannot be avoided to achieve a goal. We show the applicability of a number of planning techniques with an emphasis on landmarks for goal and plan recognition tasks in two settings: (1) we use the concept of landmarks to develop goal recognition heuristics; and (2) we develop a landmark-based filtering method to refine existing planning-based goal and plan recognition approaches. These recognition approaches are empirically evaluated in experiments over several classical planning domains. We show that our goal recognition approaches yield not only accuracy comparable to (and often higher than) other state-of-the-art techniques, but also substantially faster recognition time over such techniques.
Finally, @cite_11 recently proposed an approach to probabilistic plan recognition along the lines of @cite_26 that, instead of running a full-fledged planner for each goal, takes advantage of multiple-goal heuristic search @cite_30 to search for all goals simultaneously and avoid repeatedly expanding the same nodes. Their approach has not yet been implemented and evaluated; it aims to overcome the limitation of our technique, which can only account for progress towards goals when we have evidence of landmarks being achieved, while retaining the speed gains we achieve. While we do not have empirical evidence about its accuracy and efficiency, we believe this is an exciting direction for goal recognition, and we expect it to approach and eventually surpass the accuracy of @cite_26 .
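The landmark-based scoring that the limitation above refers to can be sketched as follows: each candidate goal is scored by the fraction of its landmarks already observed as achieved. The data layout is an assumption for illustration; in practice the landmark sets would be precomputed by a planner.

```python
# Hedged sketch of a landmark-ratio goal-recognition heuristic: progress
# toward a goal is only credited when one of its landmarks is achieved.

def rank_goals(landmarks_by_goal, achieved_facts):
    """Rank candidate goals by the fraction of their landmarks achieved."""
    scores = {}
    for goal, landmarks in landmarks_by_goal.items():
        achieved = sum(1 for lm in landmarks if lm in achieved_facts)
        scores[goal] = achieved / len(landmarks) if landmarks else 0.0
    return sorted(scores, key=scores.get, reverse=True)
```

Because the score only changes when a landmark is observed, a goal toward which the agent is progressing without yet achieving any landmark gets no credit, which is exactly the limitation the heuristic-search alternative seeks to address.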
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_11" ], "mid": [ "2113207783", "1572384134", "2885968630" ], "abstract": [ "This paper presents a new framework for anytime heuristic search where the task is to achieve as many goals as possible within the allocated resources. We show the inadequacy of traditional distance-estimation heuristics for tasks of this type and present alternative heuristics that are more appropriate for multiple-goal search. In particular, we introduce the marginal-utility heuristic, which estimates the cost and the benefit of exploring a subtree below a search node. We developed two methods for online learning of the marginal-utility heuristic. One is based on local similarity of the partial marginal utility of sibling nodes, and the other generalizes marginal-utility over the state feature space. We apply our adaptive and non-adaptive multiple-goal search algorithms to several problems, including focused crawling, and show their superiority over existing methods.", "Plan recognition is the problem of inferring the goals and plans of an agent after observing its behavior. Recently, it has been shown that this problem can be solved efficiently, without the need of a plan library, using slightly modified planning algorithms. In this work, we extend this approach to the more general problem of probabilistic plan recognition where a probability distribution over the set of goals is sought under the assumptions that actions have deterministic effects and both agent and observer have complete information about the initial state. We show that this problem can be solved efficiently using classical planners provided that the probability of a partially observed execution given a goal is defined in terms of the cost difference of achieving the goal under two conditions: complying with the observations, and not complying with them. 
This cost, and hence the posterior goal probabilities, are computed by means of two calls to a classical planner that no longer has to be modified in any way. A number of examples is considered to illustrate the quality, flexibility, and scalability of the approach.", "" ] }
1904.11864
2940641446
We propose a joint model of human joint detection and association for 2D multi-person pose estimation (MPPE). The approach unifies training of joint detection and association without a need for further processing or sophisticated heuristics in order to associate the joints with people individually. The approach consists of two stages: in the first stage, joint detection heatmaps and association features are extracted; in the second stage, which takes the extracted features of the first stage as input, we introduce a recurrent neural network (RNN) that predicts the heatmaps of a single person's joints in each iteration. In addition, the network learns a stopping criterion in order to halt once it has identified all individuals in the image. This approach allowed us to eliminate several heuristic assumptions and parameters needed for association which do not necessarily hold true. Additionally, such an end-to-end approach allows the final objective to be known and directly optimized over during training. We evaluated our model on the challenging MSCOCO dataset and obtained an improvement over the baseline, particularly in challenging scenes with occlusions.
Another problem related to multi-person pose estimation is the task of instance segmentation. Romera-Paredes et al. @cite_20 propose using a recurrent neural network (RNN) for instance segmentation, but without inferring the label of an instance. Salvador et al. @cite_15 eliminate this shortcoming by introducing an additional branch that predicts the class of each instance. However, they do not localize the exact joint locations or address the distinct case of person instances, which have unique articulation features that can facilitate segmenting instances in complex scenes containing close interactions.
{ "cite_N": [ "@cite_15", "@cite_20" ], "mid": [ "2772283977", "2963659353" ], "abstract": [ "We present a recurrent model for semantic instance segmentation that sequentially generates binary masks and their associated class probabilities for every object in an image. Our proposed system is trainable end-to-end from an input image to a sequence of labeled masks and, compared to methods relying on object proposals, does not require post-processing steps on its output. We study the suitability of our recurrent model on three different instance segmentation benchmarks, namely Pascal VOC 2012, CVPPP Plant Leaf Segmentation and Cityscapes. Further, we analyze the object sorting patterns generated by our model and observe that it learns to follow a consistent pattern, which correlates with the activations learned in the encoder part of our network. Source code and models are available at this https URL", "Instance segmentation is the problem of detecting and delineating each distinct object of interest appearing in an image. Current instance segmentation approaches consist of ensembles of modules that are trained independently of each other, thus missing opportunities for joint learning. Here we propose a new instance segmentation paradigm consisting in an end-to-end method that learns how to segment instances sequentially. The model is based on a recurrent neural network that sequentially finds objects and their segmentations one at a time. This net is provided with a spatial memory that keeps track of what pixels have been explained and allows occlusion handling. In order to train the model we designed a principled loss function that accurately represents the properties of the instance segmentation problem. In the experiments carried out, we found that our method outperforms recent approaches on multiple person segmentation, and all state of the art approaches on the Plant Phenotyping dataset for leaf counting." ] }
1904.11578
2941454668
An event-based camera is a bio-inspired vision sensor that asynchronously records intensity changes (called events) at each pixel. As an instance of an event-based camera, the Dynamic and Active-pixel Vision Sensor (DAVIS) combines a standard camera and an event-based camera. However, traditional models cannot process the event stream asynchronously. To analyze the event stream asynchronously, most existing approaches accumulate events within a certain time interval and treat the accumulated events as a synchronous frame, which wastes the intensity change information and weakens the advantages of DAVIS. Therefore, in this paper, we present the first neural asynchronous approach to process the event stream of an event-based camera. Our method asynchronously extracts dynamic information from events by leveraging previous motion and critical features of gray-scale frames. To the best of our knowledge, this is the first neural asynchronous method to analyze the event stream through a novel deep neural network. Extensive experiments demonstrate that our proposed model achieves remarkable improvements over state-of-the-art baselines.
Recently, event-based cameras have achieved extraordinary improvements over standard cameras in many fields. However, the main challenge for event-based cameras is how to leverage the event sequence. Traditional models do not provide the tools to handle the event sequence precisely: they apply the simple solution of accumulating events over a certain time interval @math so that the analysis resembles processing synchronous image frames @cite_10 @cite_14 . To collect sufficient events, most approaches apply a large time interval (e.g. @math ). Inspired by @cite_2 , several researchers attempt to split the synchronous frames into two parts: positive events (brightness increases) and negative events (brightness decreases).
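The accumulation baseline described above can be sketched as follows; the `(t, x, y, polarity)` event tuple layout is an assumption for illustration.

```python
import numpy as np

# Accumulate an asynchronous event stream into one synchronous two-channel
# frame (positive / negative polarity) over a window [t0, t0 + dt), as the
# synchronous baselines do.

def events_to_frame(events, t0, dt, height, width):
    """Count events per pixel and polarity within the time window."""
    frame = np.zeros((2, height, width), dtype=np.float32)
    for t, x, y, pol in events:
        if t0 <= t < t0 + dt:
            channel = 0 if pol > 0 else 1  # 0: brightness increase, 1: decrease
            frame[channel, y, x] += 1.0
    return frame
```

Note that the frame discards the precise timestamps of the events inside the window, which is exactly the loss of intensity-change information criticized above.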
{ "cite_N": [ "@cite_14", "@cite_10", "@cite_2" ], "mid": [ "2963580221", "1539361405", "2469278928" ], "abstract": [ "Event cameras are bio-inspired vision sensors that naturally capture the dynamics of a scene, filtering out redundant information. This paper presents a deep neural network approach that unlocks the potential of event cameras on a challenging motion-estimation task: prediction of a vehicle's steering angle. To make the best out of this sensor-algorithm combination, we adapt state-of-the-art convolutional architectures to the output of event sensors and extensively evaluate the performance of our approach on a publicly available large scale event-camera dataset ( 1000 km). We present qualitative and quantitative explanations of why event cameras allow robust steering prediction even in cases where traditional cameras fail, e.g. challenging illumination conditions and fast motion. Finally, we demonstrate the advantages of leveraging transfer learning from traditional to event-based vision, and show that our approach outperforms state-of-the-art algorithms based on standard cameras.", "We propose an algorithm to estimate the “lifetime” of events from retinal cameras, such as a Dynamic Vision Sensor (DVS). Unlike standard CMOS cameras, a DVS only transmits pixel-level brightness changes (“events”) at the time they occur with micro-second resolution. Due to its low latency and sparse output, this sensor is very promising for high-speed mobile robotic applications. We develop an algorithm that augments each event with its lifetime, which is computed from the event's velocity on the image plane. The generated stream of augmented events gives a continuous representation of events in time, hence enabling the design of new algorithms that outperform those based on the accumulation of events over fixed, artificially-chosen time intervals. 
A direct application of this augmented stream is the construction of sharp gradient (edge-like) images at any time instant. We successfully demonstrate our method in different scenarios, including high-speed quadrotor flips, and compare it to standard visualization methods.", "This paper describes novel event-based spatio-temporal features called time-surfaces and how they can be used to create a hierarchical event-based pattern recognition architecture. Unlike existing hierarchical architectures for pattern recognition, the presented model relies on a time oriented approach to extract spatio-temporal features from the asynchronously acquired dynamics of a visual scene. These dynamics are acquired using biologically inspired frameless asynchronous event-driven vision sensors. Similarly to cortical structures, subsequent layers in our hierarchy extract increasingly abstract features using increasingly large spatio-temporal windows. The central concept is to use the rich temporal information provided by events to create contexts in the form of time-surfaces which represent the recent temporal activity within a local spatial neighborhood. We demonstrate that this concept can robustly be used at all stages of an event-based hierarchical model. First layer feature units operate on groups of pixels, while subsequent layer feature units operate on the output of lower level feature units. We report results on a previously published 36 class character recognition task and a four class canonical dynamic card pip task, achieving near 100 percent accuracy on each. We introduce a new seven class moving face recognition task, achieving 79 percent accuracy." ] }
1904.11578
2941454668
An event-based camera is a bio-inspired vision sensor that asynchronously records intensity changes (called events) at each pixel. As an instance of an event-based camera, the Dynamic and Active-pixel Vision Sensor (DAVIS) combines a standard camera and an event-based camera. However, traditional models cannot process the event stream asynchronously. To analyze the event stream asynchronously, most existing approaches accumulate events within a certain time interval and treat the accumulated events as a synchronous frame, which wastes the intensity change information and weakens the advantages of DAVIS. Therefore, in this paper, we present the first neural asynchronous approach to process the event stream of an event-based camera. Our method asynchronously extracts dynamic information from events by leveraging previous motion and critical features of gray-scale frames. To the best of our knowledge, this is the first neural asynchronous method to analyze the event stream through a novel deep neural network. Extensive experiments demonstrate that our proposed model achieves remarkable improvements over state-of-the-art baselines.
@cite_20 presents a 4-channel input representation for optical flow estimation: the first two channels encode the number of positive and negative events at each pixel, and the last two channels encode the timestamps of the most recent positive and negative events at that pixel.
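A sketch of that 4-channel representation, assuming events arrive as `(t, x, y, polarity)` tuples sorted by time:

```python
import numpy as np

# Channels 0-1 count positive/negative events per pixel; channels 2-3 keep
# the timestamp of the most recent positive/negative event at that pixel.

def four_channel_image(events, height, width):
    """Build the count + most-recent-timestamp event image."""
    img = np.zeros((4, height, width), dtype=np.float32)
    for t, x, y, pol in events:   # events assumed sorted by time
        c = 0 if pol > 0 else 1
        img[c, y, x] += 1.0       # event count per polarity
        img[c + 2, y, x] = t      # later events overwrite: most recent timestamp
    return img
```

Unlike a plain count frame, the timestamp channels retain some temporal information about when each pixel last fired, which is what makes this representation useful for motion estimation.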
{ "cite_N": [ "@cite_20" ], "mid": [ "2788172931" ], "abstract": [ "Event-based cameras have shown great promise in a variety of situations where frame based cameras suffer, such as high speed motions and high dynamic range scenes. However, developing algorithms for event measurements requires a new class of hand crafted algorithms. Deep learning has shown great success in providing model free solutions to many problems in the vision community, but existing networks have been developed with frame based images in mind, and there does not exist the wealth of labeled data for events as there does for images for supervised training. To these points, we present EV-FlowNet, a novel self-supervised deep learning pipeline for optical flow estimation for event based cameras. In particular, we introduce an image based representation of a given event stream, which is fed into a self-supervised neural network as the sole input. The corresponding grayscale images captured from the same camera at the same time as the events are then used as a supervisory signal to provide a loss function at training time, given the estimated flow from the network. We show that the resulting network is able to accurately predict optical flow from events only in a variety of different scenes, with performance competitive to image based networks. This method not only allows for accurate estimation of dense optical flow, but also provides a framework for the transfer of other self-supervised methods to the event-based domain." ] }
1904.11578
2941454668
An event-based camera is a bio-inspired vision sensor that asynchronously records intensity changes (called events) at each pixel. As an instance of an event-based camera, the Dynamic and Active-pixel Vision Sensor (DAVIS) combines a standard camera and an event-based camera. However, traditional models cannot process the event stream asynchronously. To analyze the event stream asynchronously, most existing approaches accumulate events within a certain time interval and treat the accumulated events as a synchronous frame, which wastes the intensity change information and weakens the advantages of DAVIS. Therefore, in this paper, we present the first neural asynchronous approach to process the event stream of an event-based camera. Our method asynchronously extracts dynamic information from events by leveraging previous motion and critical features of gray-scale frames. To the best of our knowledge, this is the first neural asynchronous method to analyze the event stream through a novel deep neural network. Extensive experiments demonstrate that our proposed model achieves remarkable improvements over state-of-the-art baselines.
The current state-of-the-art model @cite_14 splits the synchronous frames into separate histograms of positive and negative events as two channels, processes them into feature vectors, and feeds those vectors into a ResNet to predict the steering angle for self-driving.
{ "cite_N": [ "@cite_14" ], "mid": [ "2963580221" ], "abstract": [ "Event cameras are bio-inspired vision sensors that naturally capture the dynamics of a scene, filtering out redundant information. This paper presents a deep neural network approach that unlocks the potential of event cameras on a challenging motion-estimation task: prediction of a vehicle's steering angle. To make the best out of this sensor-algorithm combination, we adapt state-of-the-art convolutional architectures to the output of event sensors and extensively evaluate the performance of our approach on a publicly available large scale event-camera dataset ( 1000 km). We present qualitative and quantitative explanations of why event cameras allow robust steering prediction even in cases where traditional cameras fail, e.g. challenging illumination conditions and fast motion. Finally, we demonstrate the advantages of leveraging transfer learning from traditional to event-based vision, and show that our approach outperforms state-of-the-art algorithms based on standard cameras." ] }
1904.11578
2941454668
Event-based cameras are bio-inspired vision sensors that record intensity changes (called events) asynchronously at each pixel. As an instance of an event-based camera, the Dynamic and Active-pixel Vision Sensor (DAVIS) combines a standard camera and an event-based camera. However, traditional models cannot process the event stream asynchronously. To analyze the event stream, most existing approaches accumulate events within a certain time interval and treat the accumulated events as a synchronous frame, which wastes the intensity-change information and weakens the advantages of DAVIS. Therefore, in this paper, we present the first neural asynchronous approach to processing the event stream of an event-based camera. Our method asynchronously extracts dynamic information from events by leveraging previous motion and critical features of gray-scale frames. To the best of our knowledge, this is the first neural asynchronous method to analyze an event stream through a novel deep neural network. Extensive experiments demonstrate that our proposed model achieves remarkable improvements over state-of-the-art baselines.
Moreover, synchronous methods increase the latency of event cameras, which works against their low-latency property. Therefore, exploiting the characteristics of DAVIS, @cite_6 presents an asynchronous approach that combines the events and gray-scale images provided by the DAVIS sensor to track features from a geometric model, leveraging each occurring event. This asynchronous model beats the state-of-the-art synchronous models on the feature-tracking task. Previous work has also focused on tracking features with event-based cameras @cite_15 @cite_12 , and extensions of popular image-based keypoint detectors have recently been developed for event-based cameras @cite_17 @cite_5 .
{ "cite_N": [ "@cite_6", "@cite_5", "@cite_15", "@cite_12", "@cite_17" ], "mid": [ "2969508737", "2563042993", "", "2739136981", "2769144726" ], "abstract": [ "We present EKLT, a feature tracking method that leverages the complementarity of event cameras and standard cameras to track visual features with high temporal resolution. Event cameras are novel sensors that output pixel-level brightness changes, called “events”. They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency in the order of microseconds. However, because the same scene pattern can produce different events depending on the motion direction, establishing event correspondences across time is challenging. By contrast, standard cameras provide intensity measurements (frames) that do not depend on motion direction. Our method extracts features on frames and subsequently tracks them asynchronously using events, thereby exploiting the best of both types of data: the frames provide a photometric representation that does not depend on motion direction and the events provide updates with high temporal resolution. In contrast to previous works, which are based on heuristics, this is the first principled method that uses intensity measurements directly, based on a generative event model within a maximum-likelihood framework. As a result, our method produces feature tracks that are more accurate than the state of the art, across a wide variety of scenes.", "The detection of consistent feature points in an image is fundamental for various kinds of computer vision techniques, such as stereo matching, object recognition, target tracking and optical flow computation. This paper presents an event-based approach to the detection of corner points, which benefits from the high temporal resolution, compressed visual information and low latency provided by an asynchronous neuromorphic event-based camera. The proposed method adapts the commonly used Harris corner detector to the event-based data, in which frames are replaced by a stream of asynchronous events produced in response to local light changes at μs temporal resolution. Responding only to changes in its field of view, an event-based camera naturally enhances edges in the scene, simplifying the detection of corner features. We characterised and tested the method on both a controlled pattern and a real scenario, using the dynamic vision sensor (DVS) on the neuromorphic iCub robot. The method detects corners with a typical error distribution within 2 pixels. The error is constant for different motion velocities and directions, indicating a consistent detection across the scene and over time. We achieve a detection rate proportional to speed, higher than frame-based technique for a significant amount of motion in the scene, while also reducing the computational cost.", "", "Asynchronous event-based sensors present new challenges in basic robot vision problems like feature tracking. The few existing approaches rely on grouping events into models and computing optical flow after assigning future events to those models. Such a hard commitment in data association attenuates the optical flow quality and causes shorter flow tracks. In this paper, we introduce a novel soft data association modeled with probabilities. The association probabilities are computed in an intertwined EM scheme with the optical flow computation that maximizes the expectation (marginalization) over all associations. In addition, to enable longer tracks we compute the affine deformation with respect to the initial point and use the resulting residual as a measure of persistence. The computed optical flow enables a varying temporal integration different for every feature and sized inversely proportional to the length of the flow. We show results in egomotion and very fast vehicle sequences and we show the superiority over standard frame-based cameras.", "Event cameras offer many advantages over standard frame-based cameras, such as low latency, high temporal resolution, and a high dynamic range. They respond to pixel-level brightness changes and, therefore, provide a sparse output. However, in textured scenes with rapid motion, millions of events are generated per second. Therefore, state-of-the-art event-based algorithms either require massive parallel computation (e.g., a GPU) or depart from the event-based processing paradigm. Inspired by frame-based pre-processing techniques that reduce an image to a set of features, which are typically the input to higher-level algorithms, we propose a method to reduce an event stream to a corner event stream. Our goal is twofold: extract relevant tracking information (corners do not suffer from the aperture problem) and decrease the event rate for later processing stages. Our event-based corner detector is very efficient due to its design principle, which consists of working on the Surface of Active Events (a map with the timestamp of the latest event at each pixel) using only comparison operations. Our method asynchronously processes event by event with very low latency. Our implementation is capable of processing millions of events per second on a single core (less than a micro-second per event) and reduces the event rate by a factor of 10 to 20." ] }
1904.11578
2941454668
Event-based cameras are bio-inspired vision sensors that record intensity changes (called events) asynchronously at each pixel. As an instance of an event-based camera, the Dynamic and Active-pixel Vision Sensor (DAVIS) combines a standard camera and an event-based camera. However, traditional models cannot process the event stream asynchronously. To analyze the event stream, most existing approaches accumulate events within a certain time interval and treat the accumulated events as a synchronous frame, which wastes the intensity-change information and weakens the advantages of DAVIS. Therefore, in this paper, we present the first neural asynchronous approach to processing the event stream of an event-based camera. Our method asynchronously extracts dynamic information from events by leveraging previous motion and critical features of gray-scale frames. To the best of our knowledge, this is the first neural asynchronous method to analyze an event stream through a novel deep neural network. Extensive experiments demonstrate that our proposed model achieves remarkable improvements over state-of-the-art baselines.
Furthermore, @cite_11 develops a real-time gesture-recognition system on a novel chip named TrueNorth. The ability of event-based cameras to provide rich data for solving pattern-recognition problems was initially shown in @cite_16 @cite_8 @cite_0 .
{ "cite_N": [ "@cite_0", "@cite_8", "@cite_16", "@cite_11" ], "mid": [ "2757430550", "2020096355", "1963689209", "2745933219" ], "abstract": [ "This demonstration presents a convolutional neural network (CNN) playing “RoShamBo” (“rock-paper-scissors”) against human opponents in real time. The network is driven by dynamic and active-pixel vision sensor (DAVIS) events, acquired by accumulating events into fixed event-number frames.", "This paper introduces a spiking hierarchical model for object recognition which utilizes the precise timing information inherently present in the output of biologically inspired asynchronous address event representation (AER) vision sensors. The asynchronous nature of these systems frees computation and communication from the rigid predetermined timing enforced by system clocks in conventional systems. Freedom from rigid timing constraints opens the possibility of using true timing to our advantage in computation. We show not only how timing can be used in object recognition, but also how it can in fact simplify computation. Specifically, we rely on a simple temporal-winner-take-all rather than more computationally intensive synchronous operations typically used in biologically inspired neural networks for object recognition. This approach to visual computation represents a major paradigm shift from conventional clocked systems and can find application in other sensory modalities and computational tasks. We showcase effectiveness of the approach by achieving the highest reported accuracy to date (97.5 @math 3.5 ) for a previously published four class card pip recognition task and an accuracy of 84.9 @math 1.9 for a new more difficult 36 class character recognition task.", "Event-driven visual sensors have attracted interest from a number of different research communities. They provide visual information in quite a different way from conventional video systems consisting of sequences of still images rendered at a given \"frame rate.\" Event-driven vision sensors take inspiration from biology. Each pixel sends out an event (spike) when it senses something meaningful is happening, without any notion of a frame. A special type of event-driven sensor is the so-called dynamic vision sensor (DVS) where each pixel computes relative changes of light or \"temporal contrast.\" The sensor output consists of a continuous flow of pixel events that represent the moving objects in the scene. Pixel events become available with microsecond delays with respect to \"reality.\" These events can be processed \"as they flow\" by a cascade of event (convolution) processors. As a result, input and output event flows are practically coincident in time, and objects can be recognized as soon as the sensor provides enough meaningful events. In this paper, we present a methodology for mapping from a properly trained neural network in a conventional frame-driven representation to an event-driven representation. The method is illustrated by studying event-driven convolutional neural networks (ConvNet) trained to recognize rotating human silhouettes or high speed poker card symbols. The event-driven ConvNet is fed with recordings obtained from a real DVS camera. The event-driven ConvNet is simulated with a dedicated event-driven simulator and consists of a number of event-driven processing modules, the characteristics of which are obtained from individually manufactured hardware modules.", "We present the first gesture recognition system implemented end-to-end on event-based hardware, using a TrueNorth neurosynaptic processor to recognize hand gestures in real-time at low power from events streamed live by a Dynamic Vision Sensor (DVS). The biologically inspired DVS transmits data only when a pixel detects a change, unlike traditional frame-based cameras which sample every pixel at a fixed frame rate. This sparse, asynchronous data representation lets event-based cameras operate at much lower power than frame-based cameras. However, much of the energy efficiency is lost if, as in previous work, the event stream is interpreted by conventional synchronous processors. Here, for the first time, we process a live DVS event stream using TrueNorth, a natively event-based processor with 1 million spiking neurons. Configured here as a convolutional neural network (CNN), the TrueNorth chip identifies the onset of a gesture with a latency of 105 ms while consuming less than 200 mW. The CNN achieves 96.5 out-of-sample accuracy on a newly collected DVS dataset (DvsGesture) comprising 11 hand gesture categories from 29 subjects under 3 illumination conditions." ] }
1904.11578
2941454668
Event-based cameras are bio-inspired vision sensors that record intensity changes (called events) asynchronously at each pixel. As an instance of an event-based camera, the Dynamic and Active-pixel Vision Sensor (DAVIS) combines a standard camera and an event-based camera. However, traditional models cannot process the event stream asynchronously. To analyze the event stream, most existing approaches accumulate events within a certain time interval and treat the accumulated events as a synchronous frame, which wastes the intensity-change information and weakens the advantages of DAVIS. Therefore, in this paper, we present the first neural asynchronous approach to processing the event stream of an event-based camera. Our method asynchronously extracts dynamic information from events by leveraging previous motion and critical features of gray-scale frames. To the best of our knowledge, this is the first neural asynchronous method to analyze an event stream through a novel deep neural network. Extensive experiments demonstrate that our proposed model achieves remarkable improvements over state-of-the-art baselines.
Regarding our novelty, this paper presents the first deep-learning-driven framework to analyze the event stream asynchronously. Moreover, in contrast to redundant synchronous event training frames, we leverage each event to extract dynamic information that synchronous approaches ignore. Furthermore, we apply the channel-wise and spatial-wise attention mechanism @cite_4 to our model and verify the effectiveness of attention methods.
{ "cite_N": [ "@cite_4" ], "mid": [ "2550553598" ], "abstract": [ "Visual attention has been successfully applied in structural prediction tasks such as visual captioning and question answering. Existing visual attention models are generally spatial, i.e., the attention is modeled as spatial probabilities that re-weight the last conv-layer feature map of a CNN encoding an input image. However, we argue that such spatial attention does not necessarily conform to the attention mechanism &#x2014; a dynamic feature extractor that combines contextual fixations over time, as CNN features are naturally spatial, channel-wise and multi-layer. In this paper, we introduce a novel convolutional neural network dubbed SCA-CNN that incorporates Spatial and Channel-wise Attentions in a CNN. In the task of image captioning, SCA-CNN dynamically modulates the sentence generation context in multi-layer feature maps, encoding where (i.e., attentive spatial locations at multiple layers) and what (i.e., attentive channels) the visual attention is. We evaluate the proposed SCA-CNN architecture on three benchmark image captioning datasets: Flickr8K, Flickr30K, and MSCOCO. It is consistently observed that SCA-CNN significantly outperforms state-of-the-art visual attention-based image captioning methods." ] }
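As a rough illustration of the channel-wise then spatial-wise re-weighting idea of @cite_4, the sketch below rescales a feature map first per channel and then per spatial location. The linear scoring maps `w_c` and `w_s` are simplified placeholders, not the exact SCA-CNN formulation:

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_spatial_attention(feat, w_c, w_s):
    """Re-weight a feature map of shape (C, H, W) with channel then spatial attention."""
    C, H, W = feat.shape
    # Channel attention: score each channel from its global average,
    # then rescale every channel by its softmax weight.
    channel_scores = feat.mean(axis=(1, 2)) @ w_c              # (C,)
    channel_weights = softmax(channel_scores, axis=0)          # (C,)
    feat = feat * channel_weights[:, None, None]
    # Spatial attention: score each location from its channel vector,
    # then rescale every location by its softmax weight over H*W cells.
    spatial_scores = np.tensordot(w_s, feat, axes=([0], [0]))  # (H, W)
    spatial_weights = softmax(spatial_scores.reshape(-1), axis=0).reshape(H, W)
    return feat * spatial_weights[None, :, :]

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))
out = channel_spatial_attention(feat,
                                w_c=rng.standard_normal((8, 8)),
                                w_s=rng.standard_normal(8))
print(out.shape)  # (8, 4, 4)
```

In SCA-CNN itself the attention is applied inside multi-layer CNN feature maps and conditioned on the decoding context; the sketch keeps only the two-stage re-weighting structure.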
1904.11681
2949606546
We investigate online convex optimization in changing environments, and choose the adaptive regret as the performance measure. The goal is to achieve a small regret over every interval so that the comparator is allowed to change over time. Different from previous works that only utilize the convexity condition, this paper further exploits smoothness to improve the adaptive regret. To this end, we develop novel adaptive algorithms for convex and smooth functions, and establish problem-dependent regret bounds over any interval. Our regret bounds are comparable to existing results in the worst case, and become much tighter when the comparator has a small loss.
For exponentially concave (abbr. exp-concave) functions, online Newton step @cite_1 is used as the expert-algorithm. For the construction of intervals, they consider two different approaches. In the first approach, the set of intervals is @math , which means an expert is initialized at each round @math and lives forever. In the second approach, the set of intervals is @math , meaning that the expert that becomes active in round @math is removed after @math . Here, @math denotes the ending time of the interval started from @math , and its value is set according to a data-streaming algorithm. A meta-algorithm based on Fixed-Share @cite_2 has also been developed, allowing the set of experts to change dynamically.
{ "cite_N": [ "@cite_1", "@cite_2" ], "mid": [ "2129160848", "1970041563" ], "abstract": [ "In an online convex optimization problem a decision-maker makes a sequence of decisions, i.e., chooses a sequence of points in Euclidean space, from a fixed feasible set. After each point is chosen, it encounters a sequence of (possibly unrelated) convex cost functions. Zinkevich (ICML 2003) introduced this framework, which models many natural repeated decision-making problems and generalizes many existing problems such as Prediction from Expert Advice and Cover's Universal Portfolios. Zinkevich showed that a simple online gradient descent algorithm achieves additive regret @math , for an arbitrary sequence of T convex cost functions (of bounded gradients), with respect to the best single decision in hindsight. In this paper, we give algorithms that achieve regret O(log(T)) for an arbitrary sequence of strictly convex functions (with bounded first and second derivatives). This mirrors what has been done for the special cases of prediction from expert advice by Kivinen and Warmuth (EuroCOLT 1999), and Universal Portfolios by Cover (Math. Finance 1:1---19, 1991). We propose several algorithms achieving logarithmic regret, which besides being more general are also much more efficient to implement. The main new ideas give rise to an efficient algorithm based on the Newton method for optimization, a new tool in the field. Our analysis shows a surprising connection between the natural follow-the-leader approach and the Newton method. We also analyze other algorithms, which tie together several different previous approaches including follow-the-leader, exponential weighting, Cover's algorithm and gradient descent.", "We generalize the recent relative loss bounds for on-line algorithms where the additional loss of the algorithm on the whole sequence of examples over the loss of the best expert is bounded. The generalization allows the sequence to be partitioned into segments, and the goal is to bound the additional loss of the algorithm over the sum of the losses of the best experts for each segment. This is to model situations in which the examples change and different experts are best for certain segments of the sequence of examples. In the single segment case, the additional loss is proportional to log n, where n is the number of experts and the constant of proportionality depends on the loss function. Our algorithms do not produce the best partition; however the loss bound shows that our predictions are close to those of the best partition. When the number of segments is k+1 and the sequence is of length e, we can bound the additional loss of our algorithm over the best partition by O(k n+k (e k)). For the case when the loss per trial is bounded by one, we obtain an algorithm whose additional loss over the loss of the best partition is independent of the length of the sequence. The additional loss becomes O(k n+ k (L k)), where L is the loss of the best partition with k+1 segments. Our algorithms for tracking the predictions of the best expert are simple adaptations of Vovk's original algorithm for the single best expert case. As in the original algorithms, we keep one weight per expert, and spend O(1) time per weight in each trial." ] }
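A minimal sketch of the Fixed-Share update @cite_2 that the meta-algorithm above builds on. This uses the simple uniform-sharing variant (mixing a fraction of the total mass back uniformly); the parameter values are illustrative:

```python
import numpy as np

def fixed_share_update(weights, losses, eta=0.5, alpha=0.05):
    """One round of Fixed-Share over n experts.

    1) Exponential-weights step: discount each expert by its loss.
    2) Sharing step: mix a fraction alpha of the mass back uniformly,
       so experts that performed badly in the past can recover quickly
       when the environment changes.
    """
    v = weights * np.exp(-eta * losses)
    v /= v.sum()
    n = len(v)
    return (1 - alpha) * v + alpha / n

w = np.full(3, 1 / 3)
w = fixed_share_update(w, losses=np.array([0.0, 1.0, 1.0]))
print(w.round(3))  # expert 0 gains weight; all experts keep mass >= alpha/n
```

The floor of `alpha / n` on every weight is exactly what lets the tracked best expert change between segments without paying a large recovery cost.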
1904.11753
2942347321
As one type of machine-learning model, a "decision-tree ensemble model" (DTEM) is represented by a set of decision trees. A DTEM is mainly known to be effective for structured data; however, like other machine-learning models, it is difficult to train so that it returns the correct output value for every input value. Accordingly, when a DTEM is used in a system that requires reliability, it is important to comprehensively detect, during development, input values that lead to malfunctions of the system (failures) and take appropriate measures. One conceivable solution is to install an input filter that controls the input to the DTEM and to use separate software to process input values that may lead to failures. To develop the input filter, it is necessary to specify the filtering condition on the input values that lead to the malfunction of the system. To meet that need, in this paper, we propose a method for formally verifying a DTEM and, according to the result of the verification, if an input value leading to a failure is found, extracting the range in which such input values exist. The proposed method can comprehensively extract the range in which input values leading to failures exist; therefore, by creating an input filter based on that range, it is possible to prevent the failure from occurring in the system. In this paper, the algorithm of the proposed method is described, and the results of a case study using a dataset of house prices are presented. On the basis of those results, the feasibility of the proposed method is demonstrated, and its scalability is evaluated.
As described in Section , a method was proposed to develop a policing function for autonomous systems @cite_13 . The policing function checks the output values of an intelligent function, such as a machine-learning model, at runtime. Like the input filter, the policing function is useful for preventing failures. However, the policing function works only after the intelligent function executes, which means it detects and controls a possible failure later than the input filter does. Therefore, the proposed method for creating the input filter is more useful in the development of systems that require quick handling of failures.
{ "cite_N": [ "@cite_13" ], "mid": [ "2735893299" ], "abstract": [ "We present an approach for ensuring safety properties of autonomous systems. Our contribution is a system architecture where a policing function validating system safety properties at runtime is separated from the system's intelligent planning function. The policing function is developed formally by a correct-by-construction method. The separation of concerns enables the possibility of replacing and adapting the intelligent planning function without changing the validation approach. We validate our approach on the example of a multi-UAV system managing route generation. Our prototype runtime validator has been integrated and evaluated with an industrial UAV synthetic environment." ] }
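The input filter discussed above can be sketched as a predicate over extracted failure regions. For illustration this assumes the verification step yields axis-aligned boxes of failing inputs; the names and the box representation are hypothetical, not the paper's actual output format:

```python
def make_input_filter(failure_ranges):
    """Build a predicate that rejects inputs inside any extracted failure range.

    `failure_ranges` is a list of per-feature (low, high) boxes, the kind
    of region the formal-verification step is meant to extract.
    """
    def is_safe(x):
        for box in failure_ranges:
            if all(lo <= xi <= hi for xi, (lo, hi) in zip(x, box)):
                return False  # falls inside a known failure region
        return True
    return is_safe

# One extracted box: feature 0 in [0, 1] AND feature 1 in [5, 6] fails.
is_safe = make_input_filter([[(0.0, 1.0), (5.0, 6.0)]])
print(is_safe([0.5, 5.5]))  # False: inside the failure box
print(is_safe([2.0, 5.5]))  # True: outside on feature 0
```

Inputs rejected by the filter would be routed to the separate fallback software mentioned in the abstract, before the DTEM ever runs, which is the timing advantage over a post-hoc policing function.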
1904.11610
2942152398
We examine a large dialog corpus obtained from the conversation history of a single individual with 104 conversation partners. The corpus consists of half a million instant messages, across several messaging platforms. We focus our analyses on seven speaker attributes, each of which partitions the set of speakers, namely: gender; relative age; family member; romantic partner; classmate; co-worker; and native to the same country. In addition to the content of the messages, we examine conversational aspects such as the time messages are sent, messaging frequency, psycholinguistic word categories, linguistic mirroring, and graph-based features reflecting how people in the corpus mention each other. We present two sets of experiments predicting each attribute using (1) short context windows; and (2) a larger set of messages. We find that using all features leads to gains of 9-14 over using message text only.
On authorship attribution, several studies have focused on inferring authors' characteristics from their writing, including their gender, age, educational and cultural background, and native language @cite_3 @cite_1 . This work has considered linguistic features that capture lexical, syntactic, structural, and stylistic differences between individuals @cite_1 . A recent study in this area analyzed language use in social media to identify aspects such as gender, age, and personality by examining group differences in the words, phrases, and topics used by Facebook users @cite_14 .
{ "cite_N": [ "@cite_14", "@cite_1", "@cite_3" ], "mid": [ "2119595472", "1987380777", "2025882237" ], "abstract": [ "We analyzed 700 million words, phrases, and topic instances collected from the Facebook messages of 75,000 volunteers, who also took standard personality tests, and found striking variations in language with personality, gender, and age. In our open-vocabulary technique, the data itself drives a comprehensive exploration of language that distinguishes people, finding connections that are not captured with traditional closed-vocabulary word-category analyses. Our analyses shed new light on psychosocial processes yielding results that are face valid (e.g., subjects living in high elevations talk about the mountains), tie in with other research (e.g., neurotic people disproportionately use the phrase ‘sick of’ and the word ‘depressed’), suggest new hypotheses (e.g., an active life implies emotional stability), and give detailed insights (males use the possessive ‘my’ when mentioning their ‘wife’ or ‘girlfriend’ more often than females use ‘my’ with ‘husband’ or ‘boyfriend’). To date, this represents the largest study, by an order of magnitude, of language and personality.", "Statistical authorship attribution has a long history, culminating in the use of modern machine learning classification methods. Nevertheless, most of this work suffers from the limitation of assuming a small closed set of candidate authors and essentially unlimited training text for each. Real-life authorship attribution problems, however, typically fall short of this ideal. Thus, following detailed discussion of previous work, three scenarios are considered here for which solutions to the basic attribution problem are inadequate. In the first variant, the profiling problem, there is no candidate set at all; in this case, the challenge is to provide as much demographic or psychological information as possible about the author. In the second variant, the needle-in-a-haystack problem, there are many thousands of candidates for each of whom we might have a very limited writing sample. In the third variant, the verification problem, there is no closed candidate set but there is one suspect; in this case, the challenge is to determine if the suspect is or is not the author. For each variant, it is shown how machine learning methods can be adapted to handle the special challenges of that variant. © 2009 Wiley Periodicals, Inc.", "We present a method for authorship discrimination that is based on the frequency of bigrams of syntactic labels that arise from partial parsing of the text. We show that this method, alone or combined with other classification features, achieves a high accuracy on discrimination of the work of Anne and Charlotte Brontë, which is very difficult to do by traditional methods. Moreover, high accuracies are achieved even on fragments of text little more than 200 words long." ] }
1904.11610
2942152398
We examine a large dialog corpus obtained from the conversation history of a single individual with 104 conversation partners. The corpus consists of half a million instant messages, across several messaging platforms. We focus our analyses on seven speaker attributes, each of which partitions the set of speakers, namely: gender; relative age; family member; romantic partner; classmate; co-worker; and native to the same country. In addition to the content of the messages, we examine conversational aspects such as the time messages are sent, messaging frequency, psycholinguistic word categories, linguistic mirroring, and graph-based features reflecting how people in the corpus mention each other. We present two sets of experiments predicting each attribute using (1) short context windows; and (2) a larger set of messages. We find that using all features leads to gains of 9-14 over using message text only.
Discourse analysis approaches have been used to examine language to reveal patterns of social behavior. Holmer @cite_0 applied discourse structure analysis to chat communication to identify and visualize message content and interaction structures, focusing on aspects such as conversation complexity, overlapping turns, distance between messages, turn changes, patterns in message production, and references. He also proposed graph-based methods for showing coherence and thread patterns during messaging interactions. Tuulos @cite_11 inferred social structures in chat-room conversations using heuristics based on participants' references, message response times, and dialog sequences, and represented the social structure with graph-based methods. Similarly, Jing @cite_4 extracted networks of biographical facts from speech transcripts that characterize the relationships between people and organizations.
{ "cite_N": [ "@cite_0", "@cite_4", "@cite_11" ], "mid": [ "2124158915", "2137887958", "1971817345" ], "abstract": [ "This article presents a research method called Discourse Structure Analysis (DSA) and a software application called ChatLine that supports the analysis of chat transcripts according to DSA. The DSA method is based on manual referencing and automatic analysis of chat transcripts in order to create visualizations and measures of their message and interaction structures. The goal of DSA is to provide a comprehensive and extensible method for the data-driven analysis of chat logs that can support both qualitative and quantitative investigations of computer-mediated communication.", "We present a general framework for automatically extracting social networks and biographical facts from conversational speech. Our approach relies on fusing the output produced by multiple information extraction modules, including entity recognition and detection, relation detection, and event detection modules. We describe the specific features and algorithmic refinements effective for conversational speech. These cumulatively increase the performance of social network extraction from 0.06 to 0.30 for the development set, and from 0.06 to 0.28 for the test set, as measured by f-measure on the ties within a network. The same framework can be applied to other genres of text — we have built an automatic biography generation system for general domain text using the same approach.", "Informal chat-room conversations have intrinsically different properties from regular static document collections. Noise, concise expressions and the dynamic, changing and interleaving nature of discussions make chat data ill-suited for analysis with an off-the-shelf text mining method. On the other hand, interactive human communication has some implicit features which may be used to enhance the results. In our research we infer social network structures from the chat data by using a few basic heuristics. We then present some preliminary results showing that the inferred social graph may be used to enhance topic identification of a chat room when combined with state-of-the-art topic and classification models. For validation purposes we then compare the performance effects of using this social information in a topic classification task." ] }
1904.11610
2942152398
We examine a large dialog corpus obtained from the conversation history of a single individual with 104 conversation partners. The corpus consists of half a million instant messages, across several messaging platforms. We focus our analyses on seven speaker attributes, each of which partitions the set of speakers, namely: gender; relative age; family member; romantic partner; classmate; co-worker; and native to the same country. In addition to the content of the messages, we examine conversational aspects such as the time messages are sent, messaging frequency, psycholinguistic word categories, linguistic mirroring, and graph-based features reflecting how people in the corpus mention each other. We present two sets of experiments predicting each attribute using (1) short context windows; and (2) a larger set of messages. We find that using all features leads to gains of 9-14% over using message text only.
Work on classifying user attributes has used both message content and other meta-features. Rao @cite_12 looked at classifying gender, age (older or younger than 30), political leaning, and region of origin (north or south India) as binary variables using a few hundred to a few thousand tweets from each user. They used follower and followee counts as network features and the frequency of tweets, replies, and retweets as communication-based features, but found no differences between classes. Hutto @cite_2 analyzed how social behavior, message content, and network structure in tweeting behavior relate to follower growth. Other work has derived useful information from Twitter profiles: Bergsma @cite_6 focused on gender classification using features derived from usernames, and Argamon @cite_10 found differences in part of speech and style when examining gender in the British National Corpus.
{ "cite_N": [ "@cite_10", "@cite_6", "@cite_12", "@cite_2" ], "mid": [ "2063205814", "", "2017729405", "2132969050" ], "abstract": [ "This article explores differences between male and female writing in a large subset of the British National Corpus covering a range of genres. Several classes of simple lexical and syntactic features that differ substantially according to author gender are identified, both in fiction and in nonfiction documents. In particular, we find significant differences between male- and female-authored documents in the use of pronouns and certain types of noun modifiers: although the total number of nominals used by male and female authors is virtually identical, females use many more pronouns and males use many more noun specifiers. More generally, it is found that even in formal writing, female writing exhibits greater usage of features identified by previous researchers as 'involved' while male writing exhibits greater usage of features which have been identified as 'informational'. Finally, a strong correlation between the characteristics of male (female) writing and those of nonfiction (fiction) is demonstrated.", "", "Social media outlets such as Twitter have become an important forum for peer interaction. Thus the ability to classify latent user attributes, including gender, age, regional origin, and political orientation solely from Twitter user language or similar highly informal content has important applications in advertising, personalization, and recommendation. This paper includes a novel investigation of stacked-SVM-based classification algorithms over a rich set of original features, applied to classifying these four user attributes. It also includes extensive analysis of features and approaches that are effective and not effective in classifying user attributes in Twitter-style informal written genres as distinct from the other primarily spoken genres previously studied in the user-property classification literature. 
Our models, singly and in ensemble, significantly outperform baseline models in all cases. A detailed analysis of model components and features provides an often entertaining insight into distinctive language-usage variation across gender, age, regional origin and political orientation in modern informal communication.", "Follower count is important to Twitter users: it can indicate popularity and prestige. Yet, holistically, little is understood about what factors -- like social behavior, message content, and network structure - lead to more followers. Such information could help technologists design and build tools that help users grow their audiences. In this paper, we study 507 Twitter users and a half-million of their tweets over 15 months. Marrying a longitudinal approach with a negative binomial auto-regression model, we find that variables for message content, social behavior, and network structure should be given equal consideration when predicting link formations on Twitter. To our knowledge, this is the first longitudinal study of follow predictors, and the first to show that the relative contributions of social behavior and mes-sage content are just as impactful as factors related to social network structure for predicting growth of online social networks. We conclude with practical and theoretical implications for designing social media technologies." ] }
1904.11533
2941729634
The increasing reliance upon cloud services entails more flexible networks that are realized by virtualized network equipment and functions. When such advanced network systems face a massive failure by natural disasters or attacks, the recovery of the entire systems may be conducted in a progressive way due to limited repair resources. The prioritization of network equipment in the recovery phase influences the interim computation and communication capability of systems, since the systems are operated under partial functionality. Hence, finding the best recovery order is a critical problem, which is further complicated by virtualization due to dependency among network nodes and layers. This paper deals with a progressive recovery problem under limited resources in networks with VNFs, where some dependent network layers exist. We prove the NP-hardness of the progressive recovery problem and approach the optimum solution by introducing DeepPR, a progressive recovery technique based on deep reinforcement learning. Our simulation results indicate that DeepPR can obtain 98.4% of the theoretical optimum in certain networks. The results suggest the applicability of Deep RL to more general progressive recovery problems and similar intractable resource allocation problems.
Pioneering work @cite_11 on the progressive recovery problem focuses on determining the recovery order of communication links that maximizes the amount of flows on the recovered network with limited resources. As an extension, the work @cite_2 proposes node evaluation indices to decide the recovery order to maximize the number of virtual networks accommodated. Considering the necessity of monitoring to observe failure situations, the joint problem of progressive recovery and monitor placement is discussed in @cite_16 .
{ "cite_N": [ "@cite_16", "@cite_2", "@cite_11" ], "mid": [ "2763882471", "2305806375", "2167087622" ], "abstract": [ "After a massive scale failure, the assessment of damages to communication networks requires local interventions and remote monitoring. While previous works on network recovery require complete knowledge of damage extent, we address the problem of damage assessment and critical service restoration in a joint manner. We propose a polynomial algorithm called Centrality based Damage Assessment and Recovery (CeDAR) which performs a joint activity of failure monitoring and restoration of network components. CeDAR works under limited availability of recovery resources and optimizes service recovery over time. We modified two existing approaches to the problem of network recovery to make them also able to exploit incremental knowledge of the failure extent. Through simulations we show that CeDAR outperforms the previous approaches in terms of recovery resource utilization and accumulative flow over time of the critical services.", "Network virtualization allows users to build customized interconnected storage computing configurations for their business needs. Today this capability is being widely used to improve the scalability and reliability of cloud-based services, including virtual infrastructures services. However, as more and more business-critical applications migrate to the cloud, disaster recovery is now a major concern. Although some studies have looked at network virtualization design under such scenarios, most have only studied pre-fault protection provisioning. Indeed, there is a pressing need to address post-fault recovery, since physical infrastructure repairs will likely occur in a staged progressive manner due to constraints on available resources. 
Hence this paper studies progressive recovery design for network virtualization and evaluates several heuristic strategies.", "A major disruption may affect many network components and significantly lower the capacity of a network measured in terms of the maximum total flow among a set of source-destination pairs. Since only a subset of the failed components may be repaired at a time due to e.g., limited availability of repair resources, the network capacity can only be progressively increased over time by following a recovery process that involves multiple recovery stages. Different recovery processes will restore the failed components in different orders, and accordingly, result in different amount of network capacity increase after each stage. This paper aims to investigate how to optimally recover the network capacity progressively, or in other words, to determine the optimal recovery process, subject to limited available repair resources. We formulate the optimization problem, analyze its computational complexity, devise solution schemes, and conduct numerical experiments to evaluate the algorithms. The concept of progressive network recovery proposed in this paper represents a paradigm-shift in the field of resilient and survivable networking to handle large-scale failures, and will motivate a rich body of research in network design and other applications." ] }
1904.11533
2941729634
The increasing reliance upon cloud services entails more flexible networks that are realized by virtualized network equipment and functions. When such advanced network systems face a massive failure by natural disasters or attacks, the recovery of the entire systems may be conducted in a progressive way due to limited repair resources. The prioritization of network equipment in the recovery phase influences the interim computation and communication capability of systems, since the systems are operated under partial functionality. Hence, finding the best recovery order is a critical problem, which is further complicated by virtualization due to dependency among network nodes and layers. This paper deals with a progressive recovery problem under limited resources in networks with VNFs, where some dependent network layers exist. We prove the NP-hardness of the progressive recovery problem and approach the optimum solution by introducing DeepPR, a progressive recovery technique based on deep reinforcement learning. Our simulation results indicate that DeepPR can obtain 98.4% of the theoretical optimum in certain networks. The results suggest the applicability of Deep RL to more general progressive recovery problems and similar intractable resource allocation problems.
Progressive recovery of datacenters after a large-scale disaster is studied in @cite_23, where the recovery order of datacenters is determined to maximize cumulative content reachability during the network recovery phase. The work @cite_24 characterizes the vulnerability of interdependent networks by learning a cascade model from historical data. An analytical algorithm for the optimal allocation of sequentially arriving resources under concave utility functions is presented in @cite_9.
{ "cite_N": [ "@cite_24", "@cite_9", "@cite_23" ], "mid": [ "2807732428", "2808235384", "2396675639" ], "abstract": [ "The vulnerability of interdependent networks has recently drawn much attention, especially in the key infrastructure networks such as power and communication networks. However, the existing works mainly considered a single cascade model across the networks and there is a need for more accurate models and analysis. In this paper, we focus on the interdependent power communication networks to accurately analyze their vulnerability by considering heterogeneous cascade models. Accurately analyzing interdependent networks is challenging as the cascades are heterogeneous yet interdependent. Also, including multiple timescales into the context can further increase the complexity. To better depict the vulnerability of interdependent networks, we first propose a method to learn a threshold model from historical data to characterize the cascades in the power network and alleviate the need of calculating complicated power network dynamics. Next, we introduce message passing equations to generalize the threshold model in the power network and the percolation model in the communication network, based on which we derive efficient solution for finding the most critical nodes in the interdependent networks. Removing the most critical nodes can cause the largest cascade and thus characterizes the vulnerability. We evaluate the performance of the proposed methods in various datasets and discuss how network parameters, such as the timescales, can impact the vulnerability.", "This paper treats the problem of optimal resource allocation over time in a finite-horizon setting, in which the resource become available only sequentially and in incremental values, and the utility function is concave and can freely vary over time. Such resource allocation problems have direct applications in data communication networks (e.g., energy harvesting systems). 
This problem is studied extensively for special choices of the concave utility function (time invariant and logarithmic) in which case the optimal resource allocation policies are well-understood. This paper treats this problem in its general form and analytically characterizes the structure of the optimal resource allocation policy and devises an algorithm for computing the exact solutions analytically. An observation instrumental to devising the provided algorithm is that there exist time instances at which the available resources are exhausted, with no carryover to future. This algorithm identifies all such instances, which in turn, facilitates breaking the original problem into multiple problems with significantly reduced dimensions. Furthermore, some widely used special cases in which the algorithm takes simpler structures are characterized, and the application to the energy harvesting systems is discussed. Numerical evaluations are provided to assess the key properties of the optimal resource allocation structure and to compare the performance with the generic convex optimization algorithms.", "Today's cloud system are composed of geographically distributed datacenter interconnected by high-speed optical networks. Disaster failures can severely affect both the communication network as well as datacenters infrastructure and prevent users from accessing cloud services. After large-scale disasters, recovery efforts on both network and datacenters may take days, and, in some cases, weeks or months. Traditionally, the repair of the communication network has been treated as a separate problem from the repair of datacenters. While past research has mostly focused on network recovery, how to efficiently recover a cloud system jointly considering the limited computing and networking resources has been an important and open research problem. 
In this work, we investigate the problem of progressive datacenter recovery after a large-scale disaster failure, given that a network-recovery plan is made. An efficient recovery plan is explored to determine which datacenters should be recovered at each recovery stage to maximize cumulative content reachability from any source considering limited available network resources. We devise an Integer Linear Program (ILP) formulation to model the associated optimization problem. Our numerical examples using the ILP show that an efficient progressive datacenter-recovery plan can significantly help to increase reachability of contents during the network recovery phase. We succeeded in increasing the number of important contents in the early stages of recovery compared to a random-recovery strategy with a slight increase in resource consumption." ] }
1904.11533
2941729634
The increasing reliance upon cloud services entails more flexible networks that are realized by virtualized network equipment and functions. When such advanced network systems face a massive failure by natural disasters or attacks, the recovery of the entire systems may be conducted in a progressive way due to limited repair resources. The prioritization of network equipment in the recovery phase influences the interim computation and communication capability of systems, since the systems are operated under partial functionality. Hence, finding the best recovery order is a critical problem, which is further complicated by virtualization due to dependency among network nodes and layers. This paper deals with a progressive recovery problem under limited resources in networks with VNFs, where some dependent network layers exist. We prove the NP-hardness of the progressive recovery problem and approach the optimum solution by introducing DeepPR, a progressive recovery technique based on deep reinforcement learning. Our simulation results indicate that DeepPR can obtain 98.4% of the theoretical optimum in certain networks. The results suggest the applicability of Deep RL to more general progressive recovery problems and similar intractable resource allocation problems.
The fragility induced by dependency between network layers has been pointed out in the context of interdependent network research @cite_14 @cite_4 @cite_25 . In particular, the interdependency between virtualized nodes and physical nodes in optical networks is considered in @cite_14 . A similar dependency caused by VNF orchestration is discussed in @cite_4 .
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_25" ], "mid": [ "2211508709", "2464214790", "1966811395" ], "abstract": [ "Software-defined networking (SDN) has been proposed as a next-generation control and management framework, facilitating network programmability to address emerging dynamic application requirements. The separation of control and data planes in SDN demands the synergistic operation of the two entities for globally optimized performance. We identify the collaboration of the control plane and the data plane in software-defined optical transmission systems as a cyber-physical interdependency where the \"physical\" fiber network provides the “cyber” control network with means to distribute control and signaling messages and in turn is itself operated by these \"cyber\" control messages. We focus on the cyber-physical interdependency in SDN optical transmission from a network robustness perspective and characterize cascading failure behaviors. Our analysis suggests that topological properties pose a significant impact on failure extensibility. We further evaluate the effectiveness of optical layer reconfigurability in improving the resilience of SDN controlled optical transmission systems.", "The concept of NFV appears to be a promising direction to save cellular network service providers from endlessly increasing capital investment, given the fast evolving mobile broadband communication techniques and unprecedented consumer demand for quality of service and quality of experience in mobile access. Meanwhile, given the deployment of NFV, the virtualized network functions and the physical hardware resources are still vulnerable to natural disasters and malicious attacks. 
We present in this article the first framework for reliability evaluation of NFV deployment and specific algorithms to efficiently determine the key set of physical or logical nodes there.", "Modern systems are increasingly dependent upon and interacting with each other, and become interdependent networks. These interdependent networks may exhibit some interesting and even surprising behaviors due to the interdependency and the interplay between the constituent systems. In this article we focus on two important phenomena, namely cascading failure in cyber-physical systems (CPS) and information cascade in coupled social networks. Specifically, cascading failures may occur in CPS that exhibit functional interdependency between two constituent systems (e.g. smart grid); information cascade may happen in multiple social networks that are coupled together by so-called multi-membership individuals. This article explores these two types of cascading effects in interdependent networks by reviewing existing studies in the literature. We review different models in the literature to study the two types of cascading effects in interdependent networks, and highlight the key findings from these studies." ] }
1904.11533
2941729634
The increasing reliance upon cloud services entails more flexible networks that are realized by virtualized network equipment and functions. When such advanced network systems face a massive failure by natural disasters or attacks, the recovery of the entire systems may be conducted in a progressive way due to limited repair resources. The prioritization of network equipment in the recovery phase influences the interim computation and communication capability of systems, since the systems are operated under partial functionality. Hence, finding the best recovery order is a critical problem, which is further complicated by virtualization due to dependency among network nodes and layers. This paper deals with a progressive recovery problem under limited resources in networks with VNFs, where some dependent network layers exist. We prove the NP-hardness of the progressive recovery problem and approach the optimum solution by introducing DeepPR, a progressive recovery technique based on deep reinforcement learning. Our simulation results indicate that DeepPR can obtain 98.4% of the theoretical optimum in certain networks. The results suggest the applicability of Deep RL to more general progressive recovery problems and similar intractable resource allocation problems.
Progressive recovery problems in interdependent networks have been discussed in @cite_20 @cite_15 @cite_17 @cite_19 . Classifying the progressive recovery problems by the types of interdependency, the work @cite_20 proposes the optimum algorithm for a special case and heuristics for other cases. ILP and DP-based algorithms are employed to solve a variant of the progressive recovery problem in @cite_15 .
{ "cite_N": [ "@cite_19", "@cite_15", "@cite_20", "@cite_17" ], "mid": [ "2118714517", "2585594748", "2505737590", "" ], "abstract": [ "Modern society depends on the operations of civil infrastructure systems, such as transportation, energy, telecommunications, and water. These systems have become so interconnected, one relying on another, that disruption of one may lead to disruptions in all. The approach taken in this research is to model these systems by explicitly identifying these interconnections or interdependencies. Definitions of five types of infrastructure inter-dependencies are presented and incorporated into a network flows mathematical representation, i.e., an interdependent layer network model. Using the lower Manhattan region of New York, for illustrative purposes, the implementation of the model is shown. First, the data requirements are presented with realistic data on the interdependent infrastructure systems of power, telecommunications, and subways. Next, a scenario is given that causes major disruption in the services provided by these infrastructures and demonstrates the use of the model in guiding restoration of services. The paper concludes with a discussion of accomplishments and opportunities for future work.", "This paper studies how to determine an optimal order of recovering interdependent Cyber Physical Systems (CPS) after a large scale failure. In such a CPS, some failed devices must be repaired first before others can. In addition, such failed devices require a certain amount of repair resources and may take multiple stages to repair. We consider two scenarios: 1) reserved model where all the required repair resources should be prepared at the beginning of repairing a device; and 2) opportunistic model where we can partially repair a device with only part of the required resources. For each scenario, we model it using an Integer Linear Programming (ILP) and use a relaxation and rounding method to design an ILP based algorithm. 
In addition, we also design a Dynamic Programming (DP) based algorithm. Simulation results show that ILP based algorithm outperforms DP based algorithm by 10%-20% in systems with less than 200 failed devices, but DP based algorithm can support extreme large size systems with more than 5000 failed devices.", "" ] }
1904.11435
2940760052
A cryptocurrency is a decentralized digital currency that is designed for secure and private asset transfer and storage. As a currency, it should be difficult to counterfeit and double-spend. In this paper, we review and analyze the major security and privacy issues of Bitcoin. In particular, we focus on its underlying foundation, blockchain technology. First, we present a comprehensive background of Bitcoin and the preliminary on security. Second, the major security threats and countermeasures of Bitcoin are investigated. We analyze the risk of double-spending attacks, evaluate the probability of success in performing the attacks and derive the profitability for the attacker to perform such attacks. Third, we analyze the underlying Bitcoin peer-to-peer network security risks and Bitcoin storage security. We compare three types of Bitcoin wallets in terms of security, type of services and their trade-offs. Finally, we discuss the security and privacy features of alternative cryptocurrencies and present an overview of emerging technologies today. Our results can help Bitcoin users to determine a trade-off between the risk of double-spending attempts and the transaction time delay or confidence before accepting transactions. These results can also assist miners to develop suitable strategies to get involved in the mining process and maximize their profits.
Research in digital cash dates back to the early 1980s @cite_18 . In 1990, DigiCash Inc., an electronic cash corporation, made an initial attempt to provide a cryptocurrency system @cite_75 . DigiCash transactions involved cryptographic protocols and aimed at providing its users with anonymity. However, despite its initial appeal, it failed in 2000 as the Internet bubble burst. David Chaum, its founder, attributed the failure to DigiCash's technology arriving before e-commerce had matured on the Internet. Another contributing factor was its reliance on the cooperation of banks to process transactions, which made DigiCash a centralized system.
{ "cite_N": [ "@cite_18", "@cite_75" ], "mid": [ "1601001795", "1535861450" ], "abstract": [ "Automation of the way we pay for goods and services is already underway, as can be seen by the variety and growth of electronic banking services available to consumers. The ultimate structure of the new electronic payments system may have a substantial impact on personal privacy as well as on the nature and extent of criminal use of payments. Ideally a new payments system should address both of these seemingly conflicting sets of concerns.", "An electronic cash protocol including the steps of using a one-way function f1 (x) to generate an image f1 (x1) from a preimage x1 ; sending the image f1 (x1) in an unblinded form to a second party; and receiving from the second party a note including a digital signature, wherein the note represents a commitment by the second party to credit a predetermined amount of money to a first presenter of the preimage x1 to the second party." ] }
1904.11435
2940760052
A cryptocurrency is a decentralized digital currency that is designed for secure and private asset transfer and storage. As a currency, it should be difficult to counterfeit and double-spend. In this paper, we review and analyze the major security and privacy issues of Bitcoin. In particular, we focus on its underlying foundation, blockchain technology. First, we present a comprehensive background of Bitcoin and the preliminary on security. Second, the major security threats and countermeasures of Bitcoin are investigated. We analyze the risk of double-spending attacks, evaluate the probability of success in performing the attacks and derive the profitability for the attacker to perform such attacks. Third, we analyze the underlying Bitcoin peer-to-peer network security risks and Bitcoin storage security. We compare three types of Bitcoin wallets in terms of security, type of services and their trade-offs. Finally, we discuss the security and privacy features of alternative cryptocurrencies and present an overview of emerging technologies today. Our results can help Bitcoin users to determine a trade-off between the risk of double-spending attempts and the transaction time delay or confidence before accepting transactions. These results can also assist miners to develop suitable strategies to get involved in the mining process and maximize their profits.
In the early 2000s, Digital Gold Currency (DGC), a currency backed by gold, gained some popularity. DGC is considered to be a second-generation digital currency. It was issued by companies that enabled users to pay each other in units equivalent to gold bullion. Examples include iGolder, gbullion, and e-Gold. Although DGC seemed to have a bright future, it lost popularity due to its centralized structure. Politics may also have played a role in its decline: companies that provided DGC were forced to shut down by the federal government due to their inability to comply with government regulations @cite_41 .
{ "cite_N": [ "@cite_41" ], "mid": [ "2409619325" ], "abstract": [ "This comprehensive source of information about financial fraud delivers a mature approach to fraud detection and prevention. It brings together all important aspect of analytics used in investigating modern crime in financial markets and uses R for its statistical examples. It focuses on crime in financial markets as opposed to the financial industry, and it highlights technical aspects of crime detection and prevention as opposed to their qualitative aspects. For those with strong analytic skills, this book unleashes the usefulness of powerful predictive and prescriptive analytics in predicting and preventing modern crime in financial markets. Interviews and case studies provide context and depth to examples Case studies use R, the powerful statistical freeware tool Useful in classroom and professional contexts" ] }
1904.11435
2940760052
A cryptocurrency is a decentralized digital currency that is designed for secure and private asset transfer and storage. As a currency, it should be difficult to counterfeit and double-spend. In this paper, we review and analyze the major security and privacy issues of Bitcoin. In particular, we focus on its underlying foundation, blockchain technology. First, we present a comprehensive background of Bitcoin and the preliminary on security. Second, the major security threats and countermeasures of Bitcoin are investigated. We analyze the risk of double-spending attacks, evaluate the probability of success in performing the attacks and derive the profitability for the attacker to perform such attacks. Third, we analyze the underlying Bitcoin peer-to-peer network security risks and Bitcoin storage security. We compare three types of Bitcoin wallets in terms of security, type of services and their trade-offs. Finally, we discuss the security and privacy features of alternative cryptocurrencies and present an overview of emerging technologies today. Our results can help Bitcoin users to determine a trade-off between the risk of double-spending attempts and the transaction time delay or confidence before accepting transactions. These results can also assist miners to develop suitable strategies to get involved in the mining process and maximize their profits.
In 1991, the first secure blockchain was proposed by Stuart Haber and W. Scott Stornetta @cite_87 . Their blockchain aimed to certify the creation or modification of a digital record by digitally time-stamping it. However, this design was inefficient, since each record was time-stamped independently. To improve efficiency, Merkle trees @cite_95 were incorporated into blockchains in 1992 @cite_49 , allowing multiple digital records to be aggregated into a single block. Finally, Satoshi Nakamoto implemented the first real blockchain and used it as the core technology of the Bitcoin cryptocurrency system.
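The efficiency gain from Merkle trees can be sketched in a few lines: hashing records pairwise up to a single root lets one block commit to many records at once. The sketch below is illustrative only and does not reproduce Bitcoin's exact serialization or tree rules:

```python
# Illustrative Merkle-root sketch (not Bitcoin's exact format): one
# root hash commits to arbitrarily many records in a single block.
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(records):
    """Hash each record, then pairwise-hash levels up to a single root."""
    level = [sha256(r) for r in records]
    if not level:
        return sha256(b"")
    while len(level) > 1:
        if len(level) % 2 == 1:          # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"record-1", b"record-2", b"record-3"])
```

Time-stamping the single `root` then covers every record in the block, instead of time-stamping each record independently.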
{ "cite_N": [ "@cite_95", "@cite_49", "@cite_87" ], "mid": [ "319917506", "1572490578", "1997911919" ], "abstract": [ "", "To establish that a document was created after a given moment in time, it is necessary to report events that could not have been predicted before they happened. To establish that a document was created before a given moment in time, it is necessary to cause an event based on the document, which can be observed by others. Cryptographic hash functions can be used both to report events succinctly, and to cause events based on documents without revealing their contents. Haber and Stornetta have proposed two schemes for digital time-stamping which rely on these principles [HaSt 91].", "The prospect of a world in which all text, audio, picture, and video documents are in digital form on easily modifiable media raises the issue of how to certify when a document was created or last changed. The problem is to time-stamp the data, not the medium. We propose computationally practical procedures for digital time-stamping of such documents so that it is infeasible for a user either to back-date or to forward-date his document, even with the collusion of a time-stamping service. Our procedures maintain complete privacy of the documents themselves, and require no record-keeping by the time-stamping service." ] }
1904.11492
2941956444
The Non-Local Network (NLNet) presents a pioneering approach for capturing long-range dependencies, via aggregating query-specific global context to each query position. However, through a rigorous empirical analysis, we have found that the global contexts modeled by non-local network are almost the same for different query positions within an image. In this paper, we take advantage of this finding to create a simplified network based on a query-independent formulation, which maintains the accuracy of NLNet but with significantly less computation. We further observe that this simplified design shares similar structure with Squeeze-Excitation Network (SENet). Hence we unify them into a three-step general framework for global context modeling. Within the general framework, we design a better instantiation, called the global context (GC) block, which is lightweight and can effectively model the global context. The lightweight property allows us to apply it for multiple layers in a backbone network to construct a global context network (GCNet), which generally outperforms both simplified NLNet and SENet on major benchmarks for various recognition tasks. The code and configurations are released at this https URL.
Self-attention mechanisms have recently been applied successfully to various tasks, such as machine translation @cite_42 @cite_35 @cite_17 , graph embedding @cite_30 , generative modeling @cite_38 , and visual recognition @cite_26 @cite_8 @cite_16 @cite_2 . @cite_17 is one of the first attempts to apply a self-attention mechanism to model long-range dependencies in machine translation. @cite_8 extends self-attention to model the relations between objects in object detection. NLNet @cite_16 adopts self-attention to model pixel-level pairwise relations. CCNet @cite_24 accelerates NLNet by stacking two criss-cross blocks, and is applied to semantic segmentation. However, NLNet in fact learns nearly query-independent attention maps for each query position, so much of the computation spent modeling pixel-level pairwise relations is wasted.
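The query-independence finding can be illustrated with a toy NumPy sketch; the shapes and the key projection below are illustrative, not the papers' exact blocks. An NLNet-style block computes one attention map per query position, while the query-independent simplification pools a single global attention map shared by all positions:

```python
# Toy contrast between NLNet-style query-specific context and a
# query-independent global context (shapes/projections illustrative).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

C, N = 4, 6                        # channels, flattened spatial positions
x = np.random.randn(C, N)          # feature map
wk = np.random.randn(1, C)         # assumed 1 x C key projection

# NLNet-style: one N-dim attention map per query position (N x N cost).
attn_per_query = softmax(x.T @ x, axis=-1)    # (N, N)
context_nl = x @ attn_per_query.T             # (C, N): a context per query

# Query-independent: a single attention map shared by all queries (O(N)).
attn_global = softmax(wk @ x, axis=-1)        # (1, N)
context_gc = x @ attn_global.T                # (C, 1): one shared context
```

If the per-query attention maps are nearly identical, `context_nl` carries little more information than the single pooled `context_gc`, at N times the cost.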
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_38", "@cite_26", "@cite_8", "@cite_42", "@cite_24", "@cite_2", "@cite_16", "@cite_17" ], "mid": [ "2766453196", "2964265128", "2950893734", "2963495494", "2964080601", "2950855294", "2902930830", "2890782586", "", "2963403868" ], "abstract": [ "We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations. By stacking layers in which nodes are able to attend over their neighborhoods' features, we enable (implicitly) specifying different weights to different nodes in a neighborhood, without requiring any kind of costly matrix operation (such as inversion) or depending on knowing the graph structure upfront. In this way, we address several key challenges of spectral-based graph neural networks simultaneously, and make our model readily applicable to inductive as well as transductive problems. Our GAT models have achieved or matched state-of-the-art results across four established transductive and inductive graph benchmarks: the Cora, Citeseer and Pubmed citation network datasets, as well as a protein-protein interaction dataset (wherein test graphs remain unseen during training).", "The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training to better exploit the GPU hardware and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. 
We outperform the accuracy of the deep LSTM setup of (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.*", "In this paper, we propose the Self-Attention Generative Adversarial Network (SAGAN) which allows attention-driven, long-range dependency modeling for image generation tasks. Traditional convolutional GANs generate high-resolution details as a function of only spatially local points in lower-resolution feature maps. In SAGAN, details can be generated using cues from all feature locations. Moreover, the discriminator can check that highly detailed features in distant portions of the image are consistent with each other. Furthermore, recent work has shown that generator conditioning affects GAN performance. Leveraging this insight, we apply spectral normalization to the GAN generator and find that this improves training dynamics. The proposed SAGAN achieves the state-of-the-art results, boosting the best published Inception score from 36.8 to 52.52 and reducing Frechet Inception distance from 27.62 to 18.65 on the challenging ImageNet dataset. Visualization of the attention layers shows that the generator leverages neighborhoods that correspond to object shapes rather than local regions of fixed shape.", "In this work, we propose Residual Attention Network, a convolutional neural network using attention mechanism which can incorporate with state-of-art feed forward network architecture in an end-to-end training fashion. Our Residual Attention Network is built by stacking Attention Modules which generate attention-aware features. The attention-aware features from different modules change adaptively as layers going deeper. Inside each Attention Module, bottom-up top-down feedforward structure is used to unfold the feedforward and feedback attention process into a single feedforward process. 
Importantly, we propose attention residual learning to train very deep Residual Attention Networks which can be easily scaled up to hundreds of layers. Extensive analyses are conducted on CIFAR-10 and CIFAR-100 datasets to verify the effectiveness of every module mentioned above. Our Residual Attention Network achieves state-of-the-art object recognition performance on three benchmark datasets including CIFAR-10 (3.90% error), CIFAR-100 (20.45% error) and ImageNet (4.8% single model and single crop, top-5 error). Note that, our method achieves 0.6% top-1 accuracy improvement with 46% trunk depth and 69% forward FLOPs comparing to ResNet-200. The experiment also demonstrates that our network is robust against noisy labels.", "Although it is well believed for years that modeling relations between objects would help object recognition, there has not been evidence that the idea is working in the deep learning era. All state-of-the-art object detection systems still rely on recognizing object instances individually, without exploiting their relations during learning. This work proposes an object relation module. It processes a set of objects simultaneously through interaction between their appearance feature and geometry, thus allowing modeling of their relations. It is lightweight and in-place. It does not require additional supervision and is easy to embed in existing networks. It is shown effective on improving object recognition and duplicate removal steps in the modern object detection pipeline. It verifies the efficacy of modeling object relations in CNN based detection. It gives rise to the first fully end-to-end object detector.", "The prevalent approach to neural machine translation relies on bi-directional LSTMs to encode the source sentence. In this paper we present a faster and simpler architecture based on a succession of convolutional layers. 
This allows to encode the entire source sentence simultaneously compared to recurrent networks for which computation is constrained by temporal dependencies. On WMT'16 English-Romanian translation we achieve competitive accuracy to the state-of-the-art and we outperform several recently published results on the WMT'15 English-German task. Our models obtain almost the same accuracy as a very deep LSTM setup on WMT'14 English-French translation. Our convolutional encoder speeds up CPU decoding by more than two times at the same or higher accuracy as a strong bi-directional LSTM baseline.", "Long-range dependencies can capture useful contextual information to benefit visual understanding problems. In this work, we propose a Criss-Cross Network (CCNet) for obtaining such important information through a more effective and efficient way. Concretely, for each pixel, our CCNet can harvest the contextual information of its surrounding pixels on the criss-cross path through a novel criss-cross attention module. By taking a further recurrent operation, each pixel can finally capture the long-range dependencies from all pixels. Overall, our CCNet is with the following merits: 1) GPU memory friendly. Compared with the non-local block, the recurrent criss-cross attention module requires @math less GPU memory usage. 2) High computational efficiency. The recurrent criss-cross attention significantly reduces FLOPs by about 85% of the non-local block in computing long-range dependencies. 3) The state-of-the-art performance. We conduct extensive experiments on popular semantic segmentation benchmarks including Cityscapes, ADE20K, and instance segmentation benchmark COCO. In particular, our CCNet achieves the mIoU score of 81.4 and 45.22 on Cityscapes test set and ADE20K validation set, respectively, which are the new state-of-the-art results. 
We make the code publicly available at this https URL .", "In this paper, we address the problem of scene parsing with deep learning and focus on the context aggregation strategy for robust segmentation. Motivated by that the label of a pixel is the category of the object that the pixel belongs to, we introduce an scheme, which represents each pixel by exploiting the set of pixels that belong to the same object category with such a pixel, and we call the set of pixels as object context. Our implementation, inspired by the self-attention approach, consists of two steps: (i) compute the similarities between each pixel and all the pixels, forming a so-called object context map for each pixel served as a surrogate for the true object context, and (ii) represent the pixel by aggregating the features of all the pixels weighted by the similarities. The resulting representation is more robust compared to existing context aggregation schemes, e.g., pyramid pooling modules (PPM) in PSPNet and atrous spatial pyramid pooling (ASPP), which do not differentiate the context pixels belonging to the same object category or not, making the reliability of contextually aggregated representations limited. We empirically demonstrate our approach and two pyramid extensions with state-of-the-art performance on three semantic segmentation benchmarks: Cityscapes, ADE20K and LIP. Code has been made available at: this https URL.", "", "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. 
Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.0 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature." ] }
1904.11492
2941956444
The Non-Local Network (NLNet) presents a pioneering approach for capturing long-range dependencies, via aggregating query-specific global context to each query position. However, through a rigorous empirical analysis, we have found that the global contexts modeled by non-local network are almost the same for different query positions within an image. In this paper, we take advantage of this finding to create a simplified network based on a query-independent formulation, which maintains the accuracy of NLNet but with significantly less computation. We further observe that this simplified design shares similar structure with Squeeze-Excitation Network (SENet). Hence we unify them into a three-step general framework for global context modeling. Within the general framework, we design a better instantiation, called the global context (GC) block, which is lightweight and can effectively model the global context. The lightweight property allows us to apply it for multiple layers in a backbone network to construct a global context network (GCNet), which generally outperforms both simplified NLNet and SENet on major benchmarks for various recognition tasks. The code and configurations are released at this https URL.
To model global context features, SENet @cite_29 , GENet @cite_23 , and PSANet @cite_32 rescale different channels to recalibrate the channel dependency with global context. CBAM @cite_36 recalibrates the importance of both spatial positions and channels via rescaling. However, all these methods fuse features by rescaling, which is not expressive enough for global context modeling.
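As a rough sketch of the rescaling-based fusion these methods share, the NumPy code below implements SE-style channel recalibration; layer sizes, weights, and names are illustrative assumptions, not any paper's exact block. It global-average-pools a per-channel context, passes it through a small bottleneck, and gates each channel:

```python
# SE-style channel recalibration sketch: squeeze (global pool),
# excite (bottleneck MLP), then rescale channels. Sizes illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_rescale(x, w1, w2):
    """x: (C, H, W) feature map; w1: (C//r, C), w2: (C, C//r)."""
    s = x.mean(axis=(1, 2))            # squeeze: one context value per channel
    z = np.maximum(w1 @ s, 0.0)        # excite: bottleneck + ReLU
    gate = sigmoid(w2 @ z)             # per-channel gates in (0, 1)
    return x * gate[:, None, None]     # fuse by rescaling each channel

C, r = 8, 2                            # channels, reduction ratio
x = np.random.randn(C, 5, 5)
w1 = np.random.randn(C // r, C) * 0.1
w2 = np.random.randn(C, C // r) * 0.1
y = se_rescale(x, w1, w2)
```

The limitation noted above is visible here: the global context only multiplies each channel by a scalar gate, rather than being added as a feature in its own right.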
{ "cite_N": [ "@cite_36", "@cite_29", "@cite_32", "@cite_23" ], "mid": [ "2884585870", "", "2895340641", "2963984455" ], "abstract": [ "We propose Convolutional Block Attention Module (CBAM), a simple yet effective attention module for feed-forward convolutional neural networks. Given an intermediate feature map, our module sequentially infers attention maps along two separate dimensions, channel and spatial, then the attention maps are multiplied to the input feature map for adaptive feature refinement. Because CBAM is a lightweight and general module, it can be integrated into any CNN architectures seamlessly with negligible overheads and is end-to-end trainable along with base CNNs. We validate our CBAM through extensive experiments on ImageNet-1K, MS COCO detection, and VOC 2007 detection datasets. Our experiments show consistent improvements in classification and detection performances with various models, demonstrating the wide applicability of CBAM. The code and models will be publicly available.", "", "We notice information flow in convolutional neural networks is restricted inside local neighborhood regions due to the physical design of convolutional filters, which limits the overall understanding of complex scenes. In this paper, we propose the point-wise spatial attention network (PSANet) to relax the local neighborhood constraint. Each position on the feature map is connected to all the other ones through a self-adaptively learned attention mask. Moreover, information propagation in bi-direction for scene parsing is enabled. Information at other positions can be collected to help the prediction of the current position and vice versa, information at the current position can be distributed to assist the prediction of other ones. 
Our proposed approach achieves top performance on various competitive scene parsing datasets, including ADE20K, PASCAL VOC 2012 and Cityscapes, demonstrating its effectiveness and generality.", "While the use of bottom-up local operators in convolutional neural networks (CNNs) matches well some of the statistics of natural images, it may also prevent such models from capturing contextual long-range feature interactions. In this work, we propose a simple, lightweight approach for better context exploitation in CNNs. We do so by introducing a pair of operators: gather, which efficiently aggregates feature responses from a large spatial extent, and excite, which redistributes the pooled information to local features. The operators are cheap, both in terms of number of added parameters and computational complexity, and can be integrated directly in existing architectures to improve their performance. Experiments on several datasets show that gather-excite can bring benefits comparable to increasing the depth of a CNN at a fraction of the cost. For example, we find ResNet-50 with gather-excite operators is able to outperform its 101-layer counterpart on ImageNet with no additional learnable parameters. We also propose a parametric gather-excite operator pair which yields further performance gains, relate it to the recently-introduced Squeeze-and-Excitation Networks, and analyse the effects of these changes to the CNN feature activation statistics." ] }
1904.11520
2940834119
This paper presents a technique that combines the occurrence of certain events, as observed by different sensors, in order to detect and classify objects. This technique explores the extent of dependence between features being observed by the sensors, and generates more informed probability distributions over the events. Provided some additional information about the features of the object, this fusion technique can outperform other existing decision level fusion approaches that may not take into account the relationship between different features. Furthermore, this paper also addresses the issue of dealing with damaged sensors during implementation of the model, by learning a hidden space between sensor modalities which can be exploited to safeguard detection performance.
Another approach to combining information from various sources is Dempster-Shafer inference, which can assign a probability to any of the original @math objects or to a union of these objects. The knowledge of the @math sensor is summarized in its report @math , where @math and @math . Due to lack of evidence, the full probability of 1 may not be assignable to any object or union of objects, which introduces uncertainty into the report; the probability @math is therefore called the probability of uncertainty. The sensor reports @math are fused to obtain the final fused report @math . The Dempster-Shafer rule for fusion suffers from exponentially increasing complexity as @math and @math increase. Some applications of Dempster-Shafer fusion can be found in @cite_14 , where LIDAR data is combined with multi-spectral imagery, and in @cite_19 , where multi-sensor information such as vibration, sound, pressure, and temperature is fused to detect engine faults. Furthermore, @cite_5 provides a detailed comparison between Bayesian inference and Dempster-Shafer theory.
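Dempster's rule of combination for two sensor reports can be sketched as follows; the frame of discernment, mass values, and function names are illustrative assumptions, not taken from the cited systems:

```python
# Minimal sketch of Dempster's rule: each sensor report is a mass
# function over subsets of the frame of discernment, and fusion
# multiplies masses of intersecting subsets, normalizing out conflict.
from itertools import product

def combine(m1, m2):
    """Fuse two mass functions given as dicts: frozenset -> mass."""
    fused = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass falling on the empty set
    k = 1.0 - conflict                   # Dempster normalization factor
    return {s: w / k for s, w in fused.items()}

# Two sensors reporting on objects {car, truck}; residual mass on the
# whole frame encodes each sensor's probability of uncertainty.
frame = frozenset({"car", "truck"})
m1 = {frozenset({"car"}): 0.7, frame: 0.3}
m2 = {frozenset({"car"}): 0.6, frame: 0.4}
fused = combine(m1, m2)
```

The double loop over all subset pairs is also where the exponential cost noted above comes from: the number of subsets of the frame grows as 2 to the number of objects.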
{ "cite_N": [ "@cite_19", "@cite_5", "@cite_14" ], "mid": [ "1981673219", "2311097431", "2067644416" ], "abstract": [ "Engine diagnostics is a typical multi-sensor fusion problem. It involves the use of multi-sensor information such as vibration, sound, pressure and temperature, to detect and identify engine faults. From the viewpoint of evidence theory, information obtained from each sensor can be considered as a piece of evidence, and as such, multi-sensor based engine diagnosis can be viewed as a problem of evidence fusion. In this paper we investigate the use of Dempster-Shafer evidence theory as a tool for modeling and fusing multi-sensory pieces of evidence pertinent to engine quality. We present a preliminary review of Evidence Theory and explain how the multi-sensor engine diagnosis problem can be framed in the context of this theory, in terms of faults frame of discernment, mass functions and the rule for combining pieces of evidence. We introduce two new methods for enhancing the effectiveness of mass functions in modeling and combining pieces of evidence. Furthermore, we propose a rule for making rational decisions with respect to engine quality, and present a criterion to evaluate the performance of the proposed information fusion system. Finally, we report a case study to demonstrate the efficacy of this system in dealing with imprecise information cues and conflicts that may arise among the sensors.", "This paper demonstrates how Bayesian and evidential reasoning can address the same target identification problem involving multiple levels of abstraction, such as identification based on type, class, and nature. In the process of demonstrating target identification with these two reasoning methods, we compare their convergence time to a long run asymptote for a broad range of aircraft identification scenarios that include missing reports and misassociated reports. 
Our results show that probability theory can accommodate all of these issues that are present in dealing with uncertainty and that the probabilistic results converge to a solution much faster than those of evidence theory.", "A method for the classification of land cover in urban areas by the fusion of first and last pulse LIDAR data and multi-spectral images is presented. Apart from buildings, the classes \"tree\", \"grass land\", and \"bare soil\" are also distinguished by a classification method based on the theory of Dempster-Shafer for data fusion. Examples are given for a test site in Germany." ] }
1904.11395
2941959975
We consider the problem of transforming a given graph @math into a desired graph @math by applying a minimum number primitives from a particular set of local graph transformation primitives. These primitives are local in the sense that each node can apply them based on local knowledge and by affecting only its @math -neighborhood. Although the specific set of primitives we consider makes it possible to transform any (weakly) connected graph into any other (weakly) connected graph consisting of the same nodes, they cannot disconnect the graph or introduce new nodes into the graph, making them ideal in the context of supervised overlay network transformations. We prove that computing a minimum sequence of primitive applications (even centralized) for arbitrary @math and @math is NP-hard, which we conjecture to hold for any set of local graph transformation primitives satisfying the aforementioned properties. On the other hand, we show that this problem admits a polynomial time algorithm with a constant approximation ratio.
Our approximation algorithms use an approximation algorithm for the Undirected Steiner Forest Problem as a black box (the problem is also known as the Steiner Subgraph Problem with edge sharing, or, in generalizations, the Survivable Network Design Problem or the Generalized Steiner Problem). 2-approximations for this problem were first given by Agrawal, Klein, and Ravi @cite_23 and by Goemans and Williamson @cite_6 , and later also by Jain @cite_13 . Gupta and Kumar @cite_11 showed that a simple greedy algorithm has a constant approximation ratio, and recently Groß et al. @cite_14 presented a local-search constant-factor approximation for Steiner Forest.
{ "cite_N": [ "@cite_14", "@cite_6", "@cite_23", "@cite_13", "@cite_11" ], "mid": [ "2963996640", "2001139415", "2011823863", "2032349632", "2120899253" ], "abstract": [ "In the Steiner Forest problem, we are given a graph and a collection of source-sink pairs, and the goal is to find a subgraph of minimum total length such that all pairs are connected. The problem is APX-Hard and can be 2-approximated by, e.g., the elegant primal-dual algorithm of Agrawal, Klein, and Ravi from 1995. We give a local-search-based constant-factor approximation for the problem. Local search brings in new techniques to an area that has for long not seen any improvements and might be a step towards a combinatorial algorithm for the more general survivable network design problem. Moreover, local search was an essential tool to tackle the dynamic MST Steiner Tree problem, whereas dynamic Steiner Forest is still wide open. It is easy to see that any constant factor local search algorithm requires steps that add drop many edges together. We propose natural local moves which, at each step, either (a) add a shortest path in the current graph and then drop a bunch of inessential edges, or (b) add a set of edges to the current solution. This second type of moves is motivated by the potential function we use to measure progress, combining the cost of the solution with a penalty for each connected component. Our carefully-chosen local moves and potential function work in tandem to eliminate bad local minima that arise when using more traditional local moves. Our analysis first considers the case where the local optimum is a single tree, and shows optimality w.r.t. moves that add a single edge (and drop a set of edges) is enough to bound the locality gap. For the general case, we show how to \"project\" the optimal solution onto the different trees of the local optimum without incurring too much cost (and this argument uses optimality w.r.t. 
both kinds of moves), followed by a tree-by-tree argument. We hope both the potential function, and our analysis techniques will be useful to develop and analyze local-search algorithms in other contexts.", "We present a general approximation technique for a large class of graph problems. Our technique mostly applies to problems of covering, at minimum cost, the vertices of a graph with trees, cycles or paths satisfying certain requirements. In particular, many basic combinatorial optimization problems fit in this framework, including the shortest path, minimum-cost spanning tree, minimum-weight perfect matching, traveling salesman and Steiner tree problems. Our technique produces approximation algorithms that run in @math time and come within a factor of 2 of optimal for most of these problems. For instance, we obtain a 2-approximation algorithm for the minimum-weight perfect matching problem under the triangle inequality. Our running time of @math time compares favorably with the best strongly polynomial exact algorithms running in @math time for dense graphs. A similar result is obtained for the 2-matching problem and its variants. We also derive the first approximation algorithms for many NP-complete problems, including the non-fixed point-to-point connection problem, the exact path partitioning problem and complex location-design problems. Moreover, for the prize-collecting traveling salesman or Steiner tree problems, we obtain 2-approximation algorithms, therefore improving the previously best-known performance guarantees of 2.5 and 3, respectively [Math. Programming, 59 (1993), pp. 413--420].", "We give the first approximation algorithm for the generalized network Steiner problem, a problem in network design. An instance consists of a network with link-costs and, for each pair @math of nodes, an edge-connectivity requirement @math . The goal is to find a minimum-cost network using the available links and satisfying the requirements. 
Our algorithm outputs a solution whose cost is within @math of optimal, where @math is the highest requirement value. In the course of proving the performance guarantee, we prove a combinatorial min-max approximate equality relating minimum-cost networks to maximum packings of certain kinds of cuts. As a consequence of the proof of this theorem, we obtain an approximation algorithm for optimally packing these cuts; we show that this algorithm has application to estimating the reliability of a probabilistic network.", "We study a network creation game recently proposed by Fabrikant, Luthra, Maneva, Papadimitriou and Shenker. In this game, each player (vertex) can create links (edges) to other players at a cost of α per edge. The goal of every player is to minimize the sum consisting of (a) the cost of the links he has created and (b) the sum of the distances to all other players. conjectured that there exists a constant A such that, for any α > A, all non-transient Nash equilibria graphs are trees. They showed that if a Nash equilibrium is a tree, the price of anarchy is constant. In this paper we disprove the tree conjecture. More precisely, we show that for any positive integer n 0 , there exists a graph built by n ≥ n 0 players which contains cycles and forms a non-transient Nash equilibrium, for any α with 1 < α ≤ √n 2. Our construction makes use of some interesting results on finite affine planes. On the other hand we show that, for α ≥ 12n[log n], every Nash equilibrium forms a tree.Without relying on the tree conjecture, proved an upper bound on the price of anarchy of O(√α), where α ∈ [2, n2]. We improve this bound. Specifically, we derive a constant upper bound for α ∈ O(√n) and for α ≥ 12n[log n]. 
For the intermediate values we derive an improved bound of O(1 + (min{α²/n, n²/α})^{1/3}). Additionally, we develop characterizations of Nash equilibria and extend our results to a weighted network creation game as well as to scenarios with cost sharing.", "In the Steiner Forest problem, we are given terminal pairs (sᵢ, tᵢ), and need to find the cheapest subgraph which connects each of the terminal pairs together. In 1991, Agrawal, Klein, and Ravi gave a primal-dual constant-factor approximation algorithm for this problem. Until this work, the only constant-factor approximations we know are via linear programming relaxations. In this paper, we consider the following greedy algorithm: Given terminal pairs in a metric space, a terminal is active if its distance to its partner is non-zero. Pick the two closest active terminals (say sᵢ, tⱼ), set the distance between them to zero, and buy a path connecting them. Recompute the metric, and repeat. It has long been open to analyze this greedy algorithm. Our main result shows that this algorithm is a constant-factor approximation. We use this algorithm to give new, simpler constructions of cost-sharing schemes for Steiner forest. In particular, the first \"group-strict\" cost-shares for this problem implies a very simple combinatorial sampling-based algorithm for stochastic Steiner forest." ] }
1904.11432
2940905688
Current systems used by medical institutions for the management and transfer of Electronic Medical Records (EMR) can be vulnerable to security and privacy threats. In addition, these centralized systems often lack interoperability and give patients limited or no access to their own EMRs. In this paper, we propose a novel distributed data sharing scheme that applies the security benefits of blockchain to handle these concerns. With blockchain, we incorporate smart contracts and a distributed storage system to alleviate the dependence on the record-generating institutions to manage and share patient records. To preserve privacy of patient records, we implement our smart contracts as a method to allow patients to verify attributes prior to granting access rights. Our proposed scheme also facilitates selective sharing of medical records among staff members that belong to different levels of a hierarchical institution. We provide extensive security, privacy, and evaluation analyses to show that our proposed scheme is both efficient and practical.
In 2015, a decentralized data management scheme was introduced that facilitated access-control management over a blockchain @cite_28 . In this system, the actual data records are stored in off-blockchain storage, while pointers to these records are maintained in a key-value store on the blockchain. This design reduces the amount of data processed on the blockchain. However, the method used to define access policies in this scheme does not consider hierarchical data sharing. A user who wants to share files selectively among multiple users in a hierarchy must define a separate access policy for each of them, which becomes increasingly complex as the number of users grows.
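The split this paragraph describes, records in off-blockchain storage with only content-addressed pointers and per-record access policies kept on-chain, can be sketched in a few lines of plain Python. All class and method names below are illustrative, not taken from @cite_28 ; an ordinary in-memory object stands in for the blockchain. The flat per-record `grant()` also shows why hierarchical sharing scales poorly: every (user, record) pair needs its own policy entry.

```python
import hashlib

class OffChainStore:
    """Stand-in for the off-blockchain storage layer (names are hypothetical)."""
    def __init__(self):
        self._data = {}

    def put(self, record: bytes) -> str:
        # Content-addressed pointer: only this hash goes on-chain.
        pointer = hashlib.sha256(record).hexdigest()
        self._data[pointer] = record
        return pointer

    def get(self, pointer: str) -> bytes:
        return self._data[pointer]

class AccessControlLedger:
    """Stand-in for the on-chain ledger: holds pointers and access policies only."""
    def __init__(self):
        self._policies = {}  # pointer -> set of authorized user ids

    def register(self, owner: str, pointer: str):
        self._policies[pointer] = {owner}

    def grant(self, granter: str, grantee: str, pointer: str):
        # Only a user already authorized for the record may extend access.
        if granter in self._policies.get(pointer, set()):
            self._policies[pointer].add(grantee)

    def can_read(self, user: str, pointer: str) -> bool:
        return user in self._policies.get(pointer, set())

# Usage: the patient stores a record off-chain and grants one clinician access.
store = OffChainStore()
ledger = AccessControlLedger()
ptr = store.put(b"EMR: blood panel 2015-03-01")
ledger.register("patient-1", ptr)
ledger.grant("patient-1", "dr-smith", ptr)
assert ledger.can_read("dr-smith", ptr)
assert not ledger.can_read("stranger", ptr)
```

Note that sharing with a whole department under this model means one `grant()` per staff member per record, which is exactly the hierarchy problem the paragraph points out.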
{ "cite_N": [ "@cite_28" ], "mid": [ "1559136758" ], "abstract": [ "The recent increase in reported incidents of surveillance and security breaches compromising users' privacy call into question the current model, in which third-parties collect and control massive amounts of personal data. Bit coin has demonstrated in the financial space that trusted, auditable computing is possible using a decentralized network of peers accompanied by a public ledger. In this paper, we describe a decentralized personal data management system that ensures users own and control their data. We implement a protocol that turns a block chain into an automated access-control manager that does not require trust in a third party. Unlike Bit coin, transactions in our system are not strictly financial -- they are used to carry instructions, such as storing, querying and sharing data. Finally, we discuss possible future extensions to block chains that could harness them into a well-rounded solution for trusted computing problems in society." ] }
1904.11432
2940905688
Current systems used by medical institutions for the management and transfer of Electronic Medical Records (EMR) can be vulnerable to security and privacy threats. In addition, these centralized systems often lack interoperability and give patients limited or no access to their own EMRs. In this paper, we propose a novel distributed data sharing scheme that applies the security benefits of blockchain to handle these concerns. With blockchain, we incorporate smart contracts and a distributed storage system to alleviate the dependence on the record-generating institutions to manage and share patient records. To preserve privacy of patient records, we implement our smart contracts as a method to allow patients to verify attributes prior to granting access rights. Our proposed scheme also facilitates selective sharing of medical records among staff members that belong to different levels of a hierarchical institution. We provide extensive security, privacy, and evaluation analyses to show that our proposed scheme is both efficient and practical.
In 2016, MedRec @cite_30 , the first functional electronic medical record-sharing system built on concepts from @cite_28 , was introduced. This work builds on three Ethereum @cite_24 smart contracts that manage authentication, confidentiality, and accountability during the data sharing process. In this system, the primary entities involved in maintaining the blockchain are the parties interested in obtaining data, such as researchers and public health authorities. In return, these entities are rewarded with access to aggregated, anonymized data. However, the success of such a system depends on the participation of entities that maintain the system in return for data. In addition, like @cite_28 , MedRec does not consider hierarchical data sharing.
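MedRec's three-contract division of labor (authentication, confidentiality, accountability) can be caricatured with ordinary Python classes. This is a loose sketch of the idea, not MedRec's actual Ethereum interface; every class, method, and identifier below is illustrative. One object authenticates identities, one records a patient-provider relationship together with pointers into the provider's local storage, and one gives the patient an index over all such relationships.

```python
class RegistrarContract:
    """Maps external identities to on-chain addresses (authentication)."""
    def __init__(self):
        self._ids = {}

    def register(self, identity: str, address: str):
        self._ids.setdefault(identity, address)

    def resolve(self, identity: str) -> str:
        return self._ids[identity]

class RelationshipContract:
    """One per patient-provider pair: record pointers plus viewing rights
    (confidentiality)."""
    def __init__(self, patient: str, provider: str):
        self.patient, self.provider = patient, provider
        self.pointers = []                 # references into provider-local storage
        self.viewers = {patient, provider}

    def add_record(self, caller: str, pointer: str):
        # Only the record-generating provider may append new pointers.
        if caller == self.provider:
            self.pointers.append(pointer)

class SummaryContract:
    """Per-patient index of all relationship contracts (accountability)."""
    def __init__(self, patient: str):
        self.patient = patient
        self.relationships = []

    def link(self, rel: RelationshipContract):
        if rel.patient == self.patient:
            self.relationships.append(rel)

# Usage: the patient sees records across providers through the summary index.
reg = RegistrarContract()
reg.register("alice", "0xA1")
rel = RelationshipContract("alice", "clinic-1")
rel.add_record("clinic-1", "ptr-001")
summary = SummaryContract("alice")
summary.link(rel)
assert [p for r in summary.relationships for p in r.pointers] == ["ptr-001"]
```

As in MedRec, the actual medical data never enters these objects; only pointers and permissions do, with the records remaining in the providers' local storage.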
{ "cite_N": [ "@cite_30", "@cite_28", "@cite_24" ], "mid": [ "2522448907", "1559136758", "" ], "abstract": [ "Years of heavy regulation and bureaucratic inefficiency have slowed innovation for electronic medical records (EMRs). We now face a critical need for such innovation, as personalization and data science prompt patients to engage in the details of their healthcare and restore agency over their medical data. In this paper, we propose MedRec: a novel, decentralized record management system to handle EMRs, using blockchain technology. Our system gives patients a comprehensive, immutable log and easy access to their medical information across providers and treatment sites. Leveraging unique blockchain properties, MedRec manages authentication, confidentiality, accountability and data sharing -- crucial considerations when handling sensitive information. A modular design integrates with providers' existing, local data storage solutions, facilitating interoperability and making our system convenient and adaptable. We incentivize medical stakeholders (researchers, public health authorities, etc.) to participate in the network as blockchain &#x0022;miners&#x0022;. This provides them with access to aggregate, anonymized data as mining rewards, in return for sustaining and securing the network via Proof of Work. MedRec thus enables the emergence of data economics, supplying big data to empower researchers while engaging patients and providers in the choice to release metadata. The purpose of this short paper is to expose, prior to field tests, a working prototype through which we analyze and discuss our approach.", "The recent increase in reported incidents of surveillance and security breaches compromising users' privacy call into question the current model, in which third-parties collect and control massive amounts of personal data. 
Bitcoin has demonstrated in the financial space that trusted, auditable computing is possible using a decentralized network of peers accompanied by a public ledger. In this paper, we describe a decentralized personal data management system that ensures users own and control their data. We implement a protocol that turns a blockchain into an automated access-control manager that does not require trust in a third party. Unlike Bitcoin, transactions in our system are not strictly financial -- they are used to carry instructions, such as storing, querying and sharing data. Finally, we discuss possible future extensions to blockchains that could harness them into a well-rounded solution for trusted computing problems in society.", "" ] }
1904.11432
2940905688
Current systems used by medical institutions for the management and transfer of Electronic Medical Records (EMR) can be vulnerable to security and privacy threats. In addition, these centralized systems often lack interoperability and give patients limited or no access to their own EMRs. In this paper, we propose a novel distributed data sharing scheme that applies the security benefits of blockchain to handle these concerns. With blockchain, we incorporate smart contracts and a distributed storage system to alleviate the dependence on the record-generating institutions to manage and share patient records. To preserve privacy of patient records, we implement our smart contracts as a method to allow patients to verify attributes prior to granting access rights. Our proposed scheme also facilitates selective sharing of medical records among staff members that belong to different levels of a hierarchical institution. We provide extensive security, privacy, and evaluation analyses to show that our proposed scheme is both efficient and practical.
In 2017, another functioning electronic medical record-sharing scheme was presented to provide a secure solution using the blockchain @cite_1 . The system uses a cloud-based storage system to store the medical records. With centralized storage, the system becomes vulnerable to a single point of failure. Similar to MedRec, this work builds on a private blockchain, that is, a monitored blockchain in which each node involved in maintaining consensus is known. Studies such as @cite_26 and @cite_22 present possible methods to evaluate private blockchains along with potential concerns and vulnerabilities. They show that there is a trade-off between performance and security.
{ "cite_N": [ "@cite_26", "@cite_1", "@cite_22" ], "mid": [ "2963507231", "2964119695", "2604122668" ], "abstract": [ "Blockchain technologies are gaining massive momentum in the last few years. Blockchains are distributed ledgers that enable parties who do not fully trust each other to maintain a set of global states. The parties agree on the existence, values, and histories of the states. As the technology landscape is expanding rapidly, it is both important and challenging to have a firm grasp of what the core technologies have to offer, especially with respect to their data processing capabilities. In this paper, we first survey the state of the art, focusing on private blockchains (in which parties are authenticated). We analyze both in-production and research systems in four dimensions: distributed ledger, cryptography, consensus protocol, and smart contract. We then present BLOCKBENCH, a benchmarking framework for understanding performance of private blockchains against data processing workloads. We conduct a comprehensive evaluation of three major blockchain systems based on BLOCKBENCH, namely Ethereum, Parity, and Hyperledger Fabric. The results demonstrate several trade-offs in the design space, as well as big performance gaps between blockchain and database systems. Drawing from design principles of database systems, we discuss several research directions for bringing blockchain performance closer to the realm of databases.", "", "Blockchain technologies are taking the world by storm. Public blockchains, such as Bitcoin and Ethereum, enable secure peer-to-peer applications like crypto-currency or smart contracts. Their security and performance are well studied. This paper concerns recent private blockchain systems designed with stronger security (trust) assumption and performance requirement. 
These systems target and aim to disrupt applications which have so far been implemented on top of database systems, for example banking, finance and trading applications. Multiple platforms for private blockchains are being actively developed and fine tuned. However, there is a clear lack of a systematic framework with which different systems can be analyzed and compared against each other. Such a framework can be used to assess blockchains' viability as another distributed data processing platform, while helping developers to identify bottlenecks and accordingly improve their platforms. In this paper, we first describe BLOCKBENCH, the first evaluation framework for analyzing private blockchains. It serves as a fair means of comparison for different platforms and enables deeper understanding of different system design choices. Any private blockchain can be integrated to BLOCKBENCH via simple APIs and benchmarked against workloads that are based on real and synthetic smart contracts. BLOCKBENCH measures overall and component-wise performance in terms of throughput, latency, scalability and fault-tolerance. Next, we use BLOCKBENCH to conduct comprehensive evaluation of three major private blockchains: Ethereum, Parity and Hyperledger Fabric. The results demonstrate that these systems are still far from displacing current database systems in traditional data processing workloads. Furthermore, there are gaps in performance among the three systems which are attributed to the design choices at different layers of the blockchain's software stack. We have released BLOCKBENCH for public use." ] }
1904.11280
2941258974
Studies have identified various risk factors associated with the onset of stroke in an individual. Data mining techniques have been used to predict the occurrence of stroke based on these factors by using patients' medical records. However, there has been limited use of electronic health records to study the inter-dependency of different risk factors of stroke. In this paper, we perform an analysis of patients' electronic health records to identify the impact of risk factors on stroke prediction. We also provide benchmark performance of the state-of-art machine learning algorithms for predicting stroke using electronic health records.
There are several works in the literature that use machine learning techniques on electronic health records to predict the probability of stroke occurrence. @cite_7 identified that there is a direct relationship between the total count of risk factors and the probability of stroke occurrence. A regression approach was recommended to statistically test the association between a risk factor and its effect. Hanifa and Raja @cite_8 achieved an improved accuracy for predicting stroke risk using radial basis and polynomial kernel functions in a non-linear support vector classification model. At the same time, studies have indicated that redundant attributes and/or attributes totally irrelevant to a class should be identified and removed before the use of a classification algorithm @cite_2 . Systematic analysis of input features has been performed for modelling the response variable in areas other than healthcare -- color analysis of ground-based sky cloud images @cite_4 , weather recordings for rainfall detection @cite_0 , etc.
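The finding attributed to @cite_7 , that stroke probability rises with the total count of risk factors, can be checked on any record set with a few lines of Python. The toy records and field names below are invented for illustration only; they are not drawn from the International Stroke Trial database or any real EHR schema.

```python
from collections import defaultdict

# Toy records: (risk-factor flags, stroke outcome). All values are synthetic.
records = [
    ({"hypertension": 1, "smoking": 0, "diabetes": 0}, 0),
    ({"hypertension": 1, "smoking": 1, "diabetes": 0}, 0),
    ({"hypertension": 1, "smoking": 1, "diabetes": 0}, 1),
    ({"hypertension": 1, "smoking": 1, "diabetes": 1}, 1),
    ({"hypertension": 0, "smoking": 0, "diabetes": 0}, 0),
    ({"hypertension": 0, "smoking": 1, "diabetes": 0}, 0),
    ({"hypertension": 1, "smoking": 0, "diabetes": 1}, 1),
    ({"hypertension": 0, "smoking": 0, "diabetes": 1}, 0),
]

def stroke_rate_by_count(data):
    """Empirical stroke rate stratified by the number of active risk factors."""
    totals, hits = defaultdict(int), defaultdict(int)
    for factors, outcome in data:
        k = sum(factors.values())
        totals[k] += 1
        hits[k] += outcome
    return {k: hits[k] / totals[k] for k in sorted(totals)}

rates = stroke_rate_by_count(records)
print(rates)  # on this toy data, the rate rises with the factor count
```

A regression on the count (or on the individual flags), as @cite_7 recommends, would then quantify whether the monotone trend is statistically significant.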
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_8", "@cite_0", "@cite_2" ], "mid": [ "2040245781", "2738394521", "", "2963631414", "2032991527" ], "abstract": [ "Sky cloud imaging using ground-based Whole Sky Imagers (WSI) is a cost-effective means to understanding cloud cover and weather patterns. The accurate segmentation of clouds in these images is a challenging task, as clouds do not possess any clear structure. Several algorithms using different color models have been proposed in the literature. This paper presents a systematic approach for the selection of color spaces and components for optimal segmentation of sky cloud images. Using mainly principal component analysis (PCA) and fuzzy clustering for evaluation, we identify the most suitable color components for this task.", "Early diagnosis of stroke is essential for timely prevention and treatment. Investigation shows that measures extracted from various risk parameters carry valuable information for the prediction of stroke. This research work investigates the various physiological parameters that are used as risk factors for the prediction of stroke. Data was collected from International Stroke Trial database and was successfully trained and tested using Support Vector Machine (SVM). In this work, we have implemented SVM with different kernel functions and found that linear kernel gave an accuracy of 90 .", "", "Numerous weather parameters affect the occurrence and amount of rainfall. Therefore, it is important to study these parameters and their interdependency. In this paper, different weather and time-related variables - relative humidity, solar radiation, temperature, dew point, day-of-year, and time-of-day are analyzed systematically using Principal Component Analysis (PCA). We found that four principal components explain a cumulative variance of 85 . The first two principal components are applied to distinguish rain and no-rain scenarios as well. 
We conclude that all 7 variables have similar contribution towards rainfall detection.", "As a new concept that emerged in the mid-1990s, data mining can help researchers gain both novel and deep insights and can facilitate unprecedented understanding of large biomedical datasets. Data mining can uncover new biomedical and healthcare knowledge for clinical and administrative decision making as well as generate scientific hypotheses from large experimental data, clinical databases, and/or biomedical literature. This review first introduces data mining in general (e.g., the background, definition, and process of data mining), discusses the major differences between statistics and data mining and then speaks to the uniqueness of data mining in the biomedical and healthcare fields. A brief summarization of various data mining algorithms used for classification, clustering, and association as well as their respective advantages and drawbacks is also presented. Suggested guidelines on how to use data mining algorithms in each area of classification, clustering, and association are offered along with three examples of how data mining has been used in the healthcare industry. Given the successful application of data mining by health-related organizations that has helped to predict health insurance fraud and under-diagnosed patients, and identify and classify at-risk people in terms of health with the goal of reducing healthcare cost, we introduce how data mining technologies (in each area of classification, clustering, and association) have been used for a multitude of purposes, including research in the biomedical and healthcare fields.
A discussion of the technologies available to enable the prediction of healthcare costs (including length of hospital stay), disease diagnosis and prognosis, and the discovery of hidden biomedical and healthcare patterns from related databases is offered along with a discussion of the use of data mining to discover such relationships as those between health conditions and a disease, relationships among diseases, and relationships among drugs. The article concludes with a discussion of the problems that hamper the clinical use of data mining by health professionals." ] }
1904.11476
2969150231
Radar detects stable, long-range objects under variable weather and lighting conditions, making it a reliable and versatile sensor well suited for ego-motion estimation. In this work, we propose a radar-only odometry pipeline that is highly robust to radar artifacts (e.g., speckle noise and false positives) and requires only one input parameter. We demonstrate its ability to adapt across diverse settings, from urban UK to off-road Iceland, achieving a scan matching accuracy of approximately 5.20 cm and 0.0929 deg when using GPS as ground truth (compared to visual odometry’s 5.77 cm and 0.1032 deg). We present algorithms for keypoint extraction and data association, framing the latter as a graph matching optimization problem, and provide an in-depth system analysis.
While visual @cite_17 @cite_13 , lidar @cite_11 @cite_25 , and wheel @cite_4 odometry are well studied, radar odometry remains challenging. Due to its wide beam spread and long range, radar has lower resolution than lidar and is highly susceptible to interference from clutter, which generates speckle noise. Radar scans also contain false positives from multipath reflections and receiver saturation. As a result, radar odometry must be robust to measurement noise and false detections, and it must demonstrate high precision despite low-resolution data and slow update speeds. Odometry methods for radar can be categorized as indirect or direct. Indirect methods first extract salient keypoints, then associate those that correspond to the same location. Direct methods @cite_1 @cite_12 @cite_24 , which forego keypoint extraction and operate on minimally pre-processed sensor outputs, are discussed in @cite_19 @cite_22 and not in this paper. All methods assume the majority of observed objects are static.
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_1", "@cite_17", "@cite_24", "@cite_19", "@cite_13", "@cite_25", "@cite_12", "@cite_11" ], "mid": [ "140813857", "2890714858", "2100254744", "2544243493", "2158662637", "", "2015996585", "2605103573", "78175635", "2277848489" ], "abstract": [ "", "In contrast to cameras, lidars, GPS, and proprioceptive sensors, radars are affordable and efficient systems that operate well under variable weather and lighting conditions, require no external infrastructure, and detect long-range objects. In this paper, we present a reliable and accurate radar-only motion estimation algorithm for mobile autonomous systems. Using a frequency-modulated continuous-wave (FMCW) scanning radar, we first extract landmarks with an algorithm that accounts for unwanted effects in radar returns. To estimate relative motion, we then perform scan matching by greedily adding point correspondences based on unary descriptors and pairwise compatibility scores. Our radar odometry results are robust under a variety of conditions, including those under which visual odometry and GPS INS fail.", "The results of a theoretical study of a simple radar edge detection correlation approach are presented. Several candidate edge correlation algorithms are described. Results of a theoretical evaluation of one particular candidate correlation algorithm are presented and verification of the edge correlation concept is provided through a presentation of correlation performance obtained using real radar imagery.", "Accurate localization of a vehicle is a fundamental challenge and one of the most important tasks of mobile robots. For autonomous navigation, motion tracking, and obstacle detection and avoidance, a robot must maintain knowledge of its position over time. Vision-based odometry is a robust technique utilized for this purpose. It allows a vehicle to localize itself robustly by using only a stream of images captured by a camera attached to the vehicle. 
This paper presents a review of state-of-the-art visual odometry (VO) and its types, approaches, applications, and challenges. VO is compared with the most common localization sensors and techniques, such as inertial navigation systems, global positioning systems, and laser sensors. Several areas for future research are also highlighted.", "The growing use of Doppler radars in the automotive field and the constantly increasing measurement accuracy open new possibilities for estimating the motion of the ego-vehicle. The following paper presents a robust and self-contained algorithm to instantly determine the velocity and yaw rate of the ego-vehicle. The algorithm is based on the received reflections (targets) of a single measurement cycle. It analyzes the distribution of their radial velocities over the azimuth angle. The algorithm does not require any preprocessing steps such as clustering or clutter suppression. Storage of history and data association is avoided. As an additional benefit, all targets are instantly labeled as stationary or non-stationary.", "", "Visual odometry (VO) is the process of estimating the egomotion of an agent (e.g., vehicle, human, and robot) using only the input of a single or multiple cameras attached to it. Application domains include robotics, wearable computing, augmented reality, and automotive. The term VO was coined in 2004 by Nister in his landmark paper. The term was chosen for its similarity to wheel odometry, which incrementally estimates the motion of a vehicle by integrating the number of turns of its wheels over time. Likewise, VO operates by incrementally estimating the pose of the vehicle through examination of the changes that motion induces on the images of its onboard cameras. For VO to work effectively, there should be sufficient illumination in the environment and a static scene with enough texture to allow apparent motion to be extracted.
Furthermore, consecutive frames should be captured by ensuring that they have sufficient scene overlap.", "This paper reports on a fast multiresolution scan matcher for local vehicle localization of self-driving cars. State-of-the-art approaches to vehicle localization rely on observing road surface reflectivity with a 3D light detection and ranging (LIDAR) scanner to achieve centimeter-level accuracy. However, these approaches can often fail when faced with adverse weather conditions that obscure the view of the road paint (e.g., puddles and snowdrifts), poor road surface texture, or when road appearance degrades over time. We present a generic probabilistic method for localizing an autonomous vehicle equipped with a three-dimensional (3D) LIDAR scanner. This proposed algorithm models the world as a mixture of several Gaussians, characterizing the z-height and reflectivity distribution of the environment, which we rasterize to facilitate fast and exact multiresolution inference. Results are shown on a collection of datasets totaling over 500 km of road data covering highway, rural, residential, and urban roadways, in which we demonstrate our method to be robust through heavy snowfall and roadway repavements.", "This paper is concerned with the Simultaneous Localization And Mapping (SLAM) problem using data obtained from a microwave radar sensor. The radar scanner is based on Frequency Modulated Continuous Wave (FMCW) technology. In order to meet the needs of radar image analysis complexity, a trajectory-oriented EKF-SLAM technique using data from a 360° field-of-view radar sensor has been developed. This process makes no landmark assumptions and avoids the data association problem. The method of egomotion estimation makes use of the Fourier-Mellin Transform for registering radar images in a sequence, from which the rotation and translation of the sensor motion can be estimated.
In the context of the scan-matching SLAM, the use of the Fourier-Mellin Transform is original and provides an accurate and efficient way of computing the rigid transformation between consecutive scans. Experimental results on real-world data are presented.", "Here we propose a real-time method for low-drift odometry and mapping using range measurements from a 3D laser scanner moving in 6-DOF. The problem is hard because the range measurements are received at different times, and errors in motion estimation (especially without an external reference such as GPS) cause mis-registration of the resulting point cloud. To date, coherent 3D maps have been built by off-line batch methods, often using loop closure to correct for drift over time. Our method achieves both low-drift in motion estimation and low-computational complexity. The key idea that makes this level of performance possible is the division of the complex problem of Simultaneous Localization and Mapping, which seeks to optimize a large number of variables simultaneously, into two algorithms. One algorithm performs odometry at a high-frequency but at low fidelity to estimate velocity of the laser scanner. Although not necessary, if an IMU is available, it can provide a motion prior and mitigate for gross, high-frequency motion. A second algorithm runs at an order of magnitude lower frequency for fine matching and registration of the point cloud. Combination of the two algorithms allows map creation in real-time. Our method has been evaluated by indoor and outdoor experiments as well as the KITTI odometry benchmark. The results indicate that the proposed method can achieve accuracy comparable to the state of the art offline, batch methods." ] }
1904.11476
2969150231
Radar detects stable, long-range objects under variable weather and lighting conditions, making it a reliable and versatile sensor well suited for ego-motion estimation. In this work, we propose a radar-only odometry pipeline that is highly robust to radar artifacts (e.g., speckle noise and false positives) and requires only one input parameter. We demonstrate its ability to adapt across diverse settings, from urban UK to off-road Iceland, achieving a scan matching accuracy of approximately 5.20 cm and 0.0929 deg when using GPS as ground truth (compared to visual odometry’s 5.77 cm and 0.1032 deg). We present algorithms for keypoint extraction and data association, framing the latter as a graph matching optimization problem, and provide an in-depth system analysis.
The first step of indirect methods is keypoint extraction, for which the most popular approach is constant false-alarm rate (CFAR) detection @cite_10 , which distinguishes peaks from noise using sliding-window thresholding. CFAR and its variants generally require at least three tunable parameters, which are based on assumed noise characteristics and do not behave consistently across datasets (see @cite_22 for comparison). Some works leverage the knowledge that coherent structures make good keypoints by clustering or detecting the edges of bright regions in scans @cite_21 . Others elect to represent the surroundings using predetermined geometric primitives @cite_9 or models, like the normal distribution transform (NDT) @cite_5 . Vision-inspired works treat radar scans as images and extract features, like SIFT and FAST @cite_26 .
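A minimal cell-averaging CFAR (CA-CFAR), the simplest member of the family referenced above, can be written in pure Python. It exhibits exactly the kind of tunable parameters the text mentions (training cells, guard cells, and a threshold scale); the default values here are arbitrary, not taken from any cited system.

```python
def ca_cfar(signal, num_train=8, num_guard=2, scale=3.0):
    """Cell-averaging CFAR: flag a cell as a detection when it exceeds
    `scale` times the mean of the training cells on either side of it;
    guard cells adjacent to the cell under test are excluded so that a
    strong return does not inflate its own noise estimate."""
    detections = []
    half = num_train // 2
    for i in range(len(signal)):
        lo = i - num_guard - half
        hi = i + num_guard + half
        if lo < 0 or hi >= len(signal):
            continue  # skip edges where the full window does not fit
        # Training cells: `half` cells on each side, outside the guard band.
        train = signal[lo:i - num_guard] + signal[i + num_guard + 1:hi + 1]
        noise = sum(train) / len(train)
        if signal[i] > scale * noise:
            detections.append(i)
    return detections

# A flat noise floor with one strong peak at index 15.
scan = [1.0] * 30
scan[15] = 12.0
print(ca_cfar(scan))  # only the isolated peak is flagged: [15]
```

The sensitivity of the output to `num_train`, `num_guard`, and `scale` is precisely the tuning burden, and the inconsistency across datasets, that the text attributes to CFAR-style detectors.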
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_9", "@cite_21", "@cite_5", "@cite_10" ], "mid": [ "2101027309", "2890714858", "1489258026", "2513508851", "1491157868", "" ], "abstract": [ "A vessel navigating in a critical environment such as an archipelago requires very accurate movement estimates. Intentional or unintentional jamming makes GPS unreliable as the only source of information and an additional independent supporting navigation system should be used. In this paper, we suggest estimating the vessel movements using a sequence of radar images from the preexisting body-fixed radar. Island landmarks in the radar scans are tracked between multiple scans using visual features. This provides information not only about the position of the vessel but also of its course and velocity. We present here a navigation framework that requires no additional hardware than the already existing naval radar sensor. Experiments show that visual radar features can be used to accurately estimate the vessel trajectory over an extensive data set.", "In contrast to cameras, lidars, GPS, and proprioceptive sensors, radars are affordable and efficient systems that operate well under variable weather and lighting conditions, require no external infrastructure, and detect long-range objects. In this paper, we present a reliable and accurate radar-only motion estimation algorithm for mobile autonomous systems. Using a frequency-modulated continuous-wave (FMCW) scanning radar, we first extract landmarks with an algorithm that accounts for unwanted effects in radar returns. To estimate relative motion, we then perform scan matching by greedily adding point correspondences based on unary descriptors and pairwise compatibility scores. 
Our radar odometry results are robust under a variety of conditions, including those under which visual odometry and GPS INS fail.", "A mobile robot exploring an unknown environment has no absolute frame of reference for its position, other than features it detects through its sensors. Using distinguishable landmarks is one possible approach, but it requires solving the object recognition problem. In particular, when the robot uses two-dimensional laser range scans for localization, it is difficult to accurately detect and localize landmarks in the environment (such as corners and occlusions) from the range scans. In this paper, we develop two new iterative algorithms to register a range scan to a previous scan so as to compute relative robot positions in an unknown environment, that avoid the above problems. The first algorithm is based on matching data points with tangent directions in two scans and minimizing a distance function in order to solve the displacement between the scans. The second algorithm establishes correspondences between points in the two scans and then solves the point-to-point least-squares problem to compute the relative pose of the two scans. Our methods work in curved environments and can handle partial occlusions by rejecting outliers.", "This paper compares three methods for extraction of interesting areas from radar gridmaps. Such interesting areas are useful for vehicle self-localization. The regarded methods are based on DBSCAN, MSER and Connected Components. Experimental data is collected from six test drives along the same route. The Connected Components algorithm performs best on this data with regard to the quality aspects: robustness, completeness, and computational cost.", "Grid map registration is an important field in mobile robotics. Applications in which multiple robots are involved benefit from multiple aligned grid maps as they provide an efficient exploration of the environment in parallel. 
In this paper, a normal distribution transform (NDT)-based approach for grid map registration is presented. For simultaneous mapping and localization approaches on laser data, the NDT is widely used to align new laser scans to reference scans. The original grid quantization-based NDT results in good registration performances but has poor convergence properties due to discontinuities of the optimization function and absolute grid resolution. This paper shows that clustering techniques overcome disadvantages of the original NDT by significantly improving the convergence basin for aligning grid maps. A multi-scale clustering method results in an improved registration performance which is shown on real world experiments on radar data.", "" ] }
1904.11476
2969150231
Radar detects stable, long-range objects under variable weather and lighting conditions, making it a reliable and versatile sensor well suited for ego-motion estimation. In this work, we propose a radar-only odometry pipeline that is highly robust to radar artifacts (e.g., speckle noise and false positives) and requires only one input parameter. We demonstrate its ability to adapt across diverse settings, from urban UK to off-road Iceland, achieving a scan matching accuracy of approximately 5.20 cm and 0.0929 deg when using GPS as ground truth (compared to visual odometry’s 5.77 cm and 0.1032 deg). We present algorithms for key point extraction and data association, framing the latter as a graph matching optimization problem, and provide an in-depth system analysis.
These keypoints must then undergo data association, known in robotics as scan matching @cite_3 @cite_9 . The most common technique is iterative closest point (ICP) @cite_2 , which alternates between naive point matching (typically nearest neighbour) and alignment until the keypoint sets are sufficiently close @cite_9 @cite_0 . ICP relies on a good estimate of the relative displacement (i.e., motion prior) between scans. Other data association techniques search for motion parameters that optimize some objective function, such as maximizing similarity (e.g., overlap of Gaussian distributions for NDT @cite_8 ) or minimizing distance (e.g., cluster edge difference @cite_1 ) between keypoint sets. Two further examples of objective functions characterize map quality @cite_6 and radar scan distortion @cite_7 in terms of motion. Feature-based approaches associate keypoints using descriptors such as BASD @cite_18 and SURF @cite_26 .
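A minimal point-to-point ICP in 2-D can make the iteration concrete; this is a hedged sketch under the usual textbook formulation (nearest-neighbour matching plus a closed-form SVD alignment step), not the implementation of any cited system, and in practice it would be seeded with the motion prior the text mentions.

```python
import numpy as np

def icp_2d(src, dst, iters=20):
    """Point-to-point ICP in 2-D (illustrative sketch).

    src, dst: (N, 2) keypoint arrays. Returns rotation R and
    translation t such that src @ R.T + t aligns with dst.
    """
    R, t = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(iters):
        # Naive nearest-neighbour correspondence search.
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        # Closed-form least-squares rigid transform (Kabsch/SVD).
        mu_s, mu_d = cur.mean(axis=0), matched.mean(axis=0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        Ri = Vt.T @ U.T
        if np.linalg.det(Ri) < 0:  # guard against reflections
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        ti = mu_d - Ri @ mu_s
        cur = cur @ Ri.T + ti
        R, t = Ri @ R, Ri @ t + ti
    return R, t
```

The nearest-neighbour step is where ICP's reliance on a good motion prior shows up: if the initial displacement is large relative to keypoint spacing, the correspondences are wrong and the iteration converges to a local minimum.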
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_7", "@cite_8", "@cite_9", "@cite_1", "@cite_3", "@cite_6", "@cite_0", "@cite_2" ], "mid": [ "2566680697", "2101027309", "1988918158", "2552263821", "1489258026", "2100254744", "2282348759", "2125237557", "1836505632", "2049981393" ], "abstract": [ "On the way to achieving higher degrees of autonomy for vehicles in complicated, ever changing scenarios, the localization problem poses a very important role. Especially the Simultaneous Localization and Mapping (SLAM) problem has been studied greatly in the past. For an autonomous system in the real world, we present a very cost-efficient, robust and very precise localization approach based on GraphSLAM and graph optimization using radar sensors. We are able to prove on a dynamically changing parking lot layout that both mapping and localization accuracy are very high. To evaluate the performance of the mapping algorithm, a highly accurate ground truth map generated from a total station was used. Localization results are compared to a high precision DGPS INS system. Utilizing these methods, we can show the strong performance of our algorithm.", "A vessel navigating in a critical environment such as an archipelago requires very accurate movement estimates. Intentional or unintentional jamming makes GPS unreliable as the only source of information and an additional independent supporting navigation system should be used. In this paper, we suggest estimating the vessel movements using a sequence of radar images from the preexisting body-fixed radar. Island landmarks in the radar scans are tracked between multiple scans using visual features. This provides information not only about the position of the vessel but also of its course and velocity. We present here a navigation framework that requires no additional hardware than the already existing naval radar sensor. 
Experiments show that visual radar features can be used to accurately estimate the vessel trajectory over an extensive data set.", "Rotating radar sensors are perception systems rarely used in mobile robotics. This paper is concerned with the use of a mobile ground-based panoramic radar sensor which is able to deliver both distance and velocity of multiple targets in its surrounding. The consequence of using such a sensor in high speed robotics is the appearance of both geometric and Doppler velocity distortions in the collected data. These effects are, in the majority of studies, ignored or considered as noise and then corrected based on proprioceptive sensors or localization systems. Our purpose is to study and use data distortion and Doppler effect as sources of information in order to estimate the vehicle's displacement. The linear and angular velocities of the mobile robot are estimated by analyzing the distortion of the measurements provided by the panoramic Frequency Modulated Continuous Wave (FMCW) radar, called IMPALA. Without the use of any proprioceptive sensor, these estimates are then used to build the trajectory of the vehicle and the radar map of outdoor environments. In this paper, radar-only localization and mapping results are presented for a ground vehicle moving at high speed.", "Abstract For automotive applications, an accurate estimation of the ego-motion is required to make advanced driver assistant systems work reliably. The proposed framework for ego-motion estimation involves two components: The first component is the spatial registration of consecutive scans. In this paper, the reference scan is represented by a sparse Gaussian Mixture model. This structural representation is improved by incorporating clustering algorithms. For the spatial matching of consecutive scans, a normal distributions transform-based optimization is used. The second component is a likelihood model for the Doppler velocity. 
Using a hypothesis for the ego-motion state, the expected radial velocity can be calculated and compared to the actual measured Doppler velocity. The ego-motion estimation framework of this paper is a joint spatial and Doppler-based optimization function which shows reliable performance on real world data and compared to state-of-the-art algorithms.", "A mobile robot exploring an unknown environment has no absolute frame of reference for its position, other than features it detects through its sensors. Using distinguishable landmarks is one possible approach, but it requires solving the object recognition problem. In particular, when the robot uses two-dimensional laser range scans for localization, it is difficult to accurately detect and localize landmarks in the environment (such as corners and occlusions) from the range scans. In this paper, we develop two new iterative algorithms to register a range scan to a previous scan so as to compute relative robot positions in an unknown environment, that avoid the above problems. The first algorithm is based on matching data points with tangent directions in two scans and minimizing a distance function in order to solve the displacement between the scans. The second algorithm establishes correspondences between points in the two scans and then solves the point-to-point least-squares problem to compute the relative pose of the two scans. Our methods work in curved environments and can handle partial occlusions by rejecting outliers.", "The results of a theoretical study of a simple radar edge detection correlation approach are presented. Several candidate edge correlation algorithms are described. 
Results of a theoretical evaluation of one particular candidate correlation algorithm are presented and verification of the edge correlation concept is provided through a presentation of correlation performance obtained using real radar imagery.", "", "Simultaneous localization and mapping (SLAM) builds maps of a priori unknown environments. Whilst this key mobile robotic competency continues to receive substantial attention, less attention has been paid to assessing the quality of the resulting maps. This paper proposes a way to quantify the intrinsic quality of point-cloud maps built from a stream of range bearing measurements. It does so by considering both the temporal and spatial distribution of the points within the map. One of the causes of unsatisfactory maps is the execution of unmodelled or poorly sensed vehicle manoeuvres. In this paper we show that by maximizing the quality of the map as a function of a motion parameterization, the vehicle motion can be recovered while correcting the map at the same time. In contrast to typical scan matching techniques, we do not rely on segmentation of the measurement stream into two separate \"scans\"; Instead we treat the measurement sequence as a continuous signal. We illustrate the efficacy of this approach by processing range data from a 77 GHz millimeter wave radar that completes 2 rotations per second. We show that despite this acquisition speed being commensurate with vehicle rotation rates, we are able to extract the underlying vehicle motion and yield crisp, well aligned point clouds", "Ego-motion estimation is a key issue in intelligent vehicles and important to moving objects tracking. 
In this paper, after a simple overview of existed method, we present a tangent based hybrid real-time ego-motion estimation algorithm based on laser radar, which is composed of iterative tangent weighted closest point (ITCP) and Hough transform based tangent angle histogram (HTAH) algorithms to overcome problems with past methods, such as local minimum, aperture-like, and high computation problem, etc. This algorithm has been tested on both synthetic data and real range data in outdoor environment. Experimental results demonstrate its high accuracy, low computation, wide applicability and high robustness to aperture-like problem, occlusion and noises.", "The authors describe a general-purpose, representation-independent method for the accurate and computationally efficient registration of 3-D shapes including free-form curves and surfaces. The method handles the full six degrees of freedom and is based on the iterative closest point (ICP) algorithm, which requires only a procedure to find the closest point on a geometric entity to a given point. The ICP algorithm always converges monotonically to the nearest local minimum of a mean-square distance metric, and the rate of convergence is rapid during the first few iterations. Therefore, given an adequate set of initial rotations and translations for a particular class of objects with a certain level of 'shape complexity', one can globally minimize the mean-square distance metric over all six degrees of freedom by testing each initial registration. One important application of this method is to register sensed data from unfixtured rigid objects with an ideal geometric model, prior to shape inspection. Experimental results show the capabilities of the registration algorithm on point sets, curves, and surfaces. >" ] }
1904.11476
2969150231
Radar detects stable, long-range objects under variable weather and lighting conditions, making it a reliable and versatile sensor well suited for ego-motion estimation. In this work, we propose a radar-only odometry pipeline that is highly robust to radar artifacts (e.g., speckle noise and false positives) and requires only one input parameter. We demonstrate its ability to adapt across diverse settings, from urban UK to off-road Iceland, achieving a scan matching accuracy of approximately 5.20 cm and 0.0929 deg when using GPS as ground truth (compared to visual odometry’s 5.77 cm and 0.1032 deg). We present algorithms for key point extraction and data association, framing the latter as a graph matching optimization problem, and provide an in-depth system analysis.
Many of the methods discussed do not generalize well due to the high levels of noise in radar scans. Adequate performance often requires fine-tuning, domain knowledge, restrictive assumptions, or outlier detection. Several of the works rely heavily on other sensors for robustness, which compromises performance under conditions that cause those sensors to fail, or use simultaneous localization and mapping (SLAM), which carries overhead costs and relies on model-dependent motion filters @cite_16 @cite_26 .
{ "cite_N": [ "@cite_16", "@cite_26" ], "mid": [ "2507412359", "2101027309" ], "abstract": [ "Significant advances have been achieved in mobile robot localization and mapping in dynamic environments, however these are mostly incapable of dealing with the physical properties of automotive radar sensors. In this paper we present an accurate and robust solution to this problem, by introducing a memory efficient cluster map representation. Our approach is validated by experiments that took place on a public parking space with pedestrians, moving cars, as well as different parking configurations to provide a challenging dynamic environment. The results prove its ability to reproducibly localize our vehicle within an error margin of below 1 with respect to ground truth using only point based radar targets. A decay process enables our map representation to support local updates.", "A vessel navigating in a critical environment such as an archipelago requires very accurate movement estimates. Intentional or unintentional jamming makes GPS unreliable as the only source of information and an additional independent supporting navigation system should be used. In this paper, we suggest estimating the vessel movements using a sequence of radar images from the preexisting body-fixed radar. Island landmarks in the radar scans are tracked between multiple scans using visual features. This provides information not only about the position of the vessel but also of its course and velocity. We present here a navigation framework that requires no additional hardware than the already existing naval radar sensor. Experiments show that visual radar features can be used to accurately estimate the vessel trajectory over an extensive data set." ] }
1904.11268
2941889845
Cache timing attacks use shared caches in multi-core processors as side channels to extract information from victim processes. These attacks are particularly dangerous in cloud infrastructures, in which the deployed countermeasures cause collateral effects in terms of performance loss and increase in energy consumption. We propose to monitor the victim process using an independent monitoring (detector) process, that continuously measures selected Performance Monitoring Counters (PMC) to detect the presence of an attack. Ad-hoc countermeasures can be applied only when such a risky situation arises. In our case, the victim process is the AES encryption algorithm and the attack is performed by means of random encryption requests. We demonstrate that PMCs are a feasible tool to detect the attack and that sampling PMCs at high frequencies is worse than sampling at lower frequencies in terms of detection capabilities, particularly when the attack is fragmented in time to try to be hidden from detection.
Detailed surveys of microarchitectural timing attacks in general @cite_0 @cite_9 and of cache timing attacks in particular @cite_11 can be found in the literature. In addition, @cite_2 includes a systematic evaluation of transient execution attacks.
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_2", "@cite_11" ], "mid": [ "2562036180", "2595350342", "2900479912", "2768516003" ], "abstract": [ "Microarchitectural timing channels expose hidden hardware states though timing. We survey recent attacks that exploit microarchitectural features in shared hardware, especially as they are relevant for cloud computing. We classify types of attacks according to a taxonomy of the shared resources leveraged for such attacks. Moreover, we take a detailed look at attacks used against shared caches. We survey existing countermeasures. We finally discuss trends in attacks, challenges to combating them, and future directions, especially with respect to hardware support.", "A timing channel is a communication channel that can transfer information to a receiver decoder by modulating the timing behavior of an entity. Examples of this entity include the interpacket delays of a packet stream, the reordering packets in a packet stream, or the resource access time of a cryptographic module. Advances in the information and coding theory and the availability of high-performance computing systems interconnected by high-speed networks have spurred interest in and development of various types of timing channels. With the emergence of complex timing channels, novel detection and prevention techniques are also being developed to counter them. In this article, we provide a detailed survey of timing channels broadly categorized into network timing channel, in which communicating entities are connected by a network, and in-system timing channel, in which the communicating entities are within a computing system. This survey builds on the last comprehensive survey by [2007] and considers all three canonical applications of timing channels, namely, covert communication, timing side channel, and network flow watermarking. 
We survey the theoretical foundations, the implementation, and the various detection and prevention techniques that have been reported in literature. Based on the analysis of the current literature, we discuss potential future research directions both in the design and application of timing channels and their detection and prevention techniques.", "Research on transient execution attacks including Spectre and Meltdown showed that exception or branch misprediction events might leave secret-dependent traces in the CPU's microarchitectural state. This observation led to a proliferation of new Spectre and Meltdown attack variants and even more ad-hoc defenses (e.g., microcode and software patches). Both the industry and academia are now focusing on finding effective defenses for known issues. However, we only have limited insight on residual attack surface and the completeness of the proposed defenses. In this paper, we present a systematization of transient execution attacks. Our systematization uncovers 6 (new) transient execution attacks that have been overlooked and not been investigated so far: 2 new exploitable Meltdown effects: Meltdown-PK (Protection Key Bypass) on Intel, and Meltdown-BND (Bounds Check Bypass) on Intel and AMD; and 4 new Spectre mistraining strategies. We evaluate the attacks in our classification tree through proof-of-concept implementations on 3 major CPU vendors (Intel, AMD, ARM). Our systematization yields a more complete picture of the attack surface and allows for a more systematic evaluation of defenses. Through this systematic evaluation, we discover that most defenses, including deployed ones, cannot fully mitigate all attack variants.", "With the increasing proliferation of Internet-of-Things (IoT) in our daily lives, security and trustworthiness are key considerations in designing computing devices. A vast majority of IoT devices use shared caches for improved performance. 
Unfortunately, the data sharing introduces the vulnerability in these systems. Side-channel attacks in shared caches have been explored for over a decade. Existing approaches utilize side-channel (non-functional) behaviors such as time, power, and electromagnetic radiation to attack encryption schemes. In this paper, we survey the widely used target encryption algorithms, the common attack techniques, and recent attacks that exploit the features of cache. In particular, we focus on the cache timing attacks against the cloud computing and embedded systems. We also survey existing countermeasures at different abstraction levels." ] }
1904.11268
2941889845
Cache timing attacks use shared caches in multi-core processors as side channels to extract information from victim processes. These attacks are particularly dangerous in cloud infrastructures, in which the deployed countermeasures cause collateral effects in terms of performance loss and increase in energy consumption. We propose to monitor the victim process using an independent monitoring (detector) process, that continuously measures selected Performance Monitoring Counters (PMC) to detect the presence of an attack. Ad-hoc countermeasures can be applied only when such a risky situation arises. In our case, the victim process is the AES encryption algorithm and the attack is performed by means of random encryption requests. We demonstrate that PMCs are a feasible tool to detect the attack and that sampling PMCs at high frequencies is worse than sampling at lower frequencies in terms of detection capabilities, particularly when the attack is fragmented in time to try to be hidden from detection.
Recently, Performance Monitoring Counters have been used to detect such attacks. @cite_7 monitor both the victim and the attacker, while CloudRadar @cite_4 monitors all the virtual machines running in the system. CacheShield @cite_12 only monitors the victim process to detect attacks on both the AES and RSA algorithms. None of these works considered an attacker that tries to hide by fragmenting the attack into small pieces distributed in time. Our approach is similar to CacheShield @cite_12 in terms of functionality, but we perform a more detailed study of how the specific timing of the attack affects the detection capability.
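As a purely illustrative sketch (not the detector of any cited work; all names are hypothetical), aggregating raw PMC samples over longer windows, which is equivalent to sampling the counters at a lower frequency, can expose an attack that has been fragmented in time, since the fragments accumulate within each window:

```python
import numpy as np

def detect_attack(samples, window, threshold):
    """Flag windows whose aggregated event count exceeds a threshold.

    samples: per-interval PMC readings (e.g., cache-miss counts).
    Summing over `window` consecutive samples mimics sampling at a
    lower frequency; short attack bursts that look like noise at
    high sampling rates add up within the longer window.
    """
    n = len(samples) // window
    agg = samples[: n * window].reshape(n, window).sum(axis=1)
    return agg > threshold
```

The threshold here would in practice be calibrated against the victim's benign counter profile; the sketch only illustrates why coarser aggregation helps against time-fragmented attacks.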
{ "cite_N": [ "@cite_4", "@cite_12", "@cite_7" ], "mid": [ "2507765405", "2792326895", "2522718524" ], "abstract": [ "We present CloudRadar, a system to detect, and hence mitigate, cache-based side-channel attacks in multi-tenant cloud systems. CloudRadar operates by correlating two events: first, it exploits signature-based detection to identify when the protected virtual machine (VM) executes a cryptographic application; at the same time, it uses anomaly-based detection techniques to monitor the co-located VMs to identify abnormal cache behaviors that are typical during cache-based side-channel attacks. We show that correlation in the occurrence of these two events offer strong evidence of side-channel attacks. Compared to other work on side-channel defenses, CloudRadar has the following advantages: first, CloudRadar focuses on the root causes of cache-based side-channel attacks and hence is hard to evade using metamorphic attack code, while maintaining a low false positive rate. Second, CloudRadar is designed as a lightweight patch to existing cloud systems, which does not require new hardware support, or any hypervisor, operating system, application modifications. Third, CloudRadar provides real-time protection and can detect side-channel attacks within the order of milliseconds. We demonstrate a prototype implementation of CloudRadar in the OpenStack cloud framework. Our evaluation suggests CloudRadar achieves negligible performance overhead with high detection accuracy.", "Microarchitectural attacks pose a great threat to any code running in parallel to other untrusted processes. Especially in public clouds, where system resources such as caches are shared across several tenants, microarchitectural attacks remain an unsolved problem. Cache attacks rely on evictions by the spy process, which alter the execution behavior of the victim process. 
Similarly, all attacks exploiting shared resource access will influence these resources, thereby influencing the process they are targeting. We show that hardware performance events reveal the presence of such attacks. Based on this observation, we propose CacheShield, a tool to protect legacy code by self-monitoring its execution and detecting the presence of microarchitectural attacks. CacheShield can be run by users and does not require alteration of the OS or hypervisor, while previously proposed software-based countermeasures require cooperation from the hypervisor. Unlike methods that try to detect malicious processes, our approach is lean, as only a fraction of the system needs to be monitored. It also integrates well into today's cloud infrastructure, as concerned users can opt to use CacheShield without support from the cloud service provider. Our results show that CacheShield detects attacks fast, with high reliability, and with few false positives, even in the presence of strong noise.", "Three methods for detecting a class of cache-based side-channel attacks are proposed. A new tool (quickhpc) for probing hardware performance counters at a higher temporal resolution than the existing tools is presented. The first method is based on correlation, the other two use machine learning techniques and reach a minimum F-score of 0.93. A smarter attack is devised that is capable of circumventing the first method. In this paper we analyze three methods to detect cache-based side-channel attacks in real time, preventing or limiting the amount of leaked information. Two of the three methods are based on machine learning techniques and all the three of them can successfully detect an attack in about one fifth of the time required to complete it. We could not experience the presence of false positives in our test environment and the overhead caused by the detection systems is negligible. 
We also analyze how the detection systems behave with a modified version of one of the spy processes. With some optimization we are confident these systems can be used in real world scenarios." ] }
1904.11263
2940563789
Although double-precision floating-point arithmetic currently dominates high-performance computing, there is increasing interest in smaller and simpler arithmetic types. The main reasons are potential improvements in energy efficiency and memory footprint and bandwidth. However, simply switching to lower-precision types typically results in increased numerical errors. We investigate approaches to improving the accuracy of lower-precision arithmetic types, using examples in an important domain for numerical computation in neuroscience: the solution of Ordinary Differential Equations (ODEs). The Izhikevich neuron model is used to demonstrate that rounding has an important role in producing accurate spike timings from explicit ODE solution algorithms. In particular, stochastic rounding consistently results in smaller errors compared to single-precision floatingpoint and fixed-point arithmetic with round-tonearest across a range of neuron behaviours and ODE solvers. A computationally much cheaper alternative is also investigated, inspired by the concept of dither that is a widely understood mechanism for providing resolution below the LSB in digital signal processing. These results will have implications for the solution of ODEs in other subject areas, and should also be directly relevant to the huge range of practical problems that are represented by Partial Differential Equations (PDEs).
The most recent work to explore fixed-point ODE solvers on SpiNNaker @cite_18 was published in the middle of our current investigation and demonstrates several important issues with the default GCC s16.15 fixed-point arithmetic when used in the Izhikevich neuron model; we therefore address some of the conclusions of that study here. The authors tested the current sPyNNaker software framework @cite_16 for simulating the Izhikevich neuron and then demonstrated a method for comparing the network statistics of two forward Euler solvers at smaller timesteps to one another, using a custom s8.23 fixed-point type for one of the constants. Matching network behaviour is a valuable development in this area, but we have some comments on their methodology and note a few missing details in their study:
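To make the rounding modes concrete, here is an illustrative sketch (the function name is hypothetical and this is not the SpiNNaker implementation) of quantizing a value to the s16.15 fixed-point format with stochastic rounding: the value is rounded up with probability equal to its fractional residue, so the rounding error is zero in expectation, unlike round-to-nearest, which is deterministically biased for any value not representable in the format.

```python
import numpy as np

FRAC_BITS = 15          # s16.15: 15 fractional bits, as in the GCC accum type
SCALE = 1 << FRAC_BITS

def to_fixed_sr(x, rng):
    """Quantize x to s16.15 with stochastic rounding (illustrative sketch).

    Saturation to the s16.15 range is omitted for brevity. Rounds up
    with probability equal to the fractional residue, so the expected
    quantization error is zero.
    """
    scaled = np.asarray(x, dtype=float) * SCALE
    floor = np.floor(scaled)
    frac = scaled - floor
    return (floor + (rng.random(np.shape(x)) < frac)) / SCALE
```

Averaged over many draws, the stochastically rounded value converges to the true value, which is why accumulated state variables in an explicit solver drift far less than under round-to-nearest.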
{ "cite_N": [ "@cite_18", "@cite_16" ], "mid": [ "2903118381", "2773087045" ], "abstract": [ "The reproduction and replication of scientific results is an indispensable aspect of good scientific practice, enabling previous studies to be built upon and increasing our level of confidence in them. However, reproducibility and replicability are not sufficient: an incorrect result will be accurately reproduced if the same incorrect methods are used. For the field of simulations of complex neural networks, the causes of incorrect results vary from insufficient model implementations and data analysis methods, deficiencies in workmanship (e.g., simulation planning, setup, and execution) to errors induced by hardware constraints (e.g., limitations in numerical precision). In order to build credibility, methods such as verification and validation have been developed, but they are not yet well established in the field of neural network modeling and simulation, partly due to ambiguity concerning the terminology. In this manuscript, we propose a terminology for model verification and validation in the field of neural network modeling and simulation. We outline a rigorous workflow derived from model verification and validation methodologies for increasing model credibility when it is not possible to validate against experimental data. We compare a published minimal spiking network model capable of exhibiting the development of polychronous groups, to its reproduction on the SpiNNaker neuromorphic system, where we consider the dynamics of several selected network states. As a result, by following a formalized process, we show that numerical accuracy is critically important, and even small deviations in the dynamics of individual neurons are expressed in the dynamics at network level.", "Weather and climate models must continue to increase in both resolution and complexity in order that forecasts become more accurate and reliable. 
Moving to lower numerical precision may be an essential tool for coping with the demand for ever increasing model complexity in addition to increasing computing resources. However, there have been some concerns in the weather and climate modelling community over the suitability of lower precision for climate models, particularly for representing processes that change very slowly over long time-scales. These processes are difficult to represent using low precision due to time increments being systematically rounded to zero. Idealised simulations are used to demonstrate that a model of deep soil heat diffusion that fails when run in single precision can be modified to work correctly using low precision, by splitting up the model into a small higher precision part and a low precision part. This strategy retains the computational benefits of reduced precision whilst preserving accuracy. This same technique is also applied to a full complexity land surface model, resulting in rounding errors that are significantly smaller than initial condition and parameter uncertainties. Although lower precision will present some problems for the weather and climate modelling community, many of the problems can likely be overcome using a straightforward and physically motivated application of reduced precision." ] }
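The s16.15 fixed-point representation discussed above can be emulated in a few lines. The sketch below is purely illustrative (it is not the sPyNNaker implementation; the helper names are ours) and shows why state increments below the 2**-15 resolution are lost under round-to-nearest:

```python
FRAC_BITS = 15                      # s16.15: 1 sign bit, 16 integer, 15 fractional
SCALE = 1 << FRAC_BITS              # resolution is 2**-15 per grid step

def to_s16_15(x: float) -> int:
    """Quantize a float to the nearest s16.15 value (round-to-nearest),
    saturating at the 32-bit two's-complement range."""
    raw = round(x * SCALE)
    return max(-(1 << 31), min((1 << 31) - 1, raw))

def from_s16_15(q: int) -> float:
    return q / SCALE

# An increment smaller than half the resolution rounds to zero -- exactly
# the failure mode seen in slowly changing ODE state variables.
tiny = 2.0 ** -17
print(from_s16_15(to_s16_15(tiny)))   # 0.0 -- the update is lost entirely
print(from_s16_15(to_s16_15(0.1)))    # ~0.10000610, quantization error ~6e-6
```

Repeatedly adding such a sub-resolution increment never moves the state, which is why the choice of rounding mode matters for the solvers discussed here.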
1904.11263
2940563789
Although double-precision floating-point arithmetic currently dominates high-performance computing, there is increasing interest in smaller and simpler arithmetic types. The main reasons are potential improvements in energy efficiency and memory footprint and bandwidth. However, simply switching to lower-precision types typically results in increased numerical errors. We investigate approaches to improving the accuracy of lower-precision arithmetic types, using examples in an important domain for numerical computation in neuroscience: the solution of Ordinary Differential Equations (ODEs). The Izhikevich neuron model is used to demonstrate that rounding has an important role in producing accurate spike timings from explicit ODE solution algorithms. In particular, stochastic rounding consistently results in smaller errors compared to single-precision floating-point and fixed-point arithmetic with round-to-nearest across a range of neuron behaviours and ODE solvers. A computationally much cheaper alternative is also investigated, inspired by the concept of dither that is a widely understood mechanism for providing resolution below the LSB in digital signal processing. These results will have implications for the solution of ODEs in other subject areas, and should also be directly relevant to the huge range of practical problems that are represented by Partial Differential Equations (PDEs).
In one of the earliest investigations of SR @cite_1 , the authors investigate the effects of probabilistic rounding in backpropagation algorithms. Three different applications are shown with varying degrees of precision in the internal calculations of the backpropagation algorithms. It is demonstrated that when @math bits are used, training of the neural network starts to fail because the weight updates are too small to be captured by limited-precision arithmetic, resulting in underflow in most of the updates. To alleviate this, the authors apply probabilistic rounding in some of the arithmetic operations inside the backpropagation algorithm and show that the neural network can then perform well for word widths as small as 4 bits. The authors conclude that, with probabilistic rounding, the 12-bit version of their system performs as well as a single-precision floating-point version.
{ "cite_N": [ "@cite_1" ], "mid": [ "2006383931" ], "abstract": [ "Abstract A key question in the design of specialized hardware for simulation of neural networks is whether fixed-point arithmetic of limited precision can be used with existing learning algorithms. Several studies of the backpropagation algorithm report a collapse of learning ability at around 12 to 16 bits of precision, depending on the details of the problem. In this paper, we investigate the effects of limited precision in the Cascade Correlation learning algorithm. As a general result, we introduce techniques for dynamic rescaling and probabilistic rounding that facilitate learning by gradient descent down to 6 bits of precision." ] }
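The probabilistic (stochastic) rounding idea is simple to state: round up with probability equal to the fractional distance to the lower grid point, so that updates far below the resolution limit survive in expectation instead of systematically underflowing. A minimal sketch (the helper names and 15-bit grid are our illustrative assumptions, not the cited implementation):

```python
import math
import random

def stochastic_round(x: float, frac_bits: int = 15) -> int:
    """Round x onto a 2**-frac_bits fixed-point grid, rounding up with
    probability equal to the fractional part of the scaled value."""
    scaled = x * (1 << frac_bits)
    lower = math.floor(scaled)
    return lower + (1 if random.random() < scaled - lower else 0)

# A weight update far below the grid resolution (2**-15) is always lost by
# round-to-nearest, but is preserved in expectation by stochastic rounding.
random.seed(0)
tiny = 2.0 ** -18                          # scaled value is 0.125
n = 200_000
mean = sum(stochastic_round(tiny) for _ in range(n)) / n / (1 << 15)
print(round(tiny * (1 << 15)))             # 0 -- round-to-nearest loses it
```

Averaged over many applications, `mean` lands very close to `tiny`, which is why small gradient updates still accumulate under this scheme.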
1904.11263
2940563789
Although double-precision floating-point arithmetic currently dominates high-performance computing, there is increasing interest in smaller and simpler arithmetic types. The main reasons are potential improvements in energy efficiency and memory footprint and bandwidth. However, simply switching to lower-precision types typically results in increased numerical errors. We investigate approaches to improving the accuracy of lower-precision arithmetic types, using examples in an important domain for numerical computation in neuroscience: the solution of Ordinary Differential Equations (ODEs). The Izhikevich neuron model is used to demonstrate that rounding has an important role in producing accurate spike timings from explicit ODE solution algorithms. In particular, stochastic rounding consistently results in smaller errors compared to single-precision floating-point and fixed-point arithmetic with round-to-nearest across a range of neuron behaviours and ODE solvers. A computationally much cheaper alternative is also investigated, inspired by the concept of dither that is a widely understood mechanism for providing resolution below the LSB in digital signal processing. These results will have implications for the solution of ODEs in other subject areas, and should also be directly relevant to the huge range of practical problems that are represented by Partial Differential Equations (PDEs).
A recent paper from IBM @cite_31 also explores the use of an 8-bit floating-point type with mixed precision in various parts of the architecture, together with stochastic rounding. The authors demonstrate performance similar to that of the standard 32-bit float type in training neural networks.
{ "cite_N": [ "@cite_31" ], "mid": [ "2889797931" ], "abstract": [ "The state-of-the-art hardware platforms for training deep neural networks are moving from traditional single precision (32-bit) computations towards 16 bits of precision - in large part due to the high energy efficiency and smaller bit storage associated with using reduced-precision representations. However, unlike inference, training with numbers represented with less than 16 bits has been challenging due to the need to maintain fidelity of the gradient computations during back-propagation. Here we demonstrate, for the first time, the successful training of deep neural networks using 8-bit floating point numbers while fully maintaining the accuracy on a spectrum of deep learning models and datasets. In addition to reducing the data and computation precision to 8 bits, we also successfully reduce the arithmetic precision for additions (used in partial product accumulation and weight updates) from 32 bits to 16 bits through the introduction of a number of key ideas including chunk-based accumulation and floating point stochastic rounding. The use of these novel techniques lays the foundation for a new generation of hardware training platforms with the potential for 2-4 times improved throughput over today's systems." ] }
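The chunk-based accumulation mentioned in the abstract can be illustrated with a toy low-precision float (the 7-bit mantissa and function names are our assumptions for illustration, not IBM's actual format): naive low-precision accumulation stalls once the running sum dwarfs the addends, whereas summing short chunks and combining their totals at higher precision does not.

```python
import math

def fl(x: float, mant_bits: int = 7) -> float:
    """Round x to mant_bits bits of mantissa (a crude low-precision float)."""
    if x == 0.0:
        return 0.0
    e = math.floor(math.log2(abs(x)))
    step = 2.0 ** (e - mant_bits)
    return round(x / step) * step

def acc_naive(vals):
    s = 0.0
    for v in vals:
        s = fl(s + v)               # every partial sum is re-quantized
    return s

def acc_chunked(vals, chunk=32):
    # Short low-precision sums keep addend and accumulator comparable in
    # magnitude; the chunk totals are then combined at higher precision.
    totals = [acc_naive(vals[i:i + chunk]) for i in range(0, len(vals), chunk)]
    return sum(totals)

vals = [0.01] * 10_000              # true sum: 100
print(acc_naive(vals))              # stalls near 4: adding 0.01 to 4.0 rounds away
print(acc_chunked(vals))            # recovers most of the true sum
```

The same splitting idea (a small higher-precision part plus a low-precision bulk) is what the climate-model abstract above applies to slow soil heat diffusion.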
1904.11251
2941236977
Image captioning has received significant attention with remarkable improvements in recent advances. Nevertheless, images in the wild encapsulate rich knowledge and cannot be sufficiently described with models built on image-caption pairs containing only in-domain objects. In this paper, we propose to address the problem by augmenting standard deep captioning architectures with object learners. Specifically, we present Long Short-Term Memory with Pointing (LSTM-P) --- a new architecture that facilitates vocabulary expansion and produces novel objects via pointing mechanism. Technically, object learners are initially pre-trained on available object recognition data. Pointing in LSTM-P then balances the probability between generating a word through LSTM and copying a word from the recognized objects at each time step in decoder stage. Furthermore, our captioning encourages global coverage of objects in the sentence. Extensive experiments are conducted on both held-out COCO image captioning and ImageNet datasets for describing novel objects, and superior results are reported when comparing to state-of-the-art approaches. More remarkably, we obtain an average of 60.9 in F1 score on held-out COCO dataset.
Inspired by deep learning @cite_16 in computer vision and sequence modeling @cite_32 in Natural Language Processing, modern image captioning methods @cite_13 @cite_21 @cite_17 @cite_2 @cite_24 @cite_28 @cite_15 mainly exploit sequence learning models to produce sentences with flexible syntactical structures. For example, @cite_17 presents an end-to-end CNN plus RNN architecture which capitalizes on LSTM to generate sentences word by word. @cite_2 further extends @cite_17 by integrating a soft/hard attention mechanism to automatically focus on salient regions within images when producing the corresponding words. Moreover, instead of calculating visual attention over image regions at each time step of the decoding stage, @cite_0 devises an adaptive attention mechanism in an encoder-decoder architecture to additionally decide when to rely on visual signals or the language model. Recently, @cite_12 @cite_28 verified the effectiveness of injecting semantic attributes into the CNN plus RNN model for image captioning. Moreover, @cite_15 utilizes the semantic attention measured over attributes to boost image captioning. Most recently, @cite_29 proposes a novel attention-based captioning model which exploits object-level attention to enhance sentence generation via a bottom-up and top-down attention mechanism.
{ "cite_N": [ "@cite_28", "@cite_29", "@cite_21", "@cite_32", "@cite_24", "@cite_0", "@cite_2", "@cite_15", "@cite_16", "@cite_13", "@cite_12", "@cite_17" ], "mid": [ "2552161745", "2951590222", "2963084599", "2130942839", "2890531016", "2952469094", "1514535095", "2953022248", "2163605009", "2951183276", "2404394533", "1895577753" ], "abstract": [ "Automatically describing an image with a natural language has been an emerging challenge in both fields of computer vision and natural language processing. In this paper, we present Long Short-Term Memory with Attributes (LSTM-A) - a novel architecture that integrates attributes into the successful Convolutional Neural Networks (CNNs) plus Recurrent Neural Networks (RNNs) image captioning framework, by training them in an end-to-end manner. Particularly, the learning of attributes is strengthened by integrating inter-attribute correlations into Multiple Instance Learning (MIL). To incorporate attributes into captioning, we construct variants of architectures by feeding image representations and attributes into RNNs in different ways to explore the mutual but also fuzzy relationship between them. Extensive experiments are conducted on COCO image captioning dataset and our framework shows clear improvements when compared to state-of-the-art deep models. More remarkably, we obtain METEOR CIDEr-D of 25.5 100.2 on testing data of widely used and publicly available splits in [10] when extracting image representations by GoogleNet and achieve superior performance on COCO captioning Leaderboard.", "Top-down visual attention mechanisms have been used extensively in image captioning and visual question answering (VQA) to enable deeper image understanding through fine-grained analysis and even multiple steps of reasoning. In this work, we propose a combined bottom-up and top-down attention mechanism that enables attention to be calculated at the level of objects and other salient image regions. 
This is the natural basis for attention to be considered. Within our approach, the bottom-up mechanism (based on Faster R-CNN) proposes image regions, each with an associated feature vector, while the top-down mechanism determines feature weightings. Applying this approach to image captioning, our results on the MSCOCO test server establish a new state-of-the-art for the task, achieving CIDEr SPICE BLEU-4 scores of 117.9, 21.5 and 36.9, respectively. Demonstrating the broad applicability of the method, applying the same approach to VQA we obtain first place in the 2017 VQA Challenge.", "Recently it has been shown that policy-gradient methods for reinforcement learning can be utilized to train deep end-to-end systems directly on non-differentiable metrics for the task at hand. In this paper we consider the problem of optimizing image captioning systems using reinforcement learning, and show that by carefully optimizing our systems using the test metrics of the MSCOCO task, significant gains in performance can be realized. Our systems are built using a new optimization approach that we call self-critical sequence training (SCST). SCST is a form of the popular REINFORCE algorithm that, rather than estimating a baseline to normalize the rewards and reduce variance, utilizes the output of its own test-time inference algorithm to normalize the rewards it experiences. Using this approach, estimating the reward signal (as actor-critic methods must do) and estimating normalization (as REINFORCE algorithms typically do) is avoided, while at the same time harmonizing the model with respect to its test-time inference procedure. Empirically we find that directly optimizing the CIDEr metric with SCST and greedy decoding at test-time is highly effective. 
Our results on the MSCOCO evaluation server establish a new state-of-the-art on the task, improving the best result in terms of CIDEr from 104.9 to 114.7.", "Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT-14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous state of the art. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.", "It is always well believed that modeling relationships between objects would be helpful for representing and eventually describing an image. 
Nevertheless, there has not been evidence in support of the idea on image description generation. In this paper, we introduce a new design to explore the connections between objects for image captioning under the umbrella of attention-based encoder-decoder framework. Specifically, we present Graph Convolutional Networks plus Long Short-Term Memory (dubbed as GCN-LSTM) architecture that novelly integrates both semantic and spatial object relationships into image encoder. Technically, we build graphs over the detected objects in an image based on their spatial and semantic connections. The representations of each region proposed on objects are then refined by leveraging graph structure through GCN. With the learnt region-level features, our GCN-LSTM capitalizes on LSTM-based captioning framework with attention mechanism for sentence generation. Extensive experiments are conducted on COCO image captioning dataset, and superior results are reported when comparing to state-of-the-art approaches. More remarkably, GCN-LSTM increases CIDEr-D performance from 120.1 to 128.7 on COCO testing set.", "Attention-based neural encoder-decoder frameworks have been widely adopted for image captioning. Most methods force visual attention to be active for every generated word. However, the decoder likely requires little to no visual information from the image to predict non-visual words such as \"the\" and \"of\". Other words that may seem visual can often be predicted reliably just from the language model e.g., \"sign\" after \"behind a red stop\" or \"phone\" following \"talking on a cell\". In this paper, we propose a novel adaptive attention model with a visual sentinel. At each time step, our model decides whether to attend to the image (and if so, to which regions) or to the visual sentinel. The model decides whether to attend to the image and where, in order to extract meaningful information for sequential word generation. 
We test our method on the COCO image captioning 2015 challenge dataset and Flickr30K. Our approach sets the new state-of-the-art by a significant margin.", "Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr9k, Flickr30k and MS COCO.", "Automatically generating a natural language description of an image has attracted interests recently both because of its importance in practical applications and because it connects two major artificial intelligence fields: computer vision and natural language processing. Existing approaches are either top-down, which start from a gist of an image and convert it into words, or bottom-up, which come up with words describing various aspects of an image and then combine them. In this paper, we propose a new algorithm that combines both approaches through a model of semantic attention. Our algorithm learns to selectively attend to semantic concept proposals and fuse them into hidden states and outputs of recurrent neural networks. The selection and fusion form a feedback connecting the top-down and bottom-up computation. We evaluate our algorithm on two public benchmarks: Microsoft COCO and Flickr30K. 
Experimental results show that our algorithm significantly outperforms the state-of-the-art approaches consistently across different evaluation metrics.", "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry.", "Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\" in that they can be compositional in spatial and temporal \"layers\". 
Such models may have advantages when target concepts are complex and or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and or optimized.", "Much of the recent progress in Vision-to-Language (V2L) problems has been achieved through a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). This approach does not explicitly represent high-level semantic concepts, but rather seeks to progress directly from image features to text. We propose here a method of incorporating high-level concepts into the very successful CNN-RNN approach, and show that it achieves a significant improvement on the state-of-the-art performance in both image captioning and visual question answering. We also show that the same mechanism can be used to introduce external semantic information and that doing so further improves performance. In doing so we provide an analysis of the value of high level semantic information in V2L problems.", "Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. 
In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art." ] }
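At each decoding step, the soft attention used in these encoder-decoder captioners reduces to a softmax over region relevance scores followed by a weighted sum of region features. A minimal sketch in plain Python (the dimensions and scores are illustrative, not from any of the cited models):

```python
import math

def soft_attention(scores, region_feats):
    """scores: one relevance score per image region (from the decoder state);
    region_feats: one feature vector per region. Returns the attention
    weights and the attended context vector fed to the language model."""
    m = max(scores)                                   # numerically stable softmax
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    alphas = [e / z for e in exps]
    dim = len(region_feats[0])
    context = [sum(a * f[d] for a, f in zip(alphas, region_feats))
               for d in range(dim)]
    return alphas, context

# Three regions; the second is far more relevant, so the context vector
# is dominated by its features.
alphas, ctx = soft_attention([0.1, 4.0, 0.2],
                             [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print([round(a, 3) for a in alphas])
print([round(c, 3) for c in ctx])
```

The adaptive-attention variant described above simply adds one more candidate (a "visual sentinel") to this softmax, letting the model attend to the language state instead of any image region.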
1904.11251
2941236977
Image captioning has received significant attention with remarkable improvements in recent advances. Nevertheless, images in the wild encapsulate rich knowledge and cannot be sufficiently described with models built on image-caption pairs containing only in-domain objects. In this paper, we propose to address the problem by augmenting standard deep captioning architectures with object learners. Specifically, we present Long Short-Term Memory with Pointing (LSTM-P) --- a new architecture that facilitates vocabulary expansion and produces novel objects via pointing mechanism. Technically, object learners are initially pre-trained on available object recognition data. Pointing in LSTM-P then balances the probability between generating a word through LSTM and copying a word from the recognized objects at each time step in decoder stage. Furthermore, our captioning encourages global coverage of objects in the sentence. Extensive experiments are conducted on both held-out COCO image captioning and ImageNet datasets for describing novel objects, and superior results are reported when comparing to state-of-the-art approaches. More remarkably, we obtain an average of 60.9 in F1 score on held-out COCO dataset.
In short, our approach focuses on the latter scenario, leveraging object recognition data for novel object captioning. Similar to previous approaches @cite_1 @cite_19 , LSTM-P augments the standard RNN-based language model with object learners pre-trained on object recognition data. The novelty lies in exploiting a pointing mechanism to dynamically accommodate word generation via the RNN-based language model and word copying from the learnt objects. In particular, we utilize the pointing mechanism to decide when to copy novel objects into the target sentence, balancing the influence between the copying mechanism and standard word-by-word sentence generation conditioned on the contexts. Moreover, a measure of sentence-level coverage is adopted as an additional training target to encourage global coverage of objects in the sentence.
{ "cite_N": [ "@cite_19", "@cite_1" ], "mid": [ "2743573407", "2765845685" ], "abstract": [ "Image captioning often requires a large set of training image-sentence pairs. In practice, however, acquiring sufficient training pairs is always expensive, making the recent captioning models limited in their ability to describe objects outside of training corpora (i.e., novel objects). In this paper, we present Long Short-Term Memory with Copying Mechanism (LSTM-C) --- a new architecture that incorporates copying into the Convolutional Neural Networks (CNN) plus Recurrent Neural Networks (RNN) image captioning framework, for describing novel objects in captions. Specifically, freely available object recognition datasets are leveraged to develop classifiers for novel objects. Our LSTM-C then nicely integrates the standard word-by-word sentence generation by a decoder RNN with copying mechanism which may instead select words from novel objects at proper places in the output sentence. Extensive experiments are conducted on both MSCOCO image captioning and ImageNet datasets, demonstrating the ability of our proposed LSTM-C architecture to describe novel objects. Furthermore, superior results are reported when compared to state-of-the-art deep models.", "Images in the wild encapsulate rich knowledge about varied abstract concepts and cannot be sufficiently described with models built only using image-caption pairs containing selected objects. We propose to handle such a task with the guidance of a knowledge base that incorporate many abstract concepts. Our method is a two-step process where we first build a multi-entity-label image recognition model to predict abstract concepts as image labels and then leverage them in the second step as an external semantic attention and constrained inference in the caption generation model for describing images that depict unseen novel objects. 
Evaluations show that our models outperform most of the prior work for out-of-domain captioning on MSCOCO and are useful for integration of knowledge and vision in general." ] }
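The copy/generate balance in these pointing-style models comes down to a gated mixture of two distributions. The sketch below uses a fixed gate value for illustration (in the actual models the gate is predicted from the decoder state; the names and toy vocabularies are ours):

```python
def pointing_mixture(p_gen, p_copy, gate):
    """Blend the language model's vocabulary distribution with a copying
    distribution over recognized objects; gate in [0, 1] is the probability
    of generating (vs. copying) at this decoding step."""
    words = set(p_gen) | set(p_copy)
    return {w: gate * p_gen.get(w, 0.0) + (1 - gate) * p_copy.get(w, 0.0)
            for w in words}

p_gen = {"a": 0.3, "dog": 0.7}      # in-vocabulary language model output
p_copy = {"zebra": 1.0}             # novel object from the object learners

low_gate = pointing_mixture(p_gen, p_copy, 0.3)   # copying dominates
high_gate = pointing_mixture(p_gen, p_copy, 0.9)  # generation dominates
print(max(low_gate, key=low_gate.get))    # zebra
print(max(high_gate, key=high_gate.get))  # dog
```

Because both inputs are proper distributions and the gate is a convex weight, the mixture remains a proper distribution over the union vocabulary.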
1904.11245
2942491430
Rendering synthetic data (e.g., 3D CAD-rendered images) to generate annotations for learning deep models in vision tasks has attracted increasing attention in recent years. However, simply applying the models learnt on synthetic images may lead to high generalization error on real images due to domain shift. To address this issue, recent progress in cross-domain recognition has featured the Mean Teacher, which directly simulates unsupervised domain adaptation as semi-supervised learning. The domain gap is thus naturally bridged with consistency regularization in a teacher-student scheme. In this work, we advance this Mean Teacher paradigm to be applicable for cross-domain detection. Specifically, we present Mean Teacher with Object Relations (MTOR) that novelly remolds Mean Teacher under the backbone of Faster R-CNN by integrating the object relations into the measure of consistency cost between teacher and student modules. Technically, MTOR firstly learns relational graphs that capture similarities between pairs of regions for teacher and student respectively. The whole architecture is then optimized with three consistency regularizations: 1) region-level consistency to align the region-level predictions between teacher and student, 2) inter-graph consistency for matching the graph structures between teacher and student, and 3) intra-graph consistency to enhance the similarity between regions of same class within the graph of student. Extensive experiments are conducted on the transfers across Cityscapes, Foggy Cityscapes, and SIM10k, and superior results are reported when comparing to state-of-the-art approaches. More remarkably, we obtain a new record of single model: 22.8 of mAP on Syn2Real detection dataset.
Recent years have witnessed remarkable progress in object detection with deep learning. R-CNN @cite_31 is one of the early works that exploits a two-stage paradigm for object detection, firstly generating region proposals with selective search and then classifying the proposals into foreground classes or background. Later, Fast R-CNN @cite_44 extends this paradigm by sharing convolution features across region proposals to significantly speed up the detection process. Faster R-CNN @cite_1 advances Fast R-CNN by replacing selective search with an accurate and efficient Region Proposal Network (RPN). A number of subsequent works @cite_12 @cite_32 @cite_8 @cite_20 @cite_35 @cite_23 @cite_15 strive to improve the accuracy and speed of two-stage detectors. Another line of works builds detectors in a one-stage manner by skipping the region proposal stage. YOLO @cite_2 jointly predicts bounding boxes and confidences of multiple categories as a regression problem. SSD @cite_41 further improves on this by utilizing multiple feature maps at different scales. Numerous extensions to the one-stage scheme have been proposed, e.g. @cite_24 @cite_9 @cite_27 @cite_19 . In this work, we adopt Faster R-CNN as the detection backbone for its robustness and flexibility.
{ "cite_N": [ "@cite_35", "@cite_8", "@cite_15", "@cite_41", "@cite_9", "@cite_1", "@cite_32", "@cite_44", "@cite_24", "@cite_27", "@cite_19", "@cite_23", "@cite_2", "@cite_31", "@cite_20", "@cite_12" ], "mid": [ "2949533892", "2964080601", "2774667964", "2193145675", "2743473392", "2613718673", "2601564443", "", "2579985080", "", "2796347433", "2769170451", "1483870316", "2102605133", "2769291631", "2407521645" ], "abstract": [ "Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But recent deep learning object detectors have avoided pyramid representations, in part because they are compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using FPN in a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.", "Although it is well believed for years that modeling relations between objects would help object recognition, there has not been evidence that the idea is working in the deep learning era. All state-of-the-art object detection systems still rely on recognizing object instances individually, without exploiting their relations during learning. This work proposes an object relation module. 
It processes a set of objects simultaneously through interaction between their appearance feature and geometry, thus allowing modeling of their relations. It is lightweight and in-place. It does not require additional supervision and is easy to embed in existing networks. It is shown effective on improving object recognition and duplicate removal steps in the modern object detection pipeline. It verifies the efficacy of modeling object relations in CNN based detection. It gives rise to the first fully end-to-end object detector.", "We present R-FCN-3000, a large-scale real-time object detector in which objectness detection and classification are decoupled. To obtain the detection score for an RoI, we multiply the objectness score with the fine-grained classification score. Our approach is a modification of the R-FCN architecture in which position-sensitive filters are shared across different object classes for performing localization. For fine-grained classification, these position-sensitive filters are not needed. R-FCN-3000 obtains an mAP of 34.9% on the ImageNet detection dataset and outperforms YOLO-9000 by 18% while processing 30 images per second. We also show that the objectness learned by R-FCN-3000 generalizes to novel classes and the performance increases with the number of training object classes - supporting the hypothesis that it is possible to learn a universal objectness detector. Code will be made available.", "We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape.
Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300×300) input, SSD achieves 74.3% mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512×512) input, SSD achieves 76.9% mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https://github.com/weiliu89/caffe/tree/ssd.", "The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples.
Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: this https URL", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2% mAP) and 2012 (70.4% mAP) using 300 proposals per image. Code is available at https://github.com/ShaoqingRen/faster_rcnn.", "Convolutional neural networks (CNNs) are inherently limited to model geometric transformations due to the fixed geometric structures in their building modules. In this work, we introduce two new modules to enhance the transformation modeling capability of CNNs, namely, deformable convolution and deformable RoI pooling.
Both are based on the idea of augmenting the spatial sampling locations in the modules with additional offsets and learning the offsets from the target tasks, without additional supervision. The new modules can readily replace their plain counterparts in existing CNNs and can be easily trained end-to-end by standard back-propagation, giving rise to deformable convolutional networks. Extensive experiments validate the performance of our approach. For the first time, we show that learning dense spatial transformation in deep CNNs is effective for sophisticated vision tasks such as object detection and semantic segmentation. The code is released at https://github.com/msracver/Deformable-ConvNets.", "", "The main contribution of this paper is an approach for introducing additional context into state-of-the-art general object detection. To achieve this we first combine a state-of-the-art classifier (Residual-101[14]) with a fast detection framework (SSD[18]). We then augment SSD+Residual-101 with deconvolution layers to introduce additional large-scale context in object detection and improve accuracy, especially for small objects, calling our resulting system DSSD for deconvolutional single shot detector. While these two contributions are easily described at a high-level, a naive implementation does not succeed. Instead we show that carefully adding additional stages of learned transformations, specifically a module for feed-forward connections in deconvolution and a new output module, enables this new approach and forms a potential way forward for further detection research. Results are shown on both PASCAL VOC and COCO detection. Our DSSD with @math input achieves 81.5% mAP on VOC2007 test, 80.0% mAP on VOC2012 test, and 33.2% mAP on COCO, outperforming a state-of-the-art method R-FCN[3] on each dataset.", "", "We present some updates to YOLO! We made a bunch of little design changes to make it better. We also trained this new network that's pretty swell.
It's a little bigger than last time but more accurate. It's still fast though, don't worry. At 320x320 YOLOv3 runs in 22 ms at 28.2 mAP, as accurate as SSD but three times faster. When we look at the old .5 IOU mAP detection metric YOLOv3 is quite good. It achieves 57.9 mAP@50 in 51 ms on a Titan X, compared to 57.5 mAP@50 in 198 ms by RetinaNet, similar performance but 3.8x faster. As always, all the code is online at this https URL", "The improvements in recent CNN-based object detection works, from R-CNN [11], Fast/Faster R-CNN [10, 31] to recent Mask R-CNN [14] and RetinaNet [24], mainly come from new network, new framework, or novel loss design. But mini-batch size, a key factor in the training, has not been well studied. In this paper, we propose a Large MiniBatch Object Detector (MegDet) to enable the training with much larger mini-batch size than before (e.g. from 16 to 256), so that we can effectively utilize multiple GPUs (up to 128 in our experiments) to significantly shorten the training time. Technically, we suggest a learning rate policy and Cross-GPU Batch Normalization, which together allow us to successfully train a large mini-batch detector in much less time (e.g., from 33 hours to 4 hours), and achieve even better accuracy. The MegDet is the backbone of our submission (mmAP 52.5%) to COCO 2017 Challenge, where we won the 1st place of Detection task.", "We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast.
Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is far less likely to predict false detections where nothing exists. Finally, YOLO learns very general representations of objects. It outperforms all other detection methods, including DPM and R-CNN, by a wide margin when generalizing from natural images to artwork on both the Picasso Dataset and the People-Art Dataset.", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.", "In this paper, we first investigate why typical two-stage methods are not as fast as single-stage, fast detectors like YOLO and SSD.
We find that Faster R-CNN and R-FCN perform an intensive computation after or before RoI warping. Faster R-CNN involves two fully connected layers for RoI recognition, while R-FCN produces a large score maps. Thus, the speed of these networks is slow due to the heavy-head design in the architecture. Even if we significantly reduce the base model, the computation cost cannot be largely decreased accordingly. We propose a new two-stage detector, Light-Head R-CNN, to address the shortcoming in current two-stage approaches. In our design, we make the head of network as light as possible, by using a thin feature map and a cheap R-CNN subnet (pooling and single fully-connected layer). Our ResNet-101 based light-head R-CNN outperforms state-of-art object detectors on COCO while keeping time efficiency. More importantly, simply replacing the backbone with a tiny network (e.g, Xception), our Light-Head R-CNN gets 30.7 mmAP at 102 FPS on COCO, significantly outperforming the single-stage, fast detectors like YOLO and SSD on both speed and accuracy. Code will be made publicly available.", "We present region-based, fully convolutional networks for accurate and efficient object detection. In contrast to previous region-based detectors such as Fast/Faster R-CNN [7, 19] that apply a costly per-region subnetwork hundreds of times, our region-based detector is fully convolutional with almost all computation shared on the entire image. To achieve this goal, we propose position-sensitive score maps to address a dilemma between translation-invariance in image classification and translation-variance in object detection. Our method can thus naturally adopt fully convolutional image classifier backbones, such as the latest Residual Networks (ResNets) [10], for object detection. We show competitive results on the PASCAL VOC datasets (e.g., 83.6% mAP on the 2007 set) with the 101-layer ResNet.
Meanwhile, our result is achieved at a test-time speed of 170ms per image, 2.5-20x faster than the Faster R-CNN counterpart. Code is made publicly available at: https://github.com/daijifeng001/r-fcn." ] }
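The focal loss described in the RetinaNet abstract above has a simple closed form, FL(p_t) = -(1 - p_t)^γ log(p_t). A minimal numpy sketch of that formula follows; the clipping constant is our own numerical-safety choice, not part of the original definition.

```python
import numpy as np

def focal_loss(p_t, gamma=2.0):
    """Focal loss given p_t, the predicted probability of the true
    class. The (1 - p_t)**gamma factor down-weights easy examples
    (p_t near 1) relative to plain cross entropy (-log p_t);
    gamma=0 recovers cross entropy exactly."""
    p_t = np.clip(p_t, 1e-7, 1.0)  # avoid log(0); our own safeguard
    return -((1.0 - p_t) ** gamma) * np.log(p_t)
```

With the default γ=2, a well-classified example (p_t = 0.9) incurs roughly 100× less loss than under cross entropy, which is exactly the mechanism that keeps easy negatives from dominating training in dense one-stage detectors.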
1904.11245
2942491430
Rendering synthetic data (e.g., 3D CAD-rendered images) to generate annotations for learning deep models in vision tasks has attracted increasing attention in recent years. However, simply applying the models learnt on synthetic images may lead to high generalization error on real images due to domain shift. To address this issue, recent progress in cross-domain recognition has featured the Mean Teacher, which directly simulates unsupervised domain adaptation as semi-supervised learning. The domain gap is thus naturally bridged with consistency regularization in a teacher-student scheme. In this work, we advance this Mean Teacher paradigm to be applicable for cross-domain detection. Specifically, we present Mean Teacher with Object Relations (MTOR) that novelly remolds Mean Teacher under the backbone of Faster R-CNN by integrating the object relations into the measure of consistency cost between teacher and student modules. Technically, MTOR firstly learns relational graphs that capture similarities between pairs of regions for teacher and student respectively. The whole architecture is then optimized with three consistency regularizations: 1) region-level consistency to align the region-level predictions between teacher and student, 2) inter-graph consistency for matching the graph structures between teacher and student, and 3) intra-graph consistency to enhance the similarity between regions of same class within the graph of student. Extensive experiments are conducted on the transfers across Cityscapes, Foggy Cityscapes, and SIM10k, and superior results are reported when comparing to state-of-the-art approaches. More remarkably, we obtain a new record of single model: 22.8% of mAP on Syn2Real detection dataset.
As for the literature on domain adaptation, while it is quite vast, the category most relevant to our work is unsupervised domain adaptation in deep architectures. Recent works have involved discrepancy-based methods that guide the feature learning in DCNNs by minimizing the domain discrepancy with Maximum Mean Discrepancy (MMD) @cite_4 @cite_21 @cite_3 . Another branch exploits domain confusion by learning a domain discriminator @cite_25 @cite_30 @cite_10 @cite_5 . Later, self-ensembling @cite_53 extends Mean Teacher @cite_28 for domain adaptation and establishes new records on several cross-domain recognition benchmarks. All of the aforementioned works focus on domain adaptation for recognition; recently, much attention has also been paid to domain adaptation in other tasks, e.g., object detection @cite_18 @cite_0 and semantic segmentation @cite_40 @cite_38 @cite_11 . For domain adaptation on object detection, @cite_29 uses transfer component analysis to learn the common transfer components across domains and @cite_0 aligns the region features with subspace alignment. More recently, @cite_18 constructs a domain adaptive Faster R-CNN by learning domain classifiers on both image and instance levels.
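The discrepancy-based idea mentioned above can be made concrete with a biased empirical estimate of squared MMD under an RBF kernel. This is a generic sketch of the MMD criterion, not the exact multi-kernel estimator used in the cited works; the kernel bandwidth is an assumed hyperparameter.

```python
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    """Gaussian (RBF) kernel matrix between two batches of feature
    vectors of shape [n, d] and [m, d]."""
    sq_dists = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def mmd2(source, target, sigma=1.0):
    """Biased empirical estimate of squared Maximum Mean Discrepancy
    between source- and target-domain feature batches; minimizing it
    as an auxiliary loss pulls the two feature distributions
    together."""
    return (rbf_kernel(source, source, sigma).mean()
            + rbf_kernel(target, target, sigma).mean()
            - 2.0 * rbf_kernel(source, target, sigma).mean())

# Toy usage: identical batches give zero discrepancy, a shifted
# target batch gives a clearly positive one.
rng = np.random.default_rng(0)
x = rng.normal(size=(32, 4))
```

In discrepancy-based adaptation this quantity is added to the task loss and backpropagated through the feature extractor, whereas the discriminator-based branch replaces it with an adversarial domain classifier.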
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_18", "@cite_4", "@cite_28", "@cite_53", "@cite_21", "@cite_29", "@cite_3", "@cite_0", "@cite_40", "@cite_5", "@cite_10", "@cite_25", "@cite_11" ], "mid": [ "2950420147", "2562192638", "2791406639", "2159291411", "2592691248", "2626754561", "2964278684", "2031691729", "2279034837", "", "2962976523", "2949987290", "2605488490", "2963826681", "2799122863" ], "abstract": [ "We introduce a new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions. Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains. The approach implements this idea in the context of neural network architectures that are trained on labeled data from the source domain and unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of features that are (i) discriminative for the main learning task on the source domain and (ii) indiscriminate with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation and stochastic gradient descent, and can thus be implemented with little effort using any of the deep learning packages. We demonstrate the success of our approach for two distinct classification problems (document sentiment analysis and image classification), where state-of-the-art domain adaptation performance on standard benchmarks is achieved. 
We also validate the approach for descriptor learning task in the context of person re-identification application.", "Fully convolutional models for dense prediction have proven successful for a wide range of visual tasks. Such models perform well in a supervised setting, but performance can be surprisingly poor under domain shifts that appear mild to a human observer. For example, training on one city and testing on another in a different geographic region and or weather condition may result in significantly degraded performance due to pixel-level distribution shift. In this paper, we introduce the first domain adaptive semantic segmentation method, proposing an unsupervised adversarial approach to pixel prediction problems. Our method consists of both global and category specific adaptation techniques. Global domain alignment is performed using a novel semantic segmentation network with fully convolutional domain adversarial learning. This initially adapted space then enables category specific adaptation through a generalization of constrained weak learning, with explicit transfer of the spatial layout from the source to the target domains. Our approach outperforms baselines across different settings on multiple large-scale datasets, including adapting across various real city environments, different synthetic sub-domains, from simulated to real environments, and on a novel large-scale dash-cam dataset.", "Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch will lead to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift on two levels: 1) the image-level shift, such as image style, illumination, etc, and 2) the instance-level shift, such as object appearance, size, etc. 
We build our approach based on the recent state-of-the-art Faster R-CNN model, and design two domain adaptation components, on image level and instance level, to reduce the domain discrepancy. The two domain adaptation components are based on H-divergence theory, and are implemented by learning a domain classifier in adversarial training manner. The domain classifiers on different levels are further reinforced with a consistency regularization to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate our newly proposed approach using multiple datasets including Cityscapes, KITTI, SIM10K, etc. The results demonstrate the effectiveness of our proposed approach for robust object detection in various domain shift scenarios.", "Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. In this paper, we propose a new Deep Adaptation Network (DAN) architecture, which generalizes deep convolutional neural network to the domain adaptation scenario. In DAN, hidden representations of all task-specific layers are embedded in a reproducing kernel Hilbert space where the mean embeddings of different domain distributions can be explicitly matched. The domain discrepancy is further reduced using an optimal multikernel selection method for mean embedding matching. DAN can learn transferable features with statistical guarantees, and can scale linearly by unbiased estimate of kernel embedding. 
Extensive empirical evidence shows that the proposed architecture yields state-of-the-art image classification error rates on standard domain adaptation benchmarks.", "The recently proposed Temporal Ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks. It maintains an exponential moving average of label predictions on each training example, and penalizes predictions that are inconsistent with this target. However, because the targets change only once per epoch, Temporal Ensembling becomes unwieldy when learning large datasets. To overcome this problem, we propose Mean Teacher, a method that averages model weights instead of label predictions. As an additional benefit, Mean Teacher improves test accuracy and enables training with fewer labels than Temporal Ensembling. Without changing the network architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250 labels, outperforming Temporal Ensembling trained with 1000 labels. We also show that a good network architecture is crucial to performance. Combining Mean Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with 4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels from 35.24% to 9.11%.", "", "Deep networks have been successfully applied to learn transferable features for adapting models from a source domain to a different target domain. In this paper, we present joint adaptation networks (JAN), which learn a transfer network by aligning the joint distributions of multiple domain-specific layers across domains based on a joint maximum mean discrepancy (JMMD) criterion. Adversarial training strategy is adopted to maximize JMMD such that the distributions of the source and target domains are made more distinguishable. Learning can be performed by stochastic gradient descent with the gradients computed by back-propagation in linear-time.
Experiments testify that our model yields state of the art results on standard datasets.", "We study the use of domain adaptation and transfer learning techniques as part of a framework for adaptive object detection. Unlike recent applications of domain adaptation work in computer vision, which generally focus on image classification, we explore the problem of extreme class imbalance present when performing domain adaptation for object detection. The main difficulty caused by this imbalance is that test images contain millions or billions of negative image subwindows but just a few image subwindows containing positive instances, which makes it difficult to adapt to changes in the positive classes present new domains by simple techniques such as random sampling. We propose an initial approach to addressing this problem and apply our technique to vehicle detection in a challenging urban surveillance dataset, demonstrating the performance of our approach with various amounts of supervision, including the fully unsupervised case.", "The recent success of deep neural networks relies on massive amounts of labeled data. For a target task where labeled data is unavailable, domain adaptation can transfer a learner from a different source domain. In this paper, we propose a new approach to domain adaptation in deep networks that can jointly learn adaptive classifiers and transferable features from labeled data in the source domain and unlabeled data in the target domain. We relax a shared-classifier assumption made by previous methods and assume that the source classifier and target classifier differ by a residual function. We enable classifier adaptation by plugging several layers into deep network to explicitly learn the residual function with reference to the target classifier. We fuse features of multiple layers with tensor product and embed them into reproducing kernel Hilbert spaces to match distributions for feature adaptation. 
The adaptation can be achieved in most feed-forward models by extending them with new residual layers and loss functions, which can be trained efficiently via back-propagation. Empirical evidence shows that the new approach outperforms state of the art methods on standard domain adaptation benchmarks.", "", "Exploiting synthetic data to learn deep models has attracted increasing attention in recent years. However, the intrinsic domain difference between synthetic and real images usually causes a significant performance drop when applying the learned model to real world scenarios. This is mainly due to two reasons: 1) the model overfits to synthetic images, making the convolutional filters incompetent to extract informative representation for real images; 2) there is a distribution difference between synthetic and real data, which is also known as the domain adaptation problem. To this end, we propose a new reality oriented adaptation approach for urban scene semantic segmentation by learning from synthetic data. First, we propose a target guided distillation approach to learn the real image style, which is achieved by training the segmentation model to imitate a pretrained real style model using real images. Second, we further take advantage of the intrinsic spatial structure presented in urban scene images, and propose a spatial-aware adaptation scheme to effectively align the distribution of two domains. These two modules can be readily integrated with existing state-of-the-art semantic segmentation networks to improve their generalizability when adapting from synthetic to real urban scenes. We evaluate the proposed method on Cityscapes dataset by adapting from GTAV and SYNTHIA datasets, where the results demonstrate the effectiveness of our method.", "Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains. 
They also can improve recognition despite the presence of domain shift or dataset bias: several adversarial approaches to unsupervised domain adaptation have recently been introduced, which reduce the difference between the training and test domain distributions and thus improve generalization performance. Prior generative approaches show compelling visualizations, but are not optimal on discriminative tasks and can be limited to smaller shifts. Prior discriminative approaches could handle larger domain shifts, but imposed tied weights on the model and did not exploit a GAN-based loss. We first outline a novel generalized framework for adversarial adaptation, which subsumes recent state-of-the-art approaches as special cases, and we use this generalized view to better relate the prior approaches. We propose a previously unexplored instance of our general framework which combines discriminative modeling, untied weight sharing, and a GAN loss, which we call Adversarial Discriminative Domain Adaptation (ADDA). We show that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and demonstrate the promise of our approach by exceeding state-of-the-art unsupervised adaptation results on standard cross-domain digit classification tasks and a new more difficult cross-modality object classification task.", "Domain Adaptation is an actively researched problem in Computer Vision. In this work, we propose an approach that leverages unsupervised data to bring the source and target distributions closer in a learned joint feature space. We accomplish this by inducing a symbiotic relationship between the learned embedding and a generative adversarial network. This is in contrast to methods which use the adversarial framework for realistic data generation and retraining deep models with such data. 
We demonstrate the strength and generality of our approach by performing experiments on three different tasks with varying levels of difficulty: (1) Digit classification (MNIST, SVHN and USPS datasets) (2) Object recognition using OFFICE dataset and (3) Domain adaptation from synthetic to real data. Our method achieves state-of-the art performance in most experimental settings and by far the only GAN-based method that has been shown to work well across different datasets such as OFFICE and DIGITS.", "Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amount of labeled data from the source domain and large amount of unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of \"deep\" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard back propagation. Overall, the approach can be implemented with little effort using any of the deep-learning packages. The method performs very well in a series of image classification experiments, achieving adaptation effect in the presence of big domain shifts and outperforming previous state-of-the-art on Office datasets.", "The recent advances in deep neural networks have convincingly demonstrated high capability in learning vision models on large datasets. 
Nevertheless, collecting expert labeled datasets especially with pixel-level annotations is an extremely expensive process. An appealing alternative is to render synthetic data (e.g., computer games) and generate ground truth automatically. However, simply applying the models learnt on synthetic images may lead to high generalization error on real images due to domain shift. In this paper, we facilitate this issue from the perspectives of both visual appearance-level and representation-level domain adaptation. The former adapts source-domain images to appear as if drawn from the \"style\" in the target domain and the latter attempts to learn domain-invariant representations. Specifically, we present Fully Convolutional Adaptation Networks (FCAN), a novel deep architecture for semantic segmentation which combines Appearance Adaptation Networks (AAN) and Representation Adaptation Networks (RAN). AAN learns a transformation from one domain to the other in the pixel space and RAN is optimized in an adversarial learning manner to maximally fool the domain discriminator with the learnt source and target representations. Extensive experiments are conducted on the transfer from GTA5 (game videos) to Cityscapes (urban street scenes) on semantic segmentation and our proposal achieves superior results when comparing to state-of-the-art unsupervised adaptation techniques. More remarkably, we obtain a new record: mIoU of 47.5% on BDDS (drive-cam videos) in an unsupervised setting." ] }
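The Mean Teacher scheme quoted above averages model weights rather than label predictions, i.e., the teacher is an exponential moving average of successive student weights. A minimal sketch, where plain parameter lists stand in for network weights:

```python
import numpy as np

def ema_update(teacher_params, student_params, alpha=0.99):
    """Mean Teacher weight update: after each training step the
    teacher becomes an exponential moving average of the student,
    teacher <- alpha * teacher + (1 - alpha) * student."""
    return [alpha * t + (1.0 - alpha) * s
            for t, s in zip(teacher_params, student_params)]

# Toy usage: a single scalar "weight" drifting toward the student.
teacher = [np.array(0.0)]
student = [np.array(1.0)]
teacher = ema_update(teacher, student, alpha=0.9)
```

Because the teacher changes after every step rather than once per epoch, this update is what lets Mean Teacher scale to large datasets where Temporal Ensembling becomes unwieldy.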
1904.11245
2942491430
Rendering synthetic data (e.g., 3D CAD-rendered images) to generate annotations for learning deep models in vision tasks has attracted increasing attention in recent years. However, simply applying the models learnt on synthetic images may lead to high generalization error on real images due to domain shift. To address this issue, recent progress in cross-domain recognition has featured the Mean Teacher, which directly simulates unsupervised domain adaptation as semi-supervised learning. The domain gap is thus naturally bridged with consistency regularization in a teacher-student scheme. In this work, we advance this Mean Teacher paradigm to be applicable for cross-domain detection. Specifically, we present Mean Teacher with Object Relations (MTOR) that novelly remolds Mean Teacher under the backbone of Faster R-CNN by integrating the object relations into the measure of consistency cost between teacher and student modules. Technically, MTOR firstly learns relational graphs that capture similarities between pairs of regions for teacher and student respectively. The whole architecture is then optimized with three consistency regularizations: 1) region-level consistency to align the region-level predictions between teacher and student, 2) inter-graph consistency for matching the graph structures between teacher and student, and 3) intra-graph consistency to enhance the similarity between regions of same class within the graph of student. Extensive experiments are conducted on the transfers across Cityscapes, Foggy Cityscapes, and SIM10k, and superior results are reported when comparing to state-of-the-art approaches. More remarkably, we obtain a new record of single model: 22.8% of mAP on Syn2Real detection dataset.
Similar to previous work @cite_18 , our approach aims to leverage additional unlabeled target data for learning a domain-invariant detector for cross-domain detection. The novelty is in the exploitation of Mean Teacher to bridge the domain gap with consistency regularization in the context of object detection, which has not been previously explored. Moreover, the object relation between image regions is elegantly integrated into the Mean Teacher paradigm to boost cross-domain detection.
{ "cite_N": [ "@cite_18" ], "mid": [ "2791406639" ], "abstract": [ "Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch will lead to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift on two levels: 1) the image-level shift, such as image style, illumination, etc, and 2) the instance-level shift, such as object appearance, size, etc. We build our approach based on the recent state-of-the-art Faster R-CNN model, and design two domain adaptation components, on image level and instance level, to reduce the domain discrepancy. The two domain adaptation components are based on H-divergence theory, and are implemented by learning a domain classifier in adversarial training manner. The domain classifiers on different levels are further reinforced with a consistency regularization to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate our newly proposed approach using multiple datasets including Cityscapes, KITTI, SIM10K, etc. The results demonstrate the effectiveness of our proposed approach for robust object detection in various domain shift scenarios." ] }
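The teacher-student consistency scheme underlying the Mean Teacher paradigm described in the MTOR abstract above reduces to two ingredients: the teacher's weights track the student's via an exponential moving average (EMA), and a consistency cost penalizes disagreement between their predictions on unlabeled target data. The sketch below is a minimal, framework-free illustration of those two ingredients only; the toy linear classifier, the decay `alpha`, and the random "gradient step" are illustrative assumptions, not the MTOR implementation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ema_update(teacher_w, student_w, alpha=0.99):
    """Teacher weights track the student via an exponential moving average."""
    return alpha * teacher_w + (1.0 - alpha) * student_w

def consistency_cost(teacher_logits, student_logits):
    """Mean squared error between teacher and student class distributions."""
    return np.mean((softmax(teacher_logits) - softmax(student_logits)) ** 2)

# Toy linear classifiers: 4-d features, 3 classes.
rng = np.random.default_rng(0)
student_w = rng.normal(size=(4, 3))
teacher_w = student_w.copy()

x = rng.normal(size=(8, 4))  # a batch of unlabeled target-domain features
for _ in range(5):           # pretend training steps
    # stand-in for a real gradient step on the student
    student_w -= 0.1 * rng.normal(size=student_w.shape)
    teacher_w = ema_update(teacher_w, student_w)

loss = consistency_cost(x @ teacher_w, x @ student_w)
```

In the real method this consistency term is added to the supervised source-domain loss and back-propagated only through the student; the teacher is never trained directly.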
1904.11397
2940836056
In this work, we propose an end-to-end constrained clustering scheme to tackle the person re-identification (re-id) problem. Deep neural networks (DNN) have recently proven to be effective on the person re-identification task. In particular, rather than leveraging solely a probe-gallery similarity, diffusing the similarities among the gallery images in an end-to-end manner has proven to be effective in yielding a robust probe-gallery affinity. However, existing methods do not apply the probe image as a constraint, and are prone to noise propagation during the similarity diffusion process. To overcome this, we propose an intriguing scheme which treats the person-image retrieval problem as a constrained clustering optimization problem, called deep constrained dominant sets (DCDS). Given a probe and gallery images, we re-formulate the person re-id problem as finding a constrained cluster, where the probe image is taken as a constraint (seed) and each cluster corresponds to a set of images corresponding to the same person. By optimizing the constrained clustering in an end-to-end manner, we naturally leverage the contextual knowledge of a set of images corresponding to the given person-images. We further enhance the performance by integrating an auxiliary net alongside DCDS, which employs a multi-scale Resnet. To validate the effectiveness of our method, we present experiments on several benchmark datasets and show that the proposed method can outperform state-of-the-art methods.
Dominant sets (DS) clustering @cite_40 and its constrained variant constrained dominant sets (CDS) @cite_24 have been employed in several recent computer vision applications ranging from person tracking @cite_0 , @cite_41 , geo-localization @cite_7 , image retrieval @cite_18 , @cite_1 , 3D object recognition @cite_42 , to image segmentation and co-segmentation @cite_46 . Zemene @cite_24 presented CDS with its applications to interactive image segmentation. Following this, @cite_46 uses CDS to tackle both image segmentation and co-segmentation in interactive and unsupervised setups. Wang @cite_42 recently used dominant sets clustering in a recursive manner to select representative images from a collection of images and applied a pooling operation on the refined images which survive the recursive selection process. Nevertheless, none of the above works have attempted to leverage the dominant sets algorithm in an end-to-end manner.
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_41", "@cite_42", "@cite_1", "@cite_24", "@cite_0", "@cite_40", "@cite_46" ], "mid": [ "", "2591694916", "2640181096", "2893477965", "2885633361", "2964234847", "", "2170432751", "2737723254" ], "abstract": [ "", "This paper presents a new approach for the challenging problem of geo-localization using image matching in a structured database of city-wide reference images with known GPS coordinates. We cast the geo-localization as a clustering problem of local image features. Akin to existing approaches to the problem, our framework builds on low-level features which allow local matching between images. For each local feature in the query image, we find its approximate nearest neighbors in the reference set. Next, we cluster the features from reference images using Dominant Set clustering, which affords several advantages over existing approaches. First, it permits variable number of nodes in the cluster, which we use to dynamically select the number of nearest neighbors for each query feature based on its discrimination value. Second, this approach is several orders of magnitude faster than existing approaches. Thus, we obtain multiple clusters (different local maximizers) and obtain a robust final solution to the problem using multiple weak solutions through constrained Dominant Set clustering on global image features, where we enforce the constraint that the query image must be included in the cluster. This second level of clustering also bypasses heuristic approaches to voting and selecting the reference image that matches to the query. We evaluate the proposed framework on an existing dataset of 102k street view images as well as a new larger dataset of 300k images, and show that it outperforms the state-of-the-art by 20 and 7 percent, respectively, on the two datasets.", "In this paper, a unified three-layer hierarchical approach for solving tracking problems in multiple non-overlapping cameras is proposed. 
Given a video and a set of detections (obtained by any person detector), we first solve within-camera tracking employing the first two layers of our framework and, then, in the third layer, we solve across-camera tracking by merging tracks of the same person in all cameras in a simultaneous fashion. To best serve our purpose, a constrained dominant sets clustering (CDSC) technique, a parametrized version of standard quadratic optimization, is employed to solve both tracking tasks. The tracking problem is cast as finding constrained dominant sets from a graph. In addition to having a unified framework that simultaneously solves within- and across-camera tracking, the third layer helps link broken tracks of the same person occurring during within-camera tracking. In this work, we propose a fast algorithm, based on dynamics from evolutionary game theory, which is efficient and scalable to large-scale real-world applications.", "", "Aggregating different image features for image retrieval has recently shown its effectiveness. While highly effective, though, the question of how to uplift the impact of the best features for a specific query image persists as an open computer vision problem. In this paper, we propose a computationally efficient approach to fuse several hand-crafted and deep features, based on the probabilistic distribution of a given membership score of a constrained cluster in an unsupervised manner. First, we introduce an incremental nearest neighbor (NN) selection method, whereby we dynamically select k-NN to the query. We then build several graphs from the obtained NN sets and employ constrained dominant sets (CDS) on each graph G to assign edge weights which consider the intrinsic manifold structure of the graph, and detect false matches to the query. Finally, we elaborate the computation of feature positive-impact weight (PIW) based on the dispersive degree of the characteristics vector. 
To this end, we exploit the entropy of a cluster membership-score distribution. In addition, the final NN set bypasses a heuristic voting scheme. Experiments on several retrieval benchmark datasets show that our method can improve the state-of-the-art result.", "We propose a new approach to interactive image segmentation based on some properties of a family of quadratic optimization problems related to dominant sets, a well-known graph-theoretic notion of a cluster which generalizes the concept of a maximal clique to edge-weighted graphs. In particular, we show that by properly controlling a regularization parameter which determines the structure and the scale of the underlying problem, we are in a position to extract groups of dominant-set clusters which are constrained to contain user-selected elements. The resulting algorithm can deal naturally with any type of input modality, including scribbles, sloppy contours, and bounding boxes, and is able to robustly handle noisy annotations on the part of the user. Experiments on standard benchmark datasets show the effectiveness of our approach as compared to state-of-the-art algorithms on a variety of natural images under several input conditions.", "", "We develop a new graph-theoretic approach for pairwise data clustering which is motivated by the analogies between the intuitive concept of a cluster and that of a dominant set of vertices, a notion introduced here which generalizes that of a maximal complete subgraph to edge-weighted graphs. We establish a correspondence between dominant sets and the extrema of a quadratic form over the standard simplex, thereby allowing the use of straightforward and easily implementable continuous optimization techniques from evolutionary game theory. 
Numerical examples on various point-set and image segmentation problems confirm the potential of the proposed approach", "Image segmentation has come a long way since the early days of computer vision, and still remains a challenging task. Modern variations of the classical (purely bottom-up) approach, involve, e.g., some form of user assistance (interactive segmentation) or ask for the simultaneous segmentation of two or more images (co-segmentation). At an abstract level, all these variants can be thought of as \"constrained\" versions of the original formulation, whereby the segmentation process is guided by some external source of information. In this paper, we propose a new approach to tackle this kind of problems in a unified way. Our work is based on some properties of a family of quadratic optimization problems related to dominant sets, a well-known graph-theoretic notion of a cluster which generalizes the concept of a maximal clique to edge-weighted graphs. In particular, we show that by properly controlling a regularization parameter which determines the structure and the scale of the underlying problem, we are in a position to extract groups of dominant-set clusters that are constrained to contain predefined elements. In particular, we shall focus on interactive segmentation and co-segmentation (in both the unsupervised and the interactive versions). The proposed algorithm can deal naturally with several type of constraints and input modality, including scribbles, sloppy contours, and bounding boxes, and is able to robustly handle noisy annotations on the part of the user. Experiments on standard benchmark datasets show the effectiveness of our approach as compared to state-of-the-art algorithms on a variety of natural images under several input conditions and constraints." ] }
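The dominant-set abstracts above reduce clustering to maximizing the quadratic form x^T A x over the standard simplex, which can be done with replicator dynamics from evolutionary game theory; the support of the fixed point identifies a dominant set. The sketch below illustrates that mechanism on a toy affinity matrix; the matrix values, iteration count, and support threshold are arbitrary choices for the example.

```python
import numpy as np

def replicator_dynamics(A, n_iter=200, tol=1e-8):
    """Maximize x^T A x over the standard simplex via replicator dynamics.

    A is a nonnegative affinity matrix with zero diagonal. The support of
    the returned x (its nonzero entries) identifies a dominant set.
    """
    n = A.shape[0]
    x = np.full(n, 1.0 / n)        # start at the simplex barycenter
    for _ in range(n_iter):
        Ax = A @ x
        x_new = x * Ax / (x @ Ax)  # multiplicative update; stays on the simplex
        if np.linalg.norm(x_new - x, 1) < tol:
            x = x_new
            break
        x = x_new
    return x

# Two groups: a tight triangle {0, 1, 2} and a weakly linked pair {3, 4}.
A = np.array([[0, 9, 9, 1, 0],
              [9, 0, 9, 0, 1],
              [9, 9, 0, 0, 0],
              [1, 0, 0, 0, 8],
              [0, 1, 0, 8, 0]], dtype=float)
x = replicator_dynamics(A)
cluster = np.where(x > 1e-4)[0]    # support of x = the dominant set
```

To extract further clusters, the dominant set is peeled off and the dynamics are re-run on the remaining vertices; the constrained (CDS) variant adds a regularization term so that the extracted cluster must contain user-selected seed elements.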
1904.11227
2941734409
In this paper, we introduce a new idea for unsupervised domain adaptation via a remold of Prototypical Networks, which learn an embedding space and perform classification via a remold of the distances to the prototype of each class. Specifically, we present Transferrable Prototypical Networks (TPN) for adaptation such that the prototypes for each class in source and target domains are close in the embedding space and the score distributions predicted by prototypes separately on source and target data are similar. Technically, TPN initially matches each target example to the nearest prototype in the source domain and assigns an example a "pseudo" label. The prototype of each class could then be computed on source-only, target-only and source-target data, respectively. The optimization of TPN is end-to-end trained by jointly minimizing the distance across the prototypes on three types of data and KL-divergence of score distributions output by each pair of the prototypes. Extensive experiments are conducted on the transfers across MNIST, USPS and SVHN datasets, and superior results are reported when comparing to state-of-the-art approaches. More remarkably, we obtain an accuracy of 80.4% of single model on VisDA 2017 dataset.
Inspired by the recent advances in image representation using deep convolutional neural networks (DCNNs), a few deep architecture based methods have been proposed for unsupervised domain adaptation. In particular, one common deep solution for unsupervised domain adaptation is to guide the feature learning in DCNNs by minimizing the domain discrepancy with Maximum Mean Discrepancy (MMD) @cite_26 . MMD is an effective non-parametric metric for the comparisons between the distributions of source and target domains. @cite_14 is one of the early works that incorporates MMD into DCNNs with a regular supervised classification loss on the source domain to learn representations that are both semantically meaningful and domain invariant. Later in @cite_3 , Long simultaneously exploit transferability of features from multiple layers via the multiple kernel variant of MMD. The work is further extended by adapting classifiers through a residual transfer module in @cite_2 . Most recently, @cite_19 explores domain shift reduction in joint distributions of the network activation of multiple task-specific layers.
{ "cite_N": [ "@cite_14", "@cite_26", "@cite_3", "@cite_19", "@cite_2" ], "mid": [ "1565327149", "2212660284", "2159291411", "2964278684", "2279034837" ], "abstract": [ "Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark. Fine-tuning deep models in a new domain can require a significant amount of data, which for many applications is simply not available. We propose a new CNN architecture which introduces an adaptation layer and an additional domain confusion loss, to learn a representation that is both semantically meaningful and domain invariant. We additionally show that a domain confusion metric can be used for model selection to determine the dimension of an adaptation layer and the best position for the layer in the CNN architecture. Our proposed adaptation method offers empirical performance which exceeds previously published results on a standard benchmark visual domain adaptation task.", "We propose a framework for analyzing and comparing distributions, which we use to construct statistical tests to determine if two samples are drawn from different distributions. Our test statistic is the largest difference in expectations over functions in the unit ball of a reproducing kernel Hilbert space (RKHS), and is called the maximum mean discrepancy (MMD).We present two distribution free tests based on large deviation bounds for the MMD, and a third test based on the asymptotic distribution of this statistic. The MMD can be computed in quadratic time, although efficient linear time approximations are available. Our statistic is an instance of an integral probability metric, and various classical metrics on distributions are obtained when alternative function classes are used in place of an RKHS. 
We apply our two-sample tests to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where they perform strongly. Excellent performance is also obtained when comparing distributions over graphs, for which these are the first such tests.", "Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. In this paper, we propose a new Deep Adaptation Network (DAN) architecture, which generalizes deep convolutional neural network to the domain adaptation scenario. In DAN, hidden representations of all task-specific layers are embedded in a reproducing kernel Hilbert space where the mean embeddings of different domain distributions can be explicitly matched. The domain discrepancy is further reduced using an optimal multikernel selection method for mean embedding matching. DAN can learn transferable features with statistical guarantees, and can scale linearly by unbiased estimate of kernel embedding. Extensive empirical evidence shows that the proposed architecture yields state-of-the-art image classification error rates on standard domain adaptation benchmarks.", "Deep networks have been successfully applied to learn transferable features for adapting models from a source domain to a different target domain. In this paper, we present joint adaptation networks (JAN), which learn a transfer network by aligning the joint distributions of multiple domain-specific layers across domains based on a joint maximum mean discrepancy (JMMD) criterion. 
Adversarial training strategy is adopted to maximize JMMD such that the distributions of the source and target domains are made more distinguishable. Learning can be performed by stochastic gradient descent with the gradients computed by back-propagation in linear-time. Experiments testify that our model yields state of the art results on standard datasets.", "The recent success of deep neural networks relies on massive amounts of labeled data. For a target task where labeled data is unavailable, domain adaptation can transfer a learner from a different source domain. In this paper, we propose a new approach to domain adaptation in deep networks that can jointly learn adaptive classifiers and transferable features from labeled data in the source domain and unlabeled data in the target domain. We relax a shared-classifier assumption made by previous methods and assume that the source classifier and target classifier differ by a residual function. We enable classifier adaptation by plugging several layers into deep network to explicitly learn the residual function with reference to the target classifier. We fuse features of multiple layers with tensor product and embed them into reproducing kernel Hilbert spaces to match distributions for feature adaptation. The adaptation can be achieved in most feed-forward models by extending them with new residual layers and loss functions, which can be trained efficiently via back-propagation. Empirical evidence shows that the new approach outperforms state of the art methods on standard domain adaptation benchmarks." ] }
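MMD, the discrepancy measure used by the DAN/JAN line of work in the abstracts above, compares the mean embeddings of two samples in a reproducing kernel Hilbert space. A quadratic-time (biased, V-statistic) estimate with an RBF kernel can be sketched as below; the bandwidth `gamma`, sample sizes, and Gaussian toy data are illustrative assumptions, and real methods typically use a multi-kernel variant with learned weights.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """k(x, y) = exp(-gamma * ||x - y||^2), evaluated for all pairs."""
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def mmd2(X, Y, gamma=1.0):
    """Biased quadratic-time estimate of squared MMD between samples X and Y."""
    return (rbf_kernel(X, X, gamma).mean()
            - 2 * rbf_kernel(X, Y, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean())

rng = np.random.default_rng(1)
source = rng.normal(0.0, 1.0, size=(200, 2))  # "source domain" features
near   = rng.normal(0.0, 1.0, size=(200, 2))  # same distribution
far    = rng.normal(3.0, 1.0, size=(200, 2))  # shifted "target domain"

# The estimate is nonnegative, near zero for matching distributions,
# and grows under a domain shift.
shift_gap = mmd2(source, far) - mmd2(source, near)
```

In the deep adaptation setting, `X` and `Y` would be the activations of a task-specific layer on source and target batches, and this term is minimized jointly with the source classification loss.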
1904.11227
2941734409
In this paper, we introduce a new idea for unsupervised domain adaptation via a remold of Prototypical Networks, which learn an embedding space and perform classification via a remold of the distances to the prototype of each class. Specifically, we present Transferrable Prototypical Networks (TPN) for adaptation such that the prototypes for each class in source and target domains are close in the embedding space and the score distributions predicted by prototypes separately on source and target data are similar. Technically, TPN initially matches each target example to the nearest prototype in the source domain and assigns an example a "pseudo" label. The prototype of each class could then be computed on source-only, target-only and source-target data, respectively. The optimization of TPN is end-to-end trained by jointly minimizing the distance across the prototypes on three types of data and KL-divergence of score distributions output by each pair of the prototypes. Extensive experiments are conducted on the transfers across MNIST, USPS and SVHN datasets, and superior results are reported when comparing to state-of-the-art approaches. More remarkably, we obtain an accuracy of 80.4% of single model on VisDA 2017 dataset.
Another branch of unsupervised domain adaptation in DCNNs is to exploit the domain confusion by learning a domain discriminator @cite_6 @cite_4 @cite_25 @cite_7 @cite_9 . Here the domain discriminator is designed to predict the domain (source / target) of each input sample and is trained in an adversarial fashion, similar to GANs @cite_15 , for learning domain-invariant representations. For example, @cite_25 devises a domain confusion loss measured in the domain discriminator for enforcing the learnt representation to be domain invariant. Similar in spirit, Ganin explore the domain confusion problem as a binary classification task and optimize the domain discriminator via a gradient reversal algorithm in @cite_6 . Coupled GANs @cite_16 directly applies GANs to the domain adaptation problem to explicitly reduce the domain shifts by learning a joint distribution of multi-domain images. Recently, @cite_7 combines adversarial learning with discriminative feature learning for unsupervised domain adaptation. Most recently, @cite_27 extends the domain discriminator by learning a domain-invariant feature extractor and performing feature augmentation.
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_9", "@cite_6", "@cite_27", "@cite_15", "@cite_16", "@cite_25" ], "mid": [ "2798702669", "2949987290", "2799122863", "2963826681", "2770856226", "2099471712", "2963784072", "2214409633" ], "abstract": [ "The recent advances in deep neural networks have demonstrated high capability in a wide variety of scenarios. Nevertheless, fine-tuning deep models in a new domain still requires a significant amount of labeled data despite expensive labeling efforts. A valid question is how to leverage the source knowledge plus unlabeled or only sparsely labeled target data for learning a new model in target domain. The core problem is to bring the source and target distributions closer in the feature space. In the paper, we facilitate this issue in an adversarial learning framework, in which a domain discriminator is devised to handle domain shift. Particularly, we explore the learning in the context of hashing problem, which has been studied extensively due to its great efficiency in gigantic data. Specifically, a novel Deep Domain Adaptation Hashing with Adversarial learning (DeDAHA) architecture is presented, which mainly consists of three components: a deep convolutional neural networks (CNN) for learning basic image frame representation followed by an adversary stream on one hand to optimize the domain discriminator, and on the other, to interact with each domain-specific hashing stream for encoding image representation to hash codes. The whole architecture is trained end-to-end by jointly optimizing two types of losses, i.e., triplet ranking loss to preserve the relative similarity ordering in the input triplets and adversarial loss to maximally fool the domain discriminator with the learnt source and target feature distributions. Extensive experiments are conducted on three domain transfer tasks, including cross-domain digits retrieval, image to image and image to video transfers, on several benchmarks. 
Our DeDAHA framework achieves superior results when compared to the state-of-the-art techniques.", "Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains. They also can improve recognition despite the presence of domain shift or dataset bias: several adversarial approaches to unsupervised domain adaptation have recently been introduced, which reduce the difference between the training and test domain distributions and thus improve generalization performance. Prior generative approaches show compelling visualizations, but are not optimal on discriminative tasks and can be limited to smaller shifts. Prior discriminative approaches could handle larger domain shifts, but imposed tied weights on the model and did not exploit a GAN-based loss. We first outline a novel generalized framework for adversarial adaptation, which subsumes recent state-of-the-art approaches as special cases, and we use this generalized view to better relate the prior approaches. We propose a previously unexplored instance of our general framework which combines discriminative modeling, untied weight sharing, and a GAN loss, which we call Adversarial Discriminative Domain Adaptation (ADDA). We show that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and demonstrate the promise of our approach by exceeding state-of-the-art unsupervised adaptation results on standard cross-domain digit classification tasks and a new more difficult cross-modality object classification task.", "The recent advances in deep neural networks have convincingly demonstrated high capability in learning vision models on large datasets. Nevertheless, collecting expert labeled datasets especially with pixel-level annotations is an extremely expensive process. An appealing alternative is to render synthetic data (e.g., computer games) and generate ground truth automatically. 
However, simply applying the models learnt on synthetic images may lead to high generalization error on real images due to domain shift. In this paper, we facilitate this issue from the perspectives of both visual appearance-level and representation-level domain adaptation. The former adapts source-domain images to appear as if drawn from the \"style\" in the target domain and the latter attempts to learn domain-invariant representations. Specifically, we present Fully Convolutional Adaptation Networks (FCAN), a novel deep architecture for semantic segmentation which combines Appearance Adaptation Networks (AAN) and Representation Adaptation Networks (RAN). AAN learns a transformation from one domain to the other in the pixel space and RAN is optimized in an adversarial learning manner to maximally fool the domain discriminator with the learnt source and target representations. Extensive experiments are conducted on the transfer from GTA5 (game videos) to Cityscapes (urban street scenes) on semantic segmentation and our proposal achieves superior results when comparing to state-of-the-art unsupervised adaptation techniques. More remarkably, we obtain a new record: mIoU of 47.5% on BDDS (drive-cam videos) in an unsupervised setting.", "Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amount of labeled data from the source domain and large amount of unlabeled data from the target domain (no labeled target-domain data is necessary). 
As the training progresses, the approach promotes the emergence of \"deep\" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard back propagation. Overall, the approach can be implemented with little effort using any of the deep-learning packages. The method performs very well in a series of image classification experiments, achieving adaptation effect in the presence of big domain shifts and outperforming previous state-of-the-art on Office datasets.", "Recent works showed that Generative Adversarial Networks (GANs) can be successfully applied in unsupervised domain adaptation, where, given a labeled source dataset and an unlabeled target dataset, the goal is to train powerful classifiers for the target samples. In particular, it was shown that a GAN objective function can be used to learn target features indistinguishable from the source ones. In this work, we extend this framework by (i) forcing the learned feature extractor to be domain-invariant, and (ii) training it through data augmentation in the feature space, namely performing feature augmentation. While data augmentation in the image space is a well established technique in deep learning, feature augmentation has not yet received the same level of attention. We accomplish it by means of a feature generator trained by playing the GAN minimax game against source features. 
Results show that both enforcing domain-invariance and performing feature augmentation lead to superior or comparable performance to state-of-the-art results in several unsupervised domain adaptation benchmarks.", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.", "We propose coupled generative adversarial network (CoGAN) for learning a joint distribution of multi-domain images. In contrast to the existing approaches, which require tuples of corresponding images in different domains in the training set, CoGAN can learn a joint distribution without any tuple of corresponding images. It can learn a joint distribution with just samples drawn from the marginal distributions. This is achieved by enforcing a weight-sharing constraint that limits the network capacity and favors a joint distribution solution over a product of marginal distributions one. We apply CoGAN to several joint distribution learning tasks, including learning a joint distribution of color and depth images, and learning a joint distribution of face images with different attributes. 
For each task it successfully learns the joint distribution without any tuple of corresponding images. We also demonstrate its applications to domain adaptation and image transformation.", "Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias. Fine-tuning deep models in a new domain can require a significant amount of labeled data, which for many applications is simply not available. We propose a new CNN architecture to exploit unlabeled and sparsely labeled target domain data. Our approach simultaneously optimizes for domain invariance to facilitate domain transfer and uses a soft label distribution matching loss to transfer information between tasks. Our proposed adaptation method offers empirical performance which exceeds previously published results on two standard benchmark visual domain adaptation tasks, evaluated across supervised and semi-supervised adaptation settings." ] }
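The gradient reversal layer mentioned in the first abstract above is simple enough to sketch without a deep-learning framework. This is a minimal illustration, not the cited implementation; the class and parameter names (`GradientReversal`, `lam`) are ours.

```python
class GradientReversal:
    """Acts as the identity in the forward pass, but scales gradients by
    -lam in the backward pass, so the feature extractor upstream is pushed
    to *maximize* the domain classifier's loss (domain-invariant features),
    while downstream layers train normally."""

    def __init__(self, lam=1.0):
        self.lam = lam  # trade-off between label loss and domain-confusion loss

    def forward(self, x):
        return x  # activations pass through unchanged

    def backward(self, grad_output):
        # sign-flipped, scaled gradient flows back to the feature extractor
        return [-self.lam * g for g in grad_output]
```

In frameworks with autograd, the same effect is obtained by defining a custom function whose backward multiplies the incoming gradient by a negative constant.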
1904.11227
2941734409
In this paper, we introduce a new idea for unsupervised domain adaptation via a remold of Prototypical Networks, which learn an embedding space and perform classification via a remold of the distances to the prototype of each class. Specifically, we present Transferrable Prototypical Networks (TPN) for adaptation such that the prototypes for each class in source and target domains are close in the embedding space and the score distributions predicted by prototypes separately on source and target data are similar. Technically, TPN initially matches each target example to the nearest prototype in the source domain and assigns an example a "pseudo" label. The prototype of each class could then be computed on source-only, target-only and source-target data, respectively. The optimization of TPN is end-to-end trained by jointly minimizing the distance across the prototypes on three types of data and KL-divergence of score distributions output by each pair of the prototypes. Extensive experiments are conducted on the transfers across MNIST, USPS and SVHN datasets, and superior results are reported when comparing to state-of-the-art approaches. More remarkably, we obtain an accuracy of 80.4% of single model on VisDA 2017 dataset.
In summary, our approach belongs to domain discrepancy based methods. Similar to previous approaches @cite_19 @cite_14 , our TPN leverages additional unlabeled target data for learning task-specific classifiers. The novelty is the exploitation of multi-granular domain discrepancy in Prototypical Networks, at class-level and sample-level, which has not been fully explored in the literature. Class-level domain discrepancy is reduced by learning similar prototypes of each class in different domains, while sample-level discrepancy is reduced by enforcing similar score distributions across prototypes of different domains.
{ "cite_N": [ "@cite_19", "@cite_14" ], "mid": [ "2964278684", "1565327149" ], "abstract": [ "Deep networks have been successfully applied to learn transferable features for adapting models from a source domain to a different target domain. In this paper, we present joint adaptation networks (JAN), which learn a transfer network by aligning the joint distributions of multiple domain-specific layers across domains based on a joint maximum mean discrepancy (JMMD) criterion. Adversarial training strategy is adopted to maximize JMMD such that the distributions of the source and target domains are made more distinguishable. Learning can be performed by stochastic gradient descent with the gradients computed by back-propagation in linear-time. Experiments testify that our model yields state of the art results on standard datasets.", "Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark. Fine-tuning deep models in a new domain can require a significant amount of data, which for many applications is simply not available. We propose a new CNN architecture which introduces an adaptation layer and an additional domain confusion loss, to learn a representation that is both semantically meaningful and domain invariant. We additionally show that a domain confusion metric can be used for model selection to determine the dimension of an adaptation layer and the best position for the layer in the CNN architecture. Our proposed adaptation method offers empirical performance which exceeds previously published results on a standard benchmark visual domain adaptation task." ] }
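The prototype-matching step TPN builds on can be sketched in a few lines: compute the mean embedding per class, then pseudo-label each target example with the class of its nearest source prototype. Function names and the toy 2-D embeddings below are illustrative, not from the cited paper.

```python
from math import dist  # Euclidean distance (Python 3.8+)

def class_prototypes(embeddings, labels):
    """Prototype of a class = mean of the embeddings carrying that label."""
    sums, counts = {}, {}
    for e, y in zip(embeddings, labels):
        acc = sums.setdefault(y, [0.0] * len(e))
        for i, v in enumerate(e):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def pseudo_label(target_embedding, prototypes):
    """Assign a target example the class of its nearest source prototype."""
    return min(prototypes, key=lambda y: dist(target_embedding, prototypes[y]))
```

With the pseudo-labels in hand, prototypes can be recomputed on source-only, target-only, and combined data, and their pairwise distances minimized, as the abstract describes.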
1904.11272
2942143931
We propose a local adversarial disentangling network (LADN) for facial makeup and de-makeup. Central to our method are multiple and overlapping local adversarial discriminators in a content-style disentangling network for achieving local detail transfer between facial images, with the use of asymmetric loss functions for dramatic makeup styles with high-frequency details. Existing techniques do not demonstrate or fail to transfer high-frequency details in a global adversarial setting, or train a single local discriminator only to ensure image structure consistency and thus work only for relatively simple styles. Unlike others, our proposed local adversarial discriminators can distinguish whether the generated local image details are consistent with the corresponding regions in the given reference image in cross-image style transfer in an unsupervised setting. Incorporating these technical contributions, we achieve not only state-of-the-art results on conventional styles but also novel results involving complex and dramatic styles with high-frequency details covering large areas across multiple facial features. A carefully designed dataset of unpaired before and after makeup images will be released.
Makeup transfer and removal. Tong et al. @cite_3 first tackled this problem by solving the mapping of cosmetic contributions of color and subtle surface geometry. However, their method requires the input to be in pairs of well-aligned before-makeup and after-makeup images and thus the practicability is limited. Guo et al. @cite_21 proposed to decompose the source and reference images into face structure, skin detail, and color layers and then transfer information on each layer correspondingly. Li et al. @cite_20 decomposed the image into intrinsic image layers, and used physically-based reflectance models to manipulate each layer to achieve makeup transfer. Recently, a number of makeup recommendation and synthesis systems have been developed @cite_1 @cite_15 @cite_14 , but their contribution is on makeup recommendation and the capability of makeup transfer is limited. As the style transfer problem has recently been successfully formulated as maximizing feature similarities in deep neural networks, Liu et al. @cite_25 proposed to transfer makeup style by locally applying the style transfer technique on facial components.
{ "cite_N": [ "@cite_14", "@cite_21", "@cite_1", "@cite_3", "@cite_15", "@cite_25", "@cite_20" ], "mid": [ "", "2121293528", "2065820768", "2161022057", "", "2963832775", "1916003155" ], "abstract": [ "", "This paper introduces an approach of creating face makeup upon a face image with another image as the style example. Our approach is analogous to physical makeup, as we modify the color and skin detail while preserving the face structure. More precisely, we first decompose the two images into three layers: face structure layer, skin detail layer, and color layer. Thereafter, we transfer information from each layer of one image to corresponding layer of the other image. One major advantage of the proposed method lies in that only one example image is required. This renders face makeup by example very convenient and practical. Equally, this enables some additional interesting applications, such as applying makeup by a portraiture. The experiment results demonstrate the effectiveness of the proposed approach in faithfully transferring makeup.", "Beauty e-Experts, a fully automatic system for hairstyle and facial makeup recommendation and synthesis, is developed in this work. Given a user-provided frontal face image with short/bound hair and no/light makeup, the Beauty e-Experts system can not only recommend the most suitable hairdo and makeup, but also show the synthetic effects. To obtain enough knowledge for beauty modeling, we build the Beauty e-Experts Database, which contains 1,505 attractive female photos with a variety of beauty attributes and beauty-related attributes annotated. Based on this Beauty e-Experts Dataset, two problems are considered for the Beauty e-Experts system: what to recommend and how to wear, which describe a similar process of selecting hairstyle and cosmetics in our daily life.
For the what-to-recommend problem, we propose a multiple tree-structured super-graphs model to explore the complex relationships among the high-level beauty attributes, mid-level beauty-related attributes and low-level image features, and then based on this model, the most compatible beauty attributes for a given facial image can be efficiently inferred. For the how-to-wear problem, an effective and efficient facial image synthesis module is designed to seamlessly synthesize the recommended hairstyle and makeup into the user facial image. Extensive experimental evaluations and analysis on testing images of various conditions well demonstrate the effectiveness of the proposed system.", "Cosmetic makeup is used worldwide as a means to enhance beauty and express moods. An art form in its own right, cosmetic styles continuously change and evolve to reflect cultural and societal trends. While countless magazines and books are dedicated to demonstrating cosmetic art, the actual application of makeup still remains a physical endeavor. In this paper, we describe a procedure to apply cosmetic makeup to the image of a person's face with the click of a mouse. Our approach works from before-and-after example images created by professional makeup artists. Using our "cosmetic-transfer" procedure, we can realistically transfer the cosmetic style captured in the example-pair to another person's face. This greatly reduces the time and effort needed to demonstrate a cosmetic style on a new person's face. In addition, our approach can be used to mix-and-match, and even fine-tune, example styles, all virtually, without the need for any physical makeup.
Then, both the before-makeup and the reference faces are fed into the proposed Deep Transfer Network to generate the after-makeup face. Our end-to-end makeup transfer network have several nice properties including: (1) with complete functions: including foundation, lip gloss, and eye shadow transfer; (2) cosmetic specific: different cosmetics are transferred in different manners; (3) localized: different cosmetics are applied on different facial regions; (4) producing naturally looking results without obvious artifacts; (5) controllable makeup lightness: various results from light makeup to heavy makeup can be generated. Qualitative and quantitative experiments show that our network performs much better than the methods of [Guo and Sim, 2009] and two variants of NerualStyle [, 2015a].", "We present a method for simulating makeup in a face image. To generate realistic results without detailed geometric and reflectance measurements of the user, we propose to separate the image into intrinsic image layers and alter them according to proposed adaptations of physically-based reflectance models. Through this layer manipulation, the measured properties of cosmetic products are applied while preserving the appearance characteristics and lighting conditions of the target face. This approach is demonstrated on various forms of cosmetics including foundation, blush, lipstick, and eye shadow. Experimental results exhibit a close approximation to ground truth images, without artifacts such as transferred personal features and lighting effects that degrade the results of image-based makeup transfer methods." ] }
1904.11272
2942143931
We propose a local adversarial disentangling network (LADN) for facial makeup and de-makeup. Central to our method are multiple and overlapping local adversarial discriminators in a content-style disentangling network for achieving local detail transfer between facial images, with the use of asymmetric loss functions for dramatic makeup styles with high-frequency details. Existing techniques do not demonstrate or fail to transfer high-frequency details in a global adversarial setting, or train a single local discriminator only to ensure image structure consistency and thus work only for relatively simple styles. Unlike others, our proposed local adversarial discriminators can distinguish whether the generated local image details are consistent with the corresponding regions in the given reference image in cross-image style transfer in an unsupervised setting. Incorporating these technical contributions, we achieve not only state-of-the-art results on conventional styles but also novel results involving complex and dramatic styles with high-frequency details covering large areas across multiple facial features. A carefully designed dataset of unpaired before and after makeup images will be released.
In addition to makeup transfer, the problem of digitally removing makeup from portraits has also gained some attention from researchers @cite_9 @cite_19 . But all of them treat makeup transfer and removal as separate problems. Chang et al. @cite_24 formulated the makeup transfer and removal problem as an unsupervised image domain transfer problem. They augmented the CycleGAN with a makeup reference, so that the specific makeup style of the reference image can be transferred to the non-makeup face to generate photo-realistic results. However, since they crop out the regions of eyes and mouth and train them separately as local paths, more emphasis is given to these regions. Therefore, the makeup style on other regions (such as nose, cheeks, forehead or the overall skin tone foundation) cannot be handled properly. Very recently, Li et al. @cite_18 also tackled the makeup transfer and removal problem together by incorporating a "makeup loss" into the CycleGAN. Although their network structure is somewhat similar, we are the first to achieve disentanglement of the makeup latent, as well as transfer and removal of extreme and dramatic makeup styles.
{ "cite_N": [ "@cite_24", "@cite_19", "@cite_9", "@cite_18" ], "mid": [ "2798600195", "", "2342479552", "2896240508" ], "abstract": [ "This paper introduces an automatic method for editing a portrait photo so that the subject appears to be wearing makeup in the style of another person in a reference photo. Our unsupervised learning approach relies on a new framework of cycle-consistent generative adversarial networks. Different from the image domain transfer problem, our style transfer problem involves two asymmetric functions: a forward function encodes example-based style transfer, whereas a backward function removes the style. We construct two coupled networks to implement these functions - one that transfers makeup style and a second that can remove makeup - such that the output of their successive application to an input photo will match the input. The learned style network can then quickly apply an arbitrary makeup style to an arbitrary photo. We demonstrate the effectiveness on a broad range of portraits and styles.", "", "In this work, we propose a novel automatic makeup detector and remover framework. For makeup detector, a locality-constrained low-rank dictionary learning algorithm is used to determine and locate the usage of cosmetics. For the challenging task of makeup removal, a locality-constrained coupled dictionary learning (LC-CDL) framework is proposed to synthesize non-makeup face, so that the makeup could be erased according to the style. Moreover, we build a stepwise makeup dataset (SMU) which to the best of our knowledge is the first dataset with procedures of makeup. This novel technology itself carries many practical applications, e.g. products recommendation for consumers; user-specified makeup tutorial; security applications on makeup face verification. Finally, our system is evaluated on three existing (VMU, MIW, YMU) and one own-collected makeup datasets. 
Experimental results have demonstrated the effectiveness of DL-based method on makeup detection. The proposed LC-CDL shows very promising performance on makeup removal regarding on the structure similarity. In addition, the comparison of face verification accuracy with presence or absence of makeup is presented, which illustrates an application of our automatic makeup remover system in the context of face verification with facial makeup.", "Facial makeup transfer aims to translate the makeup style from a given reference makeup face image to another non-makeup one while preserving face identity. Such an instance-level transfer problem is more challenging than conventional domain-level transfer tasks, especially when paired data is unavailable. Makeup style is also different from global styles (e.g., paintings) in that it consists of several local styles cosmetics, including eye shadow, lipstick, foundation, and so on. Extracting and transferring such local and delicate makeup information is infeasible for existing style transfer methods. We address the issue by incorporating both global domain-level loss and local instance-level loss in an dual input output Generative Adversarial Network, called BeautyGAN. Specifically, the domain-level transfer is ensured by discriminators that distinguish generated images from domains' real samples. The instance-level loss is calculated by pixel-level histogram loss on separate local facial regions. We further introduce perceptual loss and cycle consistency loss to generate high quality faces and preserve identity. The overall objective function enables the network to learn translation on instance-level through unsupervised adversarial learning. We also build up a new makeup dataset that consists of 3834 high-resolution face images. Extensive experiments show that BeautyGAN could generate visually pleasant makeup faces and accurate transferring results. Data and code are available at http: liusi-group.com projects BeautyGAN." ] }
1904.11272
2942143931
We propose a local adversarial disentangling network (LADN) for facial makeup and de-makeup. Central to our method are multiple and overlapping local adversarial discriminators in a content-style disentangling network for achieving local detail transfer between facial images, with the use of asymmetric loss functions for dramatic makeup styles with high-frequency details. Existing techniques do not demonstrate or fail to transfer high-frequency details in a global adversarial setting, or train a single local discriminator only to ensure image structure consistency and thus work only for relatively simple styles. Unlike others, our proposed local adversarial discriminators can distinguish whether the generated local image details are consistent with the corresponding regions in the given reference image in cross-image style transfer in an unsupervised setting. Incorporating these technical contributions, we achieve not only state-of-the-art results on conventional styles but also novel results involving complex and dramatic styles with high-frequency details covering large areas across multiple facial features. A carefully designed dataset of unpaired before and after makeup images will be released.
Global and local discriminators. Since Goodfellow et al. @cite_11 proposed the generative adversarial networks (GANs), many related works have employed discriminators in a global setting. In the domain translation problem, while a global discriminator can distinguish images from different domains, it can only capture global structures for a generator to learn. Local (patch) discriminators can compensate for this by assuming independence between pixels separated by a patch diameter and modeling images as Markov random fields. Li et al. @cite_16 first utilized the discriminator loss for different local patches to train a generative neural network. Such a "PatchGAN" structure was also used in @cite_7 , where a local discriminator was incorporated with an L1 loss to encourage the generator to capture local high-frequency details. In image completion @cite_4 @cite_5 , a global discriminator was used to maintain global consistency of image structures, while a local discriminator was used to ensure consistency of the generated patches in the completed region with the image context. Azadi et al. @cite_0 similarly incorporated a local discriminator together with a global discriminator on the font style transfer problem.
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_0", "@cite_5", "@cite_16", "@cite_11" ], "mid": [ "2738588019", "2963073614", "2962968458", "", "2339754110", "2099471712" ], "abstract": [ "We present a novel approach for image completion that results in images that are both locally and globally consistent. With a fully-convolutional neural network, we can complete images of arbitrary resolutions by filling-in missing regions of any shape. To train this image completion network to be consistent, we use global and local context discriminators that are trained to distinguish real images from completed ones. The global discriminator looks at the entire image to assess if it is coherent as a whole, while the local discriminator looks only at a small area centered at the completed region to ensure the local consistency of the generated patches. The image completion network is then trained to fool the both context discriminator networks, which requires it to generate images that are indistinguishable from real ones with regard to overall consistency as well as in details. We show that our approach can be used to complete a wide variety of scenes. Furthermore, in contrast with the patch-based approaches such as PatchMatch, our approach can generate fragments that do not appear elsewhere in the image, which allows us to naturally complete the images of objects with familiar and highly specific structures, such as faces.", "We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. 
Moreover, since the release of the pix2pix software associated with this paper, hundreds of twitter users have posted their own artistic experiments using our system. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.", "In this work, we focus on the challenge of taking partial observations of highly-stylized text and generalizing the observations to generate unobserved glyphs in the ornamented typeface. To generate a set of multi-content images following a consistent style from very few examples, we propose an end-to-end stacked conditional GAN model considering content along channels and style along network layers. Our proposed network transfers the style of given glyphs to the contents of unseen ones, capturing highly stylized fonts found in the real-world such as those on movie posters or infographics. We seek to transfer both the typographic stylization (ex. serifs and ears) as well as the textual stylization (ex. color gradients and effects.) We base our experiments on our collected data set including 10,000 fonts with different styles and demonstrate effective generalization from a very small number of observed glyphs.", "", "This paper proposes Markovian Generative Adversarial Networks (MGANs), a method for training generative networks for efficient texture synthesis. While deep neural network approaches have recently demonstrated remarkable results in terms of synthesis quality, they still come at considerable computational costs (minutes of run-time for low-res images). Our paper addresses this efficiency issue. Instead of a numerical deconvolution in previous work, we precompute a feed-forward, strided convolutional network that captures the feature statistics of Markovian patches and is able to directly generate outputs of arbitrary dimensions.
Such network can directly decode brown noise to realistic texture, or photos to artistic paintings. With adversarial training, we obtain quality comparable to recent neural texture synthesis methods. As no optimization is required at generation time, our run-time performance (0.25 M pixel images at 25 Hz) surpasses previous neural texture synthesizers by a significant margin (at least 500 times faster). We apply this idea to texture synthesis, style transfer, and video stylization.", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples." ] }
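The patch-discriminator idea discussed in this record, scoring local tiles independently and aggregating, can be sketched without any learning machinery. In the sketch below, `score_fn` stands in for a learned per-patch critic and all names are our own assumptions, not from the cited papers.

```python
def patch_scores(image, patch, score_fn):
    """Score every non-overlapping patch x patch tile of a 2-D image
    (a list of rows) independently, as a PatchGAN-style critic does,
    instead of emitting one global realism score."""
    h, w = len(image), len(image[0])
    out = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            tile = [row[j:j + patch] for row in image[i:i + patch]]
            out.append(score_fn(tile))
    return out

def discriminator_output(image, patch, score_fn):
    """Aggregate the local scores into a single value by averaging,
    mirroring how patch-wise discriminator responses are pooled."""
    scores = patch_scores(image, patch, score_fn)
    return sum(scores) / len(scores)
```

A real PatchGAN realizes the same receptive-field structure with strided convolutions rather than explicit tiling, but the independence assumption between distant patches is the same.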
1904.11376
2941193495
Credit scoring models based on accepted applications may be biased and their consequences can have a statistical and economic impact. Reject inference is the process of attempting to infer the creditworthiness status of the rejected applications. In this research, we use deep generative models to develop two new semi-supervised Bayesian models for reject inference in credit scoring, in which we model the data generating process to be dependent on a Gaussian mixture. The goal is to improve the classification accuracy in credit scoring models by adding reject applications. Our proposed models infer the unknown creditworthiness of the rejected applications by exact enumeration of the two possible outcomes of the loan (default or non-default). The efficient stochastic gradient optimization technique used in deep generative models makes our models suitable for large data sets. Finally, the experiments in this research show that our proposed models perform better than classical and alternative machine learning models for reject inference in credit scoring.
Banks decide whether to grant credit to new applications as well as how to deal with existing customers, e.g. deciding whether credit limits should be increased and determining which marketing campaign is most appropriate. The tools that help banks with the first problem are called credit scoring models, while behavioral scoring models are used to handle existing customers @cite_17 . Both types of models estimate the probability that a borrower will be unable to meet its debt obligations, which is referred to as the default probability. This research focuses on reject inference to improve the classification accuracy of credit scoring models by utilizing the rejected applications. In Table ), we present an updated research overview on reject inference in credit scoring, extending the one presented in @cite_12 .
{ "cite_N": [ "@cite_12", "@cite_17" ], "mid": [ "2296034778", "1980770954" ], "abstract": [ "Semi-supervised Support Vector Machines for reject inference are proposed. The method uses information of both the accepted and rejected applicants. The method deals with labelled and unlabelled classes of the outcome. The model is tested on real consumer loans with a low acceptance rate. Predictive accuracy is improved by the new model compared to traditional methods.", "Credit scoring and behavioural scoring are the techniques that help organisations decide whether or not to grant credit to consumers who apply to them. This article surveys the techniques used — both statistical and operational research based — to support these decisions. It also discusses the need to incorporate economic conditions into the scoring systems and the way the systems could change from estimating the probability of a consumer defaulting to estimating the profit a consumer will bring to the lending organisation — two of the major developments being attempted in the area. It points out how successful has been this under-researched area of forecasting financial risk." ] }
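The "exact enumeration of the two possible outcomes" mentioned in this record's abstract corresponds, in semi-supervised deep generative models, to taking the expectation of the labelled objective under the classifier q(y|x) and adding the classifier's entropy. The function below is a hedged sketch of that enumeration; the names and the scalar losses are illustrative stand-ins for the per-outcome labelled objectives.

```python
from math import log

def unlabeled_objective(q_default, loss_if_paid, loss_if_default):
    """Objective for a rejected (unlabeled) application: enumerate both
    outcomes (default / non-default), weight each labelled loss by the
    classifier probability q(y|x), and subtract the classifier entropy,
    mirroring the standard semi-supervised variational bound."""
    q = [1.0 - q_default, q_default]          # P(non-default), P(default)
    losses = [loss_if_paid, loss_if_default]  # labelled objectives per outcome
    expected = sum(qi * li for qi, li in zip(q, losses))
    entropy = -sum(qi * log(qi) for qi in q if qi > 0.0)
    return expected - entropy
```

Because the label space has only two outcomes, this enumeration is exact and cheap; no sampling over labels is needed.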
1904.11376
2941193495
Credit scoring models based on accepted applications may be biased and their consequences can have a statistical and economic impact. Reject inference is the process of attempting to infer the creditworthiness status of the rejected applications. In this research, we use deep generative models to develop two new semi-supervised Bayesian models for reject inference in credit scoring, in which we model the data generating process to be dependent on a Gaussian mixture. The goal is to improve the classification accuracy in credit scoring models by adding reject applications. Our proposed models infer the unknown creditworthiness of the rejected applications by exact enumeration of the two possible outcomes of the loan (default or non-default). The efficient stochastic gradient optimization technique used in deep generative models makes our models suitable for large data sets. Finally, the experiments in this research show that our proposed models perform better than classical and alternative machine learning models for reject inference in credit scoring.
A simple approach for reject inference is augmentation @cite_44 . In this approach, the accepted applications are re-weighted to represent the entire population. The common way to find these weights is to use the accept/reject probability. For example, if a given application has a probability of being rejected of 0.80, then all similar applications would be weighted up @math times @cite_2 . None of the empirical research using augmentation shows significant improvements in either correcting the selection bias or improving model performance, see @cite_2 @cite_34 @cite_43 @cite_0 @cite_27 @cite_41 @cite_14 . The augmentation technique assumes that the default probability is independent of whether the loan is accepted or rejected @cite_38 . However, @cite_6 shows empirically that this assumption is wrong.
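The augmentation re-weighting described above amounts to weighting each accepted application by the inverse of its acceptance probability. A minimal sketch, with a function name of our own choosing:

```python
def augmentation_weights(p_reject):
    """Weight for each accepted application with estimated rejection
    probability p: 1 / (1 - p). An applicant similar to cases rejected
    80% of the time then counts roughly five times, so the accepted
    sample mimics the full through-the-door population."""
    return [1.0 / (1.0 - p) for p in p_reject]
```

The weights are then passed to the scorecard estimator as case weights; note the scheme breaks down as p approaches 1, where a single accepted case would dominate the fit.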
{ "cite_N": [ "@cite_38", "@cite_14", "@cite_41", "@cite_6", "@cite_44", "@cite_43", "@cite_0", "@cite_27", "@cite_2", "@cite_34" ], "mid": [ "", "2064065622", "2070533641", "2023805477", "2788414579", "2020821583", "2043324736", "2085237221", "178448420", "1972671391" ], "abstract": [ "", "This article seeks to gain insight into the influence of sample bias in a consumer credit scoring model. In earlier research, sample bias has been suggested to pose a sizeable threat to predictive performance and profitability due to its implications on either population drainage or biased estimates. Contrary to previous – mainly theoretical – research on sample bias, the unique features of the dataset used in this study provide the opportunity to investigate the issue in an empirical setting. Based on the data of a mail-order company offering short term consumer credit to their consumers, we show that (i) given a certain sample size, sample bias has a significant effect on consumer credit-scoring performance and profitability, (ii) its effect is composed of the inclusion of rejected orders in the scoring model, and the inclusion of these orders into the variable-selection process, and (iii) the impact of the effect of sample bias on consumer credit scoring performance and profitability is modest.", "The parameters of application scorecards are usually estimated using a sample that excludes rejected applicants which may prove biased when applied to all applicants. This paper uses a rare sample that includes those who would normally be rejected to examine the extent to which (1) the exclusion of rejected applicants undermines the predictive performance of a scorecard based only on accepted applicants, and (2) reject inference techniques can remedy the influence of this exclusion.", "Technology evaluation has become a critical part of technology investment, and accurate evaluation can lead more funds to the companies that have innovative technology.
However, existing processes have a weakness in that it considers only accepted applicants at the application stage. We analyse the effectiveness of technology evaluation model that encompasses both accepted and rejected applicants and compare its performance with the original accept-only model. Also, we include the analysis of reject inference technique, bivariate probit model, in order to see if the reject inference technique is of use against the accept-only model. The results show that sample selection bias of the accept-only model exists and the reject inference technique improves the accept-only model. However, the reject inference technique does not completely resolve the problem of sample selection bias.", "", "Many researchers see the need for reject inference in credit scoring models to come from a sample selection problem whereby a missing variable results in omitted variable bias. Alternatively, practitioners often see the problem as one of missing data where the relationship in the new model is biased because the behaviour of the omitted cases differs from that of those who make up the sample for a new model. To attempt to correct for this, differential weights are applied to the new cases. The aim of this paper is to see if the use of both a Heckman style sample selection model and the use of sampling weights, together, will improve predictive performance compared with either technique used alone. This paper will use a sample of applicants in which virtually every applicant was accepted. This allows us to compare the actual performance of each model with the performance of models which are based only on accepted cases.", "One of the aims of credit scoring models is to predict the probability of repayment of any applicant and yet such models are usually parameterised using a sample of accepted applicants only. This may lead to biased estimates of the parameters. In this paper we examine two issues. 
First, we compare the classification accuracy of a model based only on accepted applicants, relative to one based on a sample of all applicants. We find only a minimal difference, given the cutoff scores for the old model used by the data supplier. Using a simulated model we examine the predictive performance of models estimated from bands of applicants, ranked by predicted creditworthiness. We find that the lower the risk band of the training sample, the less accurate the predictions for all applicants. We also find that the lower the risk band of the training sample, the greater the overestimate of the true performance of the model, when tested on a sample of applicants within the same risk band — as a financial institution would do. The overestimation may be very large. Second, we examine the predictive accuracy of a bivariate probit model with selection (BVP). This parameterises the accept–reject model allowing for (unknown) omitted variables to be correlated with those of the original good–bad model. The BVP model may improve accuracy if the loan officer has overridden a scoring rule. We find that a small improvement when using the BVP model is sometimes possible.", "We generalize an empirical likelihood approach to deal with missing data to a model of consumer credit scoring. An application to recent consumer credit data shows that our procedure yields parameter estimates which are significantly different (both statistically and economically) from the case where customers who were refused credit are ignored. This has obvious implications for commercial banks as it shows that refused customers should not be ignored when developing scorecards for the retail business. 
We also show that forecasts of defaults derived from the method proposed in this paper improve upon the standard ones when refused customers do not enter the estimation data set.", "The Credit Scoring Toolkit provides an all-encompassing view of the use of statistical models to assess retail credit risk and provide automated decisions. In eight modules, the book provides frameworks for both theory and practice. It first explores the economic justification and history of Credit Scoring, risk linkages and decision science, statistical and mathematical tools, the assessment of business enterprises, and regulatory issues ranging from data privacy to Basel II. It then provides a practical how-to-guide for scorecard development, including data collection, scorecard implementation, and use within the credit risk management cycle. Including numerous real-life examples and an extensive glossary and bibliography, the text assumes little prior knowledge making it an indispensable desktop reference for graduate students in statistics, business, economics and finance, MBA students, credit risk and financial practitioners.", "If a credit scoring model is built using only applicants who have been previously accepted for credit such a non-random sample selection may produce bias in the estimated model parameters and accordingly the model's predictions of repayment performance may not be optimal. Previous empirical research suggests that omission of rejected applicants has a detrimental impact on model estimation and prediction. This paper explores the extent to which, given the previous cutoff score applied to decide on accepted applicants, the number of included variables influences the efficacy of a commonly used reject inference technique, reweighting. The analysis benefits from the availability of a rare sample, where virtually no applicant was denied credit. 
The general indication is that the efficacy of reject inference is little influenced by either model leanness or interaction between model leanness and the rejection rate that determined the sample. However, there remains some hint that very lean models may benefit from reject inference where modelling is conducted on data characterized by a very high rate of applicant rejection." ] }