Dataset fields: aid (string, 9–15 chars), mid (string, 7–10 chars), abstract (string, 78–2.56k chars), related_work (string, 92–1.77k chars), ref_abstract (dict).
1904.08643
2936646008
Style transfer is the problem of rendering a content image in the style of another image (the style image). A natural and common practical task in applications of style transfer is adjusting the strength of stylization. The algorithm of (2016) provides this ability by changing the weighting factors of the content and style losses, but it is computationally inefficient. Real-time style transfer, introduced by (2016), enables fast stylization of any image by passing it through a pre-trained transformer network. Although fast, this architecture cannot continuously adjust style strength. We propose an extension to real-time style transfer that allows direct control of style strength at inference time while still requiring only a single transformer network. We conduct qualitative and quantitative experiments demonstrating that the proposed method achieves smooth stylization-strength control and removes certain stylization artifacts that appear in the original real-time style transfer method. Comparisons with alternative real-time style transfer algorithms capable of adjusting stylization strength show that our method reproduces style with more detail.
Further research aimed to propose a transformer network architecture capable of applying different styles simultaneously. The main idea was to hold most of the architecture fixed and to vary only particular elements depending on the style. @cite_0 used a separate convolution filter bank for each style. @cite_10 used different parameters of instance normalization, but these coefficients still needed to be optimized. @cite_5 introduced a separate style prediction network capable of predicting these coefficients without running a separate optimization process. Each set of instance normalization coefficients encoded a particular style; by weighting these coefficients, a transformer network could yield combinations of the respective styles. However, the problem of stylization strength control was not addressed in these works.
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_10" ], "mid": [ "2604737827", "2620076854", "2545656684" ], "abstract": [ "We propose StyleBank, which is composed of multiple convolution filter banks and each filter bank explicitly represents one style, for neural image style transfer. To transfer an image to a specific style, the corresponding filter bank is operated on top of the intermediate feature embedding produced by a single auto-encoder. The StyleBank and the auto-encoder are jointly learnt, where the learning is conducted in such a way that the auto-encoder does not encode any style information thanks to the flexibility introduced by the explicit filter bank representation. It also enables us to conduct incremental learning to add a new image style by learning a new filter bank while holding the auto-encoder fixed. The explicit style representation along with the flexible network design enables us to fuse styles at not only the image level, but also the region level. Our method is the first style transfer network that links back to traditional texton mapping methods, and hence provides new understanding on neural style transfer. Our method is easy to train, runs in real-time, and produces results that are qualitatively better than or at least comparable to existing methods.", "In this paper, we present a method which combines the flexibility of the neural algorithm of artistic style with the speed of fast style transfer networks to allow real-time stylization using any content-style image pair. We build upon recent work leveraging conditional instance normalization for multi-style transfer networks by learning to predict the conditional instance normalization parameters directly from a style image. The model is successfully trained on a corpus of roughly 80,000 paintings and is able to generalize to paintings previously unobserved. 
We demonstrate that the learned embedding space is smooth and contains a rich structure and organizes semantic information associated with paintings in an entirely unsupervised manner.", "The diversity of painting styles represents a rich visual vocabulary for the construction of an image. The degree to which one may learn and parsimoniously capture this visual vocabulary measures our understanding of the higher level features of paintings, if not images in general. In this work we investigate the construction of a single, scalable deep network that can parsimoniously capture the artistic style of a diversity of paintings. We demonstrate that such a network generalizes across a diversity of artistic styles by reducing a painting to a point in an embedding space. Importantly, this model permits a user to explore new painting styles by arbitrarily combining the styles learned from individual paintings. We hope that this work provides a useful step towards building rich models of paintings and offers a window on to the structure of the learned representation of artistic style." ] }
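The conditional-instance-normalization idea described above (per-style normalization coefficients that can be weighted to combine styles) can be sketched in a few lines. This is a toy illustration, not the cited papers' actual networks: the feature map, the coefficient values, and the function names are all invented for the example.

```python
import math

def instance_norm(feat, gamma, beta, eps=1e-5):
    """Normalize one feature channel to zero mean / unit variance,
    then scale and shift with style-specific coefficients."""
    vals = [v for row in feat for v in row]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    std = math.sqrt(var + eps)
    return [[gamma * (v - mean) / std + beta for v in row] for row in feat]

def blend_styles(style_a, style_b, alpha):
    """Interpolate two styles' (gamma, beta) pairs; alpha=0 gives
    style_a, alpha=1 gives style_b, values in between mix them."""
    (ga, ba), (gb, bb) = style_a, style_b
    return ((1 - alpha) * ga + alpha * gb, (1 - alpha) * ba + alpha * bb)

# toy single-channel feature map and two made-up styles
feat = [[1.0, 2.0], [3.0, 4.0]]
gamma, beta = blend_styles((1.0, 0.0), (3.0, 1.0), 0.5)
out = instance_norm(feat, gamma, beta)
```

Here each style is reduced to a single (gamma, beta) pair; in the cited works there is one pair per feature channel, obtained either by optimization (@cite_10) or predicted by a separate style network (@cite_5).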
1904.08643
2936646008
Style transfer is a problem of rendering a content image in the style of another style image. A natural and common practical task in applications of style transfer is to adjust the strength of stylization. Algorithm of (2016) provides this ability by changing the weighting factors of content and style losses but is computationally inefficient. Real-time style transfer introduced by (2016) enables fast stylization of any image by passing it through a pre-trained transformer network. Although fast, this architecture is not able to continuously adjust style strength. We propose an extension to real-time style transfer that allows direct control of style strength at inference, still requiring only a single transformer network. We conduct qualitative and quantitative experiments that demonstrate that the proposed method is capable of smooth stylization strength control and removes certain stylization artifacts appearing in the original real-time style transfer method. Comparisons with alternative real-time style transfer algorithms, capable of adjusting stylization strength, show that our method reproduces style with more details.
Generative adversarial networks (GANs) @cite_4 have also been successfully applied to style transfer, for instance in @cite_15 @cite_9 . The difference from the framework considered in this paper is that these GANs retrieve style from multiple style images instead of just one.
{ "cite_N": [ "@cite_9", "@cite_15", "@cite_4" ], "mid": [ "2899388690", "2962793481", "" ], "abstract": [ "Facial caricature is an art form of drawing faces in an exaggerated way to convey humor or sarcasm. In this paper, we propose the first Generative Adversarial Network (GAN) for unpaired photo-to-caricature translation, which we call \"CariGANs\". It explicitly models geometric exaggeration and appearance stylization using two components: CariGeoGAN, which only models the geometry-to-geometry transformation from face photos to caricatures, and CariStyGAN, which transfers the style appearance from caricatures to face photos without any geometry deformation. In this way, a difficult cross-domain translation problem is decoupled into two easier tasks. The perceptual study shows that caricatures generated by our CariGANs are closer to the hand-drawn ones, and at the same time better preserve the identity, compared to state-of-the-art methods. Moreover, our CariGANs allow users to control the shape exaggeration degree and change the color texture style by tuning the parameters or giving an example caricature.", "Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). 
Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.", "" ] }
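The cycle-consistency idea in the abstract above (penalize how far F(G(x)) lands from x) reduces to a short computation. Below is a minimal sketch with toy stand-in mappings: the names `G` and `F` and the example vectors are invented, and the real method uses neural networks plus an adversarial term alongside this loss.

```python
def l1(a, b):
    """Mean absolute difference between two equal-length vectors."""
    return sum(abs(u - v) for u, v in zip(a, b)) / len(a)

def cycle_loss(xs, G, F):
    """Cycle-consistency loss: average L1 distance between each
    sample x and its round-trip reconstruction F(G(x))."""
    return sum(l1(F(G(x)), x) for x in xs) / len(xs)

# toy invertible pair: G doubles, F halves, so the cycle is perfect
G = lambda x: [2.0 * v for v in x]
F = lambda x: [v / 2.0 for v in x]
xs = [[1.0, 2.0], [3.0, 4.0]]
loss = cycle_loss(xs, G, F)  # 0.0 for this perfectly invertible pair
```

If F fails to invert G, the loss becomes positive, which is exactly the training pressure that keeps the under-constrained mapping G from collapsing.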
1904.08761
2935776047
Unlike animals, robots can record their experience accurately for a long time. We propose a novel algorithm that runs a particle filter on the time sequence of this experience. It can be applied to teach-and-replay tasks. In such a task, the trainer controls a robot, and the robot records its sensor readings and its actions. We call the resulting sequence of records an episode, a term derived from the episodic memory of animals. Afterwards, the robot runs the particle filter to find a situation in the episode similar to the current one. If the robot chooses the action taken in that similar situation, it can replay the taught behavior. We name this algorithm the particle filter on episode (PFoE). A robot running PFoE shows not only simple replay of a behavior but also recovery from skids and interruptions. In this paper, we evaluate the properties of PFoE with a small mobile robot.
PFoE uses a particle filter, which is frequently used for robot self-localization @cite_22 @cite_26 . Particle filters for self-localization are known as Monte Carlo localization (MCL). MCL concentrates its particles in the areas where the robot may actually be; the particles are candidates for the robot's true pose. Since the amount of computation is proportional to the number of particles, MCL can work in large environments. PFoE also exploits this advantage of particle filters. Moreover, while MCL needs a map of the environment in which the robot works, PFoE does not.
{ "cite_N": [ "@cite_26", "@cite_22" ], "mid": [ "1973876344", "1577509784" ], "abstract": [ "Over the past few years, particle filters have been applied with great success to a variety of state estimation problems. In this paper we present a statistical approach to increasing the efficiency of particle filters by adapting the size of sample sets during the estimation process. The key idea of the KLD-sampling method is to bound the approximation error introduced by the sample-based representation of the particle filter. The name KLD-sampling is due to the fact that we measure the approximation error using the Kullback-Leibler distance. Our adaptation approach chooses a small number of samples if the density is focused on a small part of the state space, and it chooses a large number of samples if the state uncertainty is high. Both the implementation and computation overhead of this approach are small. Extensive experiments using mobile robot localization as a test application show that our approach yields drastic improvements over particle filters with fixed sample set sizes and over a previously...", "To navigate reliably in indoor environments, a mobile robot must know where it is. Thus, reliable position estimation is a key problem in mobile robotics. We believe that probabilistic approaches are among the most promising candidates to providing a comprehensive and real-time solution to the robot localization problem. However, current methods still face considerable hurdles. In particular the problems encountered are closely related to the type of representation used to represent probability densities over the robot's state space. Earlier work on Bayesian filtering with particle-based density representations opened up a new approach for mobile robot localization based on these principles. We introduce the Monte Carlo localization method, where we represent the probability density involved by maintaining a set of samples that are randomly drawn from it. 
By using a sampling-based representation we obtain a localization method that can represent arbitrary distributions. We show experimentally that the resulting method is able to efficiently localize a mobile robot without knowledge of its starting location. It is faster, more accurate and less memory-intensive than earlier grid-based methods." ] }
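The MCL loop described in the paragraph above is a predict-weight-resample cycle: particles are pose candidates, each is weighted by how well it explains a measurement, and a new particle set is drawn in proportion to those weights. Below is a hedged 1-D toy: the corridor, landmark position, noise level, and function name are all invented for illustration; real MCL works on 2-D/3-D poses against a map.

```python
import math
import random

def mcl_step(particles, control, measured_range, landmark, noise=0.1):
    """One predict-weight-resample cycle of Monte Carlo localization
    on a 1-D corridor; each particle is a candidate robot position."""
    # predict: apply the motion command with additive noise
    moved = [p + control + random.gauss(0.0, noise) for p in particles]
    # weight: Gaussian likelihood of the measured range to the landmark
    weights = [math.exp(-(abs(landmark - p) - measured_range) ** 2
                        / (2.0 * noise ** 2)) for p in moved]
    total = sum(weights) or 1e-300
    # resample: draw a new particle set in proportion to the weights
    return random.choices(moved, weights=[w / total for w in weights],
                          k=len(moved))

random.seed(0)
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
for _ in range(10):
    # stationary robot repeatedly measuring range 3.0 to a landmark at 5.0
    particles = mcl_step(particles, control=0.0, measured_range=3.0,
                         landmark=5.0)
# particles concentrate at positions consistent with the measurement
```

The cost of each step is linear in the number of particles, which is the scaling property the related-work paragraph credits MCL (and, by extension, PFoE) with.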
1904.08761
2935776047
Unlike animals, robots can record their experience accurately for a long time. We propose a novel algorithm that runs a particle filter on the time sequence of this experience. It can be applied to teach-and-replay tasks. In such a task, the trainer controls a robot, and the robot records its sensor readings and its actions. We call the resulting sequence of records an episode, a term derived from the episodic memory of animals. Afterwards, the robot runs the particle filter to find a situation in the episode similar to the current one. If the robot chooses the action taken in that similar situation, it can replay the taught behavior. We name this algorithm the particle filter on episode (PFoE). A robot running PFoE shows not only simple replay of a behavior but also recovery from skids and interruptions. In this paper, we evaluate the properties of PFoE with a small mobile robot.
PFoE treats the decision-making problem as a hidden Markov model (HMM). In an HMM @cite_7 @cite_1 , a finite number of states are defined, and the state transitions of the system are hidden from an observer; instead, the observer receives a sequence of observations. Many HMM algorithms have been proposed for pattern recognition problems @cite_1 . HMM techniques are also used for motion recognition and motion generation in robots @cite_16 @cite_24 , where attention has focused on recognizing sequences of motion. To make a robot move in the real world using the recognition result, feedback control, as studied in the teach-and-replay methods above, would be required. In other research fields, combinations of an HMM and a particle filter can be found @cite_23 @cite_12 ; their purpose is also recognition.
{ "cite_N": [ "@cite_7", "@cite_1", "@cite_24", "@cite_23", "@cite_16", "@cite_12" ], "mid": [ "2007321142", "", "1989784841", "1969682858", "2056639157", "2108215203" ], "abstract": [ "", "", "Learning by imitation renders a means for more natural human-robot interaction and is potentially the primary form of teaching. Imitation provides a potential means of automatically programming complex systems without extensive trials. This paper presents a method of imitation learning based on mapping the demonstrator's motion patterns to the observer self-posture. The self-posture is acquired through observing and recognizing the mirror image of its own body posture while performing the action in front of a mirror. First, different kinds of behavioural actions are learned through motor babbling by randomly performing different actions. The actions are learned through a novel probabilistic model called Topological Gaussian Adaptive Resonance Hidden Markov Model. During learning, the observer also extracts visual features through optical flow from self-observation in a mirror image. Second, after learning, a visuo-motor association is developed through novel Topological Gaussian Adaptive Resonance Associative Memory. Finally, after learning, the demonstrator performs a similar action in front of the robot and the robot recalls the corresponding motor command from the memory.", "In this paper we present a new method to provide situation awareness via the automatic recognition of behaviour in video. In contrast to many other approaches, the presented method does not require many training exemplars. We introduce Probabilistic Behaviour Signatures to represent the goals of a person agent as sets of features. We do not assume temporal ordering of observed actions is necessary. Inference is performed using an extension of the Rao-Blackwellised Particle Filter. We validate our approach using simulated image trajectories which represent three high-level behaviours. 
We compare performance to a trained Hidden Markov Model Particle Filter (HMM PF) and show that our approach achieves 92% accuracy at video frame rate. Our method is also significantly more robust than the HMM PF in the presence of noise.", "This paper presents a novel method for learning object manipulation such as rotating an object or placing one object on another. In this method, motions are learned using reference-point-dependent probabilistic models, which can be used for the generation and recognition of motions. The method estimates (i) the reference point, (ii) the intrinsic coordinate system type, which is the type of coordinate system intrinsic to a motion, and (iii) the probabilistic model parameters of the motion that is considered in the intrinsic coordinate system. Motion trajectories are modeled by a hidden Markov model (HMM), and an HMM-based method using static and dynamic features is used for trajectory generation. The method was evaluated in physical experiments in terms of motion generation and recognition. In the experiments, users demonstrated the manipulation of puppets and toys so that the motions could be learned. A recognition accuracy of 90% was obtained for a test set of motions performed by three subjects. Furthe...", "Obtaining accurate hidden Markov model (HMM) state sequences is a research challenge to warrant good system performance in particle filter (PF) compensation for noisy speech recognition. Instead of using specific knowledge at the model and state levels which is hard to estimate, we pool model states into clusters as side information. Since each cluster encompasses more statistics when compared to the original HMM states, there is a higher possibility that the newly formed probability density function at the cluster level can cover the underlying speech variation to generate appropriate PF samples for feature compensation. 
Testing the proposed PF-based compensation scheme on the Aurora 2 connected digit recognition task, we achieve an error reduction of 12.15% from the best multi-condition trained models using this integrated PF-HMM framework to estimate the cluster-based HMM state sequence information." ] }
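As a concrete reminder of the HMM machinery referenced above, here is the classic forward algorithm, which sums the probability of an observation sequence over all hidden state paths. The two-state model and its numbers are made up for the example; they are not taken from any of the cited works.

```python
def hmm_forward(obs, init, trans, emit):
    """Probability of an observation sequence under an HMM:
    init[s] prior, trans[s][t] transitions, emit[s][o] emissions."""
    n = len(init)
    # start with the prior weighted by the first emission
    alpha = [init[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        # push the belief through the transition matrix, then emit
        alpha = [sum(alpha[s] * trans[s][t] for s in range(n)) * emit[t][o]
                 for t in range(n)]
    return sum(alpha)

# hypothetical two-state model with two observation symbols
init = [0.6, 0.4]
trans = [[0.7, 0.3], [0.4, 0.6]]
emit = [[0.9, 0.1], [0.2, 0.8]]
prob = hmm_forward([0, 1], init, trans, emit)  # 0.209 for these numbers
```

The state transitions stay hidden throughout: only the observation symbols enter the computation, which is exactly the setting the paragraph describes.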
1904.08761
2935776047
Unlike animals, robots can record their experience accurately for a long time. We propose a novel algorithm that runs a particle filter on the time sequence of this experience. It can be applied to teach-and-replay tasks. In such a task, the trainer controls a robot, and the robot records its sensor readings and its actions. We call the resulting sequence of records an episode, a term derived from the episodic memory of animals. Afterwards, the robot runs the particle filter to find a situation in the episode similar to the current one. If the robot chooses the action taken in that similar situation, it can replay the taught behavior. We name this algorithm the particle filter on episode (PFoE). A robot running PFoE shows not only simple replay of a behavior but also recovery from skids and interruptions. In this paper, we evaluate the properties of PFoE with a small mobile robot.
In @cite_14 , we proposed another PFoE as an algorithm for reinforcement learning. We may unify that PFoE with the one in this paper in the future; however, at this stage they are different from each other.
{ "cite_N": [ "@cite_14" ], "mid": [ "2586049587" ], "abstract": [ "We propose a novel method, a particle filter on episode, for decision makings of agents in the real world. This method is used for simulating behavioral experiments of rodents as a workable model, and for decision making of actual robots. Recent studies on neuroscience suggest that hippocampus and its surroundings in brains of mammals are related to solve navigation problems, which are also essential in robotics. The hippocampus also handle memories and some parts of a brain utilize them for decision. The particle filter gives a calculation model of decision making based on memories. In this paper, we have verified that this method learns two kinds of tasks that have been frequently examined in behavioral experiments of rodents. Though the tasks have been different in character from each other, the algorithm has been able to make an actual robot take appropriate behavior in the both tasks with an identical parameter set." ] }
1904.08761
2935776047
Unlike animals, robots can record their experience accurately for a long time. We propose a novel algorithm that runs a particle filter on the time sequence of this experience. It can be applied to teach-and-replay tasks. In such a task, the trainer controls a robot, and the robot records its sensor readings and its actions. We call the resulting sequence of records an episode, a term derived from the episodic memory of animals. Afterwards, the robot runs the particle filter to find a situation in the episode similar to the current one. If the robot chooses the action taken in that similar situation, it can replay the taught behavior. We name this algorithm the particle filter on episode (PFoE). A robot running PFoE shows not only simple replay of a behavior but also recovery from skids and interruptions. In this paper, we evaluate the properties of PFoE with a small mobile robot.
Unlike recurrent neural networks (RNNs) @cite_2 , long short-term memory (LSTM) @cite_20 @cite_6 , and other neural network architectures, PFoE does not use back-propagation learning. More importantly, a particle filter turns out to have the ability to teach and replay. Although PFoE can only be applied to simple tasks at this time, the phenomena it produces deserve to be reported as a new finding. Combining the idea of PFoE with state-of-the-art methods will be an interesting topic for future work.
{ "cite_N": [ "@cite_6", "@cite_20", "@cite_2" ], "mid": [ "1531333757", "", "1964502429" ], "abstract": [ "In this paper, we apply bidirectional training to a long short term memory (LSTM) network for the first time. We also present a modified, full gradient version of the LSTM learning algorithm. We discuss the significance of framewise phoneme classification to continuous speech recognition, and the validity of using bidirectional networks for online causal tasks. On the TIMIT speech database, we measure the framewise phoneme classification scores of bidirectional and unidirectional variants of both LSTM and conventional recurrent neural networks (RNNs). We find that bidirectional LSTM outperforms both RNNs and unidirectional LSTM.", "", "Tying suture knots is a time-consuming task performed frequently during Minimally Invasive Surgery (MIS). Automating this task could greatly reduce total surgery time for patients. Current solutions to this problem replay manually programmed trajectories, but a more general and robust approach is to use supervised machine learning to smooth surgeon-given training trajectories and generalize from them. Since knot tying generally requires a controller with internal memory to distinguish between identical inputs that require different actions at different points along a trajectory, it would be impossible to teach the system using traditional feedforward neural nets or support vector machines. Instead we exploit more powerful, recurrent neural networks (RNNs) with adaptive internal states. Results obtained using LSTM RNNs trained by the recent Evolino algorithm show that this approach can significantly increase the efficiency of suture knot tying in MIS over preprogrammed control." ] }
1904.08668
2972019369
The application of diffusion in many computer vision and artificial intelligence projects has been shown to give excellent improvements in performance. One of the main bottlenecks of this technique is the quadratic growth of the kNN graph size due to the large number of new connections between nodes in the graph, resulting in long computation times. Several strategies have been proposed to address this, but none are both effective and efficient. Our novel technique, based on LSH projections, obtains the same performance as the exact kNN graph after diffusion, but in less time (approximately 18 times faster on a dataset of a hundred thousand images). The proposed method was validated and compared with other state-of-the-art approaches on several public image datasets, including Oxford5k, Paris6k, and Oxford105k.
Graphs have been used for various tasks in computer vision: applying diffusion for image retrieval @cite_17 , unsupervised fine-tuning @cite_23 , propagating labels for semi-supervised learning @cite_19 , creating manifold embeddings @cite_1 , and training classifiers by exploiting the cycles found in the graph @cite_13 .
{ "cite_N": [ "@cite_1", "@cite_19", "@cite_23", "@cite_13", "@cite_17" ], "mid": [ "2963694449", "2623167780", "2795246076", "2518754566", "2559091987" ], "abstract": [ "Existing manifold learning methods are not appropriate for image retrieval tasks, because most of them are unable to process query images and they have much greater computational cost especially for large-scale database. Therefore, we propose the iterative manifold embedding (IME) layer, of which the weights are learned offline by an unsupervised strategy, to explore the intrinsic manifolds by incomplete data. On the large-scale database that contains 27 000 images, the IME layer is more than 120 times faster than other manifold learning methods to embed the original representations at query time. We embed the original descriptors of database images that lie on manifold in a high-dimensional space into manifold-based representations iteratively to generate the IME representations in an offline learning stage. According to the original descriptors and the IME representations of database images, we estimate the weights of the IME layer by ridge regression. In the online retrieval stage, we employ the IME layer to map the original representation of a query image with an ignorable time cost (2 ms per image). We experiment on five public standard datasets for image retrieval. The proposed IME layer significantly outperforms the related dimension reduction methods and manifold learning methods. Without postprocessing, our IME layer achieves a boost in the performance of state-of-the-art image retrieval methods with postprocessing on most datasets, and needs less computational cost. The code is available at https://github.com/XJhaoren/IME_layer.", "This paper considers the problem of inferring image labels for which only a few labelled examples are available at training time. 
This setup is often referred to as low-shot learning in the literature, where a standard approach is to re-train the last few layers of a convolutional neural network learned on separate classes. We consider a semi-supervised setting in which we exploit a large collection of images to support label propagation. This is made possible by leveraging the recent advances in large-scale similarity graph construction. We show that despite its conceptual simplicity, scaling up label propagation to hundreds of millions of images leads to state-of-the-art accuracy in the low-shot learning regime.", "In this work we present a novel unsupervised framework for hard training example mining. The only input to the method is a collection of images relevant to the target application and a meaningful initial representation, provided e.g. by pre-trained CNN. Positive examples are distant points on a single manifold, while negative examples are nearby points on different manifolds. Both types of examples are revealed by disagreements between Euclidean and manifold similarities. The discovered examples can be used in training with any discriminative loss. The method is applied to unsupervised fine-tuning of pre-trained networks for fine-grained classification and particular object retrieval. Our models are on par or are outperforming prior models that are fully or partially supervised.", "Learning rich visual representations often require training on datasets of millions of manually annotated examples. This substantially limits the scalability of learning effective representations as labeled data is expensive or scarce. In this paper, we address the problem of unsupervised visual representation learning from a large, unlabeled collection of images. 
By representing each image as a node and each nearest-neighbor matching pair as an edge, our key idea is to leverage graph-based analysis to discover positive and negative image pairs (i.e., pairs belonging to the same and different visual categories). Specifically, we propose to use a cycle consistency criterion for mining positive pairs and geodesic distance in the graph for hard negative mining. We show that the mined positive and negative image pairs can provide accurate supervisory signals for learning effective representations using Convolutional Neural Networks (CNNs). We demonstrate the effectiveness of the proposed unsupervised constraint mining method in two settings: (1) unsupervised feature learning and (2) semi-supervised learning. For unsupervised feature learning, we obtain competitive performance with several state-of-the-art approaches on the PASCAL VOC 2007 dataset. For semi-supervised learning, we show boosted performance by incorporating the mined constraints on three image classification datasets.", "Query expansion is a popular method to improve the quality of image retrieval with both conventional and CNN representations. It has been so far limited to global image similarity. This work focuses on diffusion, a mechanism that captures the image manifold in the feature space. An efficient off-line stage allows optional reduction in the number of stored regions. In the on-line stage, the proposed handling of unseen queries in the indexing stage removes additional computation to adjust the precomputed data. We perform diffusion through a sparse linear system solver, yielding practical query times well below one second. Experimentally, we observe a significant boost in performance of image retrieval with compact CNN descriptors on standard benchmarks, especially when the query object covers only a small part of the image. Small objects have been a common failure case of CNN-based retrieval." ] }
1904.08668
2972019369
The application of diffusion in many computer vision and artificial intelligence projects has been shown to give excellent improvements in performance. One of the main bottlenecks of this technique is the quadratic growth of the kNN graph size due to the large number of new connections between nodes in the graph, resulting in long computation times. Several strategies have been proposed to address this, but none are both effective and efficient. Our novel technique, based on LSH projections, obtains the same performance as the exact kNN graph after diffusion, but in less time (approximately 18 times faster on a dataset of a hundred thousand images). The proposed method was validated and compared with other state-of-the-art approaches on several public image datasets, including Oxford5k, Paris6k, and Oxford105k.
To speed up the process while retaining good retrieval accuracy, approximate kNN graphs have been used. Methods for constructing an approximate kNN graph follow one of two strategies: divide and conquer, or local search (e.g., NN-descent @cite_10 ). Divide-and-conquer methods generally consist of two stages: subdividing the dataset into parts (divide), and creating a kNN graph for each part, followed by merging all subgraphs to form the final kNN graph (conquer).
{ "cite_N": [ "@cite_10" ], "mid": [ "2110026675" ], "abstract": [ "K-Nearest Neighbor Graph (K-NNG) construction is an important operation with many web related applications, including collaborative filtering, similarity search, and many others in data mining and machine learning. Existing methods for K-NNG construction either do not scale, or are specific to certain similarity measures. We present NN-Descent, a simple yet efficient algorithm for approximate K-NNG construction with arbitrary similarity measures. Our method is based on local search, has minimal space overhead and does not rely on any shared global index. Hence, it is especially suitable for large-scale applications where data structures need to be distributed over the network. We have shown with a variety of datasets and similarity measures that the proposed method typically converges to above 90% recall with each point comparing only to several percent of the whole dataset on average." ] }
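The LSH-based speed-up in the paper's abstract, and the divide step of the divide-and-conquer strategies above, both rest on the same trick: cheaply group vectors so that neighbor search runs only inside small groups. Below is a minimal random-hyperplane LSH sketch; the bit width, function name, and example vectors are arbitrary choices for illustration, not the authors' configuration.

```python
import random

def lsh_buckets(vectors, n_bits=4, seed=0):
    """Hash each vector to an n_bits binary code using random
    hyperplanes; similar vectors tend to share a code, so kNN
    candidates need only be searched within each bucket."""
    rng = random.Random(seed)
    dim = len(vectors[0])
    planes = [[rng.gauss(0.0, 1.0) for _ in range(dim)]
              for _ in range(n_bits)]
    buckets = {}
    for i, v in enumerate(vectors):
        # one bit per hyperplane: which side of it the vector falls on
        code = tuple(int(sum(p * x for p, x in zip(plane, v)) >= 0.0)
                     for plane in planes)
        buckets.setdefault(code, []).append(i)
    return buckets

# two near-duplicate directions and their (near-)opposites
vecs = [[1.0, 0.0], [0.99, 0.01], [-1.0, 0.0], [-0.98, -0.02]]
buckets = lsh_buckets(vecs)
```

Exactly opposite vectors receive complementary codes and so never collide, while near-duplicates usually share a bucket; an approximate kNN graph can then be assembled from within-bucket comparisons only, avoiding the quadratic all-pairs search.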
1904.08621
2937839548
TAMER has proven to be a powerful interactive reinforcement learning method for allowing ordinary people to teach and personalize autonomous agents' behavior by providing evaluative feedback. However, a TAMER agent planning with UCT---a Monte Carlo Tree Search strategy---can only update states along its path and might induce a high learning cost, especially for a physical robot. In this paper, we propose to drive the agent's exploration along the optimal path and reduce the learning cost by initializing the agent's reward function via inverse reinforcement learning from demonstration. We test our proposed method in the RL benchmark domain---Grid World---with different discounts on human reward. Our results show that learning from demonstration can allow a TAMER agent to learn a roughly optimal policy up to the deepest search and encourage the agent to explore along the optimal path. In addition, we find that learning from demonstration can improve learning efficiency by reducing the total feedback and the number of incorrect actions, and by increasing the ratio of correct actions, to obtain an optimal policy, allowing a TAMER agent to converge faster.
In learning from demonstration, the agent learns from sequences of state-action pairs provided by a human trainer who demonstrates the desired behavior @cite_15 . For example, the approach of @cite_18 is a form of learning from demonstration that learns how to perform a task using inverse reinforcement learning @cite_9 from observations of the behavior demonstrated by an expert teacher. @cite_4 proposed a method wherein the agent learns from both demonstrations and the trainer's critiques of the agent's task performance, which is closely related to our work in this paper. However, our work differs in allowing the human trainer to provide human rewards --- evaluations of the quality of the agent's actions --- to fine-tune the agent's behavior, while in their work only critiques of the whole task's performance were provided.
{ "cite_N": [ "@cite_9", "@cite_15", "@cite_18", "@cite_4" ], "mid": [ "2061562262", "1986014385", "1999874108", "2154633587" ], "abstract": [ "Objective—To evaluate the pharmacokinetics of a novel commercial formulation of ivermectin after administration to goats. Animals—6 healthy adult goats. Procedure—Ivermectin (200 μg kg) was initially administered IV to each goat, and plasma samples were obtained for 36 days. After a washout period of 3 weeks, each goat received a novel commercial formulation of ivermectin (200 μg kg) by SC injection. Plasma samples were then obtained for 42 days. Drug concentrations were quantified by use of high-performance liquid chromatography with fluorescence detection. Results—Pharmacokinetics of ivermectin after IV administration were best described by a 2-compartment open model; values for main compartmental variables included volume of distribution at a steady state (9.94 L kg), clearance (1.54 L kg d), and area under the plasma concentration-time curve (AUC; 143 [ng•d] mL). Values for the noncompartmental variables included mean residence time (7.37 days), AUC (153 [ng•d] mL), and clearance (1.43 L kg d). After ...", "We present a comprehensive survey of robot Learning from Demonstration (LfD), a technique that develops policies from example state to action mappings. We introduce the LfD design choices in terms of demonstrator, problem space, policy derivation and performance, and contribute the foundations for a structure in which to categorize LfD research. Specifically, we analyze and categorize the multiple ways in which examples are gathered, ranging from teleoperation to imitation, as well as the various techniques for policy derivation, including matching functions, dynamics models and plans. 
To conclude we discuss LfD limitations and related promising areas for future research.", "We consider learning in a Markov decision process where we are not explicitly given a reward function, but where instead we can observe an expert demonstrating the task that we want to learn to perform. This setting is useful in applications (such as the task of driving) where it may be difficult to write down an explicit reward function specifying exactly how different desiderata should be traded off. We think of the expert as trying to maximize a reward function that is expressible as a linear combination of known features, and give an algorithm for learning the task demonstrated by the expert. Our algorithm is based on using \"inverse reinforcement learning\" to try to recover the unknown reward function. We show that our algorithm terminates in a small number of iterations, and that even though we may never recover the expert's reward function, the policy output by the algorithm will attain performance close to that of the expert, where here performance is measured with respect to the expert's unknown reward function.", "Learning by demonstration can be a powerful and natural tool for developing robot control policies. That is, instead of tedious hand-coding, a robot may learn a control policy by interacting with a teacher. In this work we present an algorithm for learning by demonstration in which the teacher operates in two phases. The teacher first demonstrates the task to the learner. The teacher next critiques learner performance of the task. This critique is used by the learner to update its control policy. In our implementation we utilize a 1-Nearest Neighbor technique which incorporates both training dataset and teacher critique. Since the teacher critiques performance only, they do not need to guess at an effective critique for the underlying algorithm. 
We argue that this method is particularly well-suited to human teachers, who are generally better at assigning credit to performances than to algorithms. We have applied this algorithm to the simulated task of a robot intercepting a ball. Our results demonstrate improved performance with teacher critiquing, where performance is measured by both execution success and efficiency." ] }
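The inverse-reinforcement-learning step referenced above (recovering a reward expressible as a linear combination of known features) can be illustrated with a single projection-style update: the reward weight vector points from the agent's discounted feature expectations toward the expert's, so states the expert visits score higher. This is a hedged sketch with hypothetical names (`feature_expectations`, `irl_reward_weights`), not the cited algorithm; a full method would iterate this step against policies that are optimal for the current weights.

```python
import numpy as np

def feature_expectations(trajectories, phi, gamma=0.95):
    """Discounted feature expectations mu = sum_t gamma^t * phi(s_t),
    averaged over the given trajectories (lists of states)."""
    mus = []
    for traj in trajectories:
        mu = np.zeros_like(phi(traj[0]), dtype=float)
        for t, s in enumerate(traj):
            mu += (gamma ** t) * phi(s)
        mus.append(mu)
    return np.mean(mus, axis=0)

def irl_reward_weights(expert_trajs, agent_trajs, phi, gamma=0.95):
    """One projection step: weights point from the agent's feature
    expectations toward the expert's, normalized to unit length."""
    w = (feature_expectations(expert_trajs, phi, gamma)
         - feature_expectations(agent_trajs, phi, gamma))
    n = np.linalg.norm(w)
    return w / n if n > 0 else w
```

With one-hot state features, the resulting reward is positive on states the expert frequents and negative on states only the agent visits.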
1904.08621
2937839548
TAMER has proven to be a powerful interactive reinforcement learning method for allowing ordinary people to teach and personalize autonomous agents' behavior by providing evaluative feedback. However, a TAMER agent planning with UCT---a Monte Carlo Tree Search strategy---can only update states along its path and might induce a high learning cost, especially for a physical robot. In this paper, we propose to drive the agent's exploration along the optimal path and reduce the learning cost by initializing the agent's reward function via inverse reinforcement learning from demonstration. We test our proposed method in the RL benchmark domain---Grid World---with different discounts on human reward. Our results show that learning from demonstration can allow a TAMER agent to learn a roughly optimal policy up to the deepest search and encourage the agent to explore along the optimal path. In addition, we find that learning from demonstration can improve learning efficiency by reducing the total feedback and the number of incorrect actions, and by increasing the ratio of correct actions, to obtain an optimal policy, allowing a TAMER agent to converge faster.
The work of @cite_8 is the most closely related to our work in this paper. Specifically, they used a specified shaping reward function to improve the learning efficiency of learning from demonstration. However, our work differs by allowing the shaping reward to be provided by a human trainer rather than being a pre-defined potential function specified by the agent designer. In addition, @cite_0 proposed a method for speeding up reinforcement learning from environmental rewards by reward shaping via a potential function learned from demonstrations, whereas in our work we used demonstrations to seed the agent's learning from human-generated rewards.
{ "cite_N": [ "@cite_0", "@cite_8" ], "mid": [ "2397581010", "158183001" ], "abstract": [ "Reinforcement learning describes how a learning agent can achieve optimal behaviour based on interactions with its environment and reward feedback. A limiting factor in reinforcement learning as employed in artificial intelligence is the need for an often prohibitively large number of environment samples before the agent reaches a desirable level of performance. Learning from demonstration is an approach that provides the agent with demonstrations by a supposed expert, from which it should derive suitable behaviour. Yet, one of the challenges of learning from demonstration is that no guarantees can be provided for the quality of the demonstrations, and thus the learned behavior. In this paper, we investigate the intersection of these two approaches, leveraging the theoretical guarantees provided by reinforcement learning, and using expert demonstrations to speed up this learning by biasing exploration through a process called reward shaping. This approach allows us to leverage human input without making an erroneous assumption regarding demonstration optimality. We show experimentally that this approach requires significantly fewer demonstrations, is more robust against suboptimality of demonstrations, and achieves much faster learning than the recently developed HAT algorithm.", "Imitation Learning (IL) is a popular approach for teaching behavior policies to agents by demonstrating the desired target policy. While the approach has lead to many successes, IL often requires a large set of demonstrations to achieve robust learning, which can be expensive for the teacher. In this paper, we consider a novel approach to improve the learning efficiency of IL by providing a shaping reward function in addition to the usual demonstrations. 
Shaping rewards are numeric functions of states (and possibly actions) that are generally easily specified, and capture general principles of desired behavior, without necessarily completely specifying the behavior. Shaping rewards have been used extensively in reinforcement learning, but have been seldom considered for IL, though they are often easy to specify. Our main contribution is to propose an IL approach that learns from both shaping rewards and demonstrations. We demonstrate the effectiveness of the approach across several IL problems, even when the shaping reward is not fully consistent with the demonstrations." ] }
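The shaping idea discussed above is usually made concrete with potential-based shaping, F(s, s') = gamma * Phi(s') - Phi(s), which is known to preserve the optimal policy. The sketch below derives a simple potential from a single demonstration: states appearing later in the demo receive higher potential, so transitions that follow the demonstrated path earn positive shaping reward. The function names are hypothetical and this only illustrates the principle, not the cited methods.

```python
def shaping_from_demo(demo_states, gamma=0.9):
    """Build a potential-based shaping reward from one demonstration.

    Phi(s) = position of s in the demo (later states score higher);
    F(s, s') = gamma * Phi(s') - Phi(s), the potential-based form
    that leaves the optimal policy unchanged. States not seen in the
    demonstration get potential 0.
    """
    phi = {s: float(t) for t, s in enumerate(demo_states)}

    def F(s, s_next):
        return gamma * phi.get(s_next, 0.0) - phi.get(s, 0.0)

    return F
```

For a demo a -> b -> c, moving forward along the path (a to b) is rewarded while moving backward (b to a) is penalized, biasing exploration toward the demonstrated trajectory.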
1904.08610
2939026564
Segmentation is a key step in analyzing and processing medical images. Due to the low fault tolerance in medical imaging, manual segmentation remains the de facto standard in this domain. Besides, efforts to automate the segmentation process often rely on large amounts of manually labeled data. While existing software supporting manual segmentation is rich in features and delivers accurate results, the necessary time to set it up and get comfortable using it can pose a hurdle for the collection of large datasets. This work introduces a client server based online environment, referred to as Studierfenster (this http URL), that can be used to perform manual segmentations directly in a web browser. The aim of providing this functionality in the form of a web application is to ease the collection of ground truth segmentation datasets. Providing a tool that is quickly accessible and usable on a broad range of devices, offers the potential to accelerate this process. The manual segmentation workflow of Studierfenster consists of dragging and dropping the input file into the browser window and slice-by-slice outlining the object under consideration. The final segmentation can then be exported as a file storing its contours and as a binary segmentation mask. In order to evaluate the usability of Studierfenster, a user study was performed. The user study resulted in a mean of 6.3 out of 7.0 possible points given by users, when asked about their overall impression of the tool. The evaluation also provides insights into the results achievable with the tool in practice, by presenting two ground truth segmentations performed by physicians.
There is a wide range of offline software tools available that offer sophisticated medical image analysis and processing capabilities, including tools for manual segmentation. Examples include 3D Slicer @cite_4 and MeVisLab @cite_10 , @cite_5 . The drawback of these tools is that they require a local installation and are thus not readily available on every device.
{ "cite_N": [ "@cite_10", "@cite_5", "@cite_4" ], "mid": [ "1562754957", "2602830123", "2166036944" ], "abstract": [ "We present the integration of the OpenIGTLink network protocol for image-guided therapy (IGT) with the medical prototyping platform MeVisLab. OpenIGTLink is a new, open, simple and extensible network communication protocol for IGT. The protocol provides a standardized mechanism to connect hardware and software by the transfer of coordinate transforms, images, and status messages. MeVisLab is a framework for the development of image processing algorithms and visualization and interaction methods, with a focus on medical imaging. The integration of OpenIGTLink into MeVisLab has been realized by developing a software module using the C++ programming language. As a result, researchers using MeVisLab can interface their software to hardware devices that already support the OpenIGTLink protocol, such as the NDI Aurora magnetic tracking system. In addition, the OpenIGTLink module can also be used to communicate directly with Slicer, a free, open source software package for visualization and image analysis. The integration has been tested with tracker clients available online and a real tracking system. Background—OpenIGTLink is a new, open, simple and extensible network communication protocol for image-guided therapy (IGT). The protocol provides a standardized mechanism to connect hardware and software by the transfer of coordinate transforms, images, and status messages. MeVisLab is a framework for the development of image processing algorithms and visualization and interaction methods, with a focus on medical imaging.", "Virtual Reality, an immersive technology that replicates an environment via computer-simulated reality, gets a lot of attention in the entertainment industry. However, VR has also great potential in other areas, like the medical domain, Examples are intervention planning, training and simulation. 
This is especially of use in medical operations, where an aesthetic outcome is important, like for facial surgeries. Alas, importing medical data into Virtual Reality devices is not necessarily trivial, in particular, when a direct connection to a proprietary application is desired. Moreover, most researcher do not build their medical applications from scratch, but rather leverage platforms like MeVisLab, MITK, OsiriX or 3D Slicer. These platforms have in common that they use libraries like ITK and VTK, and provide a convenient graphical interface. However, ITK and VTK do not support Virtual Reality directly. In this study, the usage of a Virtual Reality device for medical data under the MeVisLab platform is presented. The OpenVR library is integrated into the MeVisLab platform, allowing a direct and uncomplicated usage of the head mounted display HTC Vive inside the MeVisLab platform. Medical data coming from other MeVisLab modules can directly be connected per drag-and-drop to the Virtual Reality module, rendering the data inside the HTC Vive for immersive virtual reality inspection.", "Volumetric change in glioblastoma multiforme (GBM) over time is a critical factor in treatment decisions. Typically, the tumor volume is computed on a slice-by-slice basis using MRI scans obtained at regular intervals. (3D)Slicer – a free platform for biomedical research – provides an alternative to this manual slice-by-slice segmentation process, which is significantly faster and requires less user interaction. In this study, 4 physicians segmented GBMs in 10 patients, once using the competitive region-growing based GrowCut segmentation module of Slicer, and once purely by drawing boundaries completely manually on a slice-by-slice basis. Furthermore, we provide a variability analysis for three physicians for 12 GBMs. The time required for GrowCut segmentation was on an average 61 of the time required for a pure manual segmentation. 
A comparison of Slicer-based segmentation with manual slice-by-slice segmentation resulted in a Dice Similarity Coefficient of 88.43 ± 5.23 and a Hausdorff Distance of 2.32 ± 5.23 mm." ] }
1904.08739
2937549930
Existing state-of-the-art salient object detection networks rely on aggregating multi-level features of pre-trained convolutional neural networks (CNNs). Compared to high-level features, low-level features contribute less to performance but cost more computation because of their larger spatial resolutions. In this paper, we propose a novel Cascaded Partial Decoder (CPD) framework for fast and accurate salient object detection. On the one hand, the framework constructs a partial decoder which discards larger-resolution features of shallower layers for acceleration. On the other hand, we observe that integrating features of deeper layers yields a relatively precise saliency map. Therefore, we directly utilize the generated saliency map to refine the features of the backbone network. This strategy efficiently suppresses distractors in the features and significantly improves their representation ability. Experiments conducted on five benchmark datasets show that the proposed model not only achieves state-of-the-art performance but also runs much faster than existing models. Besides, the proposed framework is further applied to improve existing multi-level feature aggregation models, significantly improving their efficiency and accuracy.
Over the past two decades, researchers have developed a large number of saliency detection algorithms. Traditional models extract hand-crafted features and are based on various saliency assumptions @cite_19 @cite_44 @cite_42 @cite_29 . More details about traditional methods can be found in @cite_14 @cite_23 . Here we mainly discuss deep learning based saliency detection models.
{ "cite_N": [ "@cite_14", "@cite_29", "@cite_42", "@cite_44", "@cite_19", "@cite_23" ], "mid": [ "", "2047670868", "2128272608", "2037954058", "2100470808", "2160613239" ], "abstract": [ "", "Recent progresses in salient object detection have exploited the boundary prior, or background information, to assist other saliency cues such as contrast, achieving state-of-the-art results. However, their usage of boundary prior is very simple, fragile, and the integration with other cues is mostly heuristic. In this work, we present new methods to address these issues. First, we propose a robust background measure, called boundary connectivity. It characterizes the spatial layout of image regions with respect to image boundaries and is much more robust. It has an intuitive geometrical interpretation and presents unique benefits that are absent in previous saliency measures. Second, we propose a principled optimization framework to integrate multiple low level cues, including our background measure, to obtain clean and uniform saliency maps. Our formulation is intuitive, efficient and achieves state-of-the-art results on several benchmark datasets.", "A visual attention system, inspired by the behavior and the neuronal architecture of the early primate visual system, is presented. Multiscale image features are combined into a single topographical saliency map. A dynamical neural network then selects attended locations in order of decreasing saliency. The system breaks down the complex problem of scene understanding by rapidly selecting, in a computationally efficient manner, conspicuous locations to be analyzed in detail.", "Automatic estimation of salient object regions across images, without any prior assumption or knowledge of the contents of the corresponding scenes, enhances many computer vision and computer graphics applications. 
We introduce a regional contrast based salient object detection algorithm, which simultaneously evaluates global contrast differences and spatial weighted coherence scores. The proposed algorithm is simple, efficient, naturally multi-scale, and produces full-resolution, high-quality saliency maps. These saliency maps are further used to initialize a novel iterative version of GrabCut, namely SaliencyCut, for high quality unsupervised salient object segmentation. We extensively evaluated our algorithm using traditional salient object detection datasets, as well as a more challenging Internet image dataset. Our experimental results demonstrate that our algorithm consistently outperforms 15 existing salient object detection and segmentation methods, yielding higher precision and better recall rates. We also show that our algorithm can be used to efficiently extract salient object masks from Internet images, enabling effective sketch-based image retrieval (SBIR) via simple shape comparisons. Despite such noisy internet images, where the saliency regions are ambiguous, our saliency guided image retrieval achieves a superior retrieval rate compared with state-of-the-art SBIR methods, and additionally provides important target object region information.", "Detection of visually salient image regions is useful for applications like object segmentation, adaptive compression, and object recognition. In this paper, we introduce a method for salient region detection that outputs full resolution saliency maps with well-defined boundaries of salient objects. These boundaries are preserved by retaining substantially more frequency content from the original image than other existing techniques. Our method exploits features of color and luminance, is simple to implement, and is computationally efficient. We compare our algorithm to five state-of-the-art salient region detection methods with a frequency domain analysis, ground truth, and a salient object segmentation application. 
Our method outperforms the five algorithms both on the ground-truth evaluation and on the segmentation task by achieving both higher precision and better recall.", "Detecting and segmenting salient objects from natural scenes, often referred to as salient object detection, has attracted great interest in computer vision. While many models have been proposed and several applications have emerged, a deep understanding of achievements and issues remains lacking. We aim to provide a comprehensive review of recent progress in salient object detection and situate this field among other closely related areas such as generic scene segmentation, object proposal generation, and saliency for fixation prediction. Covering 228 publications, we survey i) roots, key concepts, and tasks, ii) core techniques and main modeling trends, and iii) datasets and evaluation metrics for salient object detection. We also discuss open problems such as evaluation metrics and dataset bias in model performance, and suggest future research directions." ] }
1904.08739
2937549930
Existing state-of-the-art salient object detection networks rely on aggregating multi-level features of pre-trained convolutional neural networks (CNNs). Compared to high-level features, low-level features contribute less to performance but cost more computation because of their larger spatial resolutions. In this paper, we propose a novel Cascaded Partial Decoder (CPD) framework for fast and accurate salient object detection. On the one hand, the framework constructs a partial decoder which discards larger-resolution features of shallower layers for acceleration. On the other hand, we observe that integrating features of deeper layers yields a relatively precise saliency map. Therefore, we directly utilize the generated saliency map to refine the features of the backbone network. This strategy efficiently suppresses distractors in the features and significantly improves their representation ability. Experiments conducted on five benchmark datasets show that the proposed model not only achieves state-of-the-art performance but also runs much faster than existing models. Besides, the proposed framework is further applied to improve existing multi-level feature aggregation models, significantly improving their efficiency and accuracy.
Early works utilize CNNs to determine whether image regions are salient or not @cite_37 @cite_22 @cite_26 @cite_32 . Although these models achieve much better performance than traditional methods, predicting saliency scores region by region is time-consuming. Researchers have since developed more effective models based on the successful fully convolutional network @cite_33 . Li @cite_27 set up a unified framework for salient object detection and semantic segmentation to effectively learn the semantic properties of salient objects. Wang @cite_30 leverage cascaded fully convolutional networks to continuously refine previous prediction maps.
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_26", "@cite_22", "@cite_33", "@cite_32", "@cite_27" ], "mid": [ "2519528544", "2338972621", "1947031653", "1894057436", "1903029394", "1942214758", "2147347517" ], "abstract": [ "Deep networks have been proved to encode high level semantic features and delivered superior performance in saliency detection. In this paper, we go one step further by developing a new saliency model using recurrent fully convolutional networks (RFCNs). Compared with existing deep network based methods, the proposed network is able to incorporate saliency prior knowledge for more accurate inference. In addition, the recurrent architecture enables our method to automatically learn to refine the saliency map by correcting its previous errors. To train such a network with numerous parameters, we propose a pre-training strategy using semantic segmentation data, which simultaneously leverages the strong supervision of segmentation tasks for better training and enables the network to capture generic representations of objects for saliency detection. Through extensive experimental evaluations, we demonstrate that the proposed method compares favorably against state-of-the-art approaches, and that the proposed recurrent deep model as well as the pre-training method can significantly improve performance.", "Recent advances in saliency detection have utilized deep learning to obtain high level features to detect salient regions in a scene. These advances have demonstrated superior results over previous works that utilize hand-crafted low level features for saliency detection. In this paper, we demonstrate that hand-crafted features can provide complementary information to enhance performance of saliency detection that utilizes only high level features. Our method utilizes both high level and low level features for saliency detection under a unified deep learning framework. 
The high level features are extracted using the VGG-net, and the low level features are compared with other parts of an image to form a low level distance map. The low level distance map is then encoded using a convolutional neural network(CNN) with multiple 1 1 convolutional and ReLU layers. We concatenate the encoded low level distance map and the high level features, and connect them to a fully connected neural network classifier to evaluate the saliency of a query region. Our experiments show that our method can further improve the performance of stateof-the-art deep learning-based saliency detection methods.", "This paper presents a saliency detection algorithm by integrating both local estimation and global search. In the local estimation stage, we detect local saliency by using a deep neural network (DNN-L) which learns local patch features to determine the saliency value of each pixel. The estimated local saliency maps are further refined by exploring the high level object concepts. In the global search stage, the local saliency map together with global contrast and geometric information are used as global features to describe a set of object candidate regions. Another deep neural network (DNN-G) is trained to predict the saliency score of each object region based on the global features. The final saliency map is generated by a weighted sum of salient object regions. Our method presents two interesting insights. First, local features learned by a supervised scheme can effectively capture local contrast, texture and shape information for saliency detection. Second, the complex relationship between different global saliency cues can be captured by deep networks and exploited principally rather than heuristically. 
Quantitative and qualitative experiments on several benchmark data sets demonstrate that our algorithm performs favorably against the state-of-the-art methods.", "Visual saliency is a fundamental problem in both cognitive and computational sciences, including computer vision. In this paper, we discover that a high-quality visual saliency model can be learned from multiscale features extracted using deep convolutional neural networks (CNNs), which have had many successes in visual recognition tasks. For learning such saliency models, we introduce a neural network architecture, which has fully connected layers on top of CNNs responsible for feature extraction at three different scales. We then propose a refinement method to enhance the spatial coherence of our saliency results. Finally, aggregating multiple saliency maps computed for different levels of image segmentation can further boost the performance, yielding saliency maps better than those generated from a single segmentation. To promote further research and evaluation of visual saliency models, we also construct a new large database of 4447 challenging images and their pixelwise saliency annotations. Experimental results demonstrate that our proposed method is capable of achieving state-of-the-art performance on all public benchmarks, improving the F-Measure by 5.0 and 13.2 respectively on the MSRA-B dataset and our new dataset (HKU-IS), and lowering the mean absolute error by 5.7 and 35.1 respectively on these two datasets.", "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. 
We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.", "Low-level saliency cues or priors do not produce good enough saliency detection results especially when the salient object presents in a low-contrast background with confusing visual appearance. This issue raises a serious problem for conventional approaches. In this paper, we tackle this problem by proposing a multi-context deep learning framework for salient object detection. We employ deep Convolutional Neural Networks to model saliency of objects in images. Global context and local context are both taken into account, and are jointly modeled in a unified multi-context deep learning framework. To provide a better initialization for training the deep neural networks, we investigate different pre-training strategies, and a task-specific pre-training scheme is designed to make the multi-context modeling suited for saliency detection. Furthermore, recently proposed contemporary deep models in the ImageNet Image Classification Challenge are tested, and their effectiveness in saliency detection are investigated. 
Our approach is extensively evaluated on five public datasets, and experimental results show significant and consistent improvements over the state-of-the-art methods.", "A key problem in salient object detection is how to effectively model the semantic properties of salient objects in a data-driven manner. In this paper, we propose a multi-task deep saliency model based on a fully convolutional neural network with global input (whole raw images) and global output (whole saliency maps). In principle, the proposed saliency model takes a data-driven strategy for encoding the underlying saliency prior information, and then sets up a multi-task learning scheme for exploring the intrinsic correlations between saliency detection and semantic image segmentation. Through collaborative feature learning from such two correlated tasks, the shared fully convolutional layers produce effective features for object perception. Moreover, it is capable of capturing the semantic information on salient objects across different levels using the fully convolutional layers, which investigate the feature-sharing properties of salient object detection with a great reduction of feature redundancy. Finally, we present a graph Laplacian regularized nonlinear regression model for saliency refinement. Experimental results demonstrate the effectiveness of our approach in comparison with the state-of-the-art approaches." ] }
1904.08537
2938487313
Material recognition methods use image context and local cues for pixel-wise classification. In many cases only a single image is available to make a material prediction. Image sequences, routinely acquired in applications such as multiview stereo, can provide a sampling of the underlying reflectance functions that reveal pixel-level material attributes. We investigate multi-view material segmentation using two datasets generated for building material segmentation and scene material segmentation from the SpaceNet Challenge satellite image dataset. In this paper, we explore the impact of multi-angle reflectance information by introducing the , which captures both the multi-angle and multispectral information present in our datasets. The residuals are computed by differencing the sparse-sampled reflectance function with a dictionary of pre-defined dense-sampled reflectance functions. Our proposed reflectance residual features improve material segmentation performance when integrated into pixel-wise and semantic segmentation architectures. At test time, predictions from individual segmentations are combined through softmax fusion and refined by building segment voting. We demonstrate robust and accurate pixelwise segmentation results using the proposed material segmentation pipeline.
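The reflectance-residual feature described above can be sketched as follows. This is an illustrative reading only: the array shapes, the shared angle grid, and the simple per-entry differencing are assumptions, not the paper's exact construction.

```python
import numpy as np

def reflectance_residuals(observed, dictionary):
    """Per-entry residuals between a sparse-sampled reflectance curve
    and a dictionary of dense-sampled reference curves, assuming both
    have already been resampled to the same angle grid.

    observed:   (n_angles,) reflectance samples at one pixel
    dictionary: (n_entries, n_angles) reference reflectance curves
    """
    return observed[None, :] - dictionary

obs = np.array([0.2, 0.5, 0.9])
dico = np.array([[0.2, 0.5, 0.9],   # matching material -> zero residual
                 [0.9, 0.5, 0.2]])  # mismatched material -> large residual
R = reflectance_residuals(obs, dico)
```

In a pipeline like the one described, such residual vectors would be stacked with the multispectral channels and fed to the segmentation network; the toy dictionary above just shows that a matching material yields a near-zero residual.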
Multi-view semantic segmentation methods utilizing CNNs require each image to be projected into a consistent space. Examples of image space projection include image warping @cite_51 or point cloud generation @cite_0 @cite_42 . Images used in this work are warped such that they have pixel consistency, i.e., a pixel coordinate corresponds to the same location in all images. @cite_51 perform image warping to achieve pixel correspondence and then aggregate individual image segmentations through Bayesian fusion and max-pooling of the last feature maps. Inspired by this work, we aggregate multiple individual image segmentations by fusing the outputs of the softmax layer. Multi-image aggregation techniques are found to be useful for smoothing noisy individual segmentations.
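The softmax-layer fusion described above can be sketched in a few lines, assuming the per-view class-probability maps are already pixel-aligned (array names and shapes below are illustrative):

```python
import numpy as np

def fuse_softmax(prob_maps):
    """Fuse per-view softmax outputs into one segmentation.

    prob_maps: (n_views, H, W, n_classes) pixel-aligned class
    probabilities from each view. Returns an (H, W) label map.
    """
    # Average the class probabilities over views (sum fusion),
    # then take the most likely class per pixel.
    fused = prob_maps.mean(axis=0)
    return fused.argmax(axis=-1)

# Two toy 1x2-pixel "views" over 3 classes; the views disagree on
# the second pixel and the averaged probabilities resolve it.
views = np.array([
    [[[0.7, 0.2, 0.1], [0.1, 0.5, 0.4]]],
    [[[0.6, 0.3, 0.1], [0.1, 0.2, 0.7]]],
])
labels = fuse_softmax(views)  # shape (1, 2)
```

Averaging probabilities rather than hard labels lets a confident view outvote an uncertain one, which is what smooths the noisy individual segmentations.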
{ "cite_N": [ "@cite_0", "@cite_42", "@cite_51" ], "mid": [ "2560609797", "", "2604455318" ], "abstract": [ "Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption.", "", "Visual scene understanding is an important capability that enables robots to purposefully act in their environment. In this paper, we propose a novel deep neural network approach to predict semantic segmentation from RGB-D sequences. The key innovation is to train our network to predict multi-view consistent semantics in a self-supervised way. At test time, its semantics predictions can be fused more consistently in semantic keyframe maps than predictions of a network trained on individual views. We base our network architecture on a recent single-view deep learning approach to RGB and depth fusion for semantic object-class segmentation and enhance it with multi-scale loss minimization. We obtain the camera trajectory using RGB-D SLAM and warp the predictions of RGB-D images into ground-truth annotated frames in order to enforce multi-view consistency during training. At test time, predictions from multiple views are fused into keyframes. 
We propose and analyze several methods for enforcing multi-view consistency during training and testing. We evaluate the benefit of multi-view consistency training and demonstrate that pooling of deep features and fusion over multiple views outperforms single-view baselines on the NYUDv2 benchmark for semantic segmentation. Our end-to-end trained network achieves state-of-the-art performance on the NYUDv2 dataset in single-view segmentation as well as multi-view semantic fusion." ] }
1904.08486
2939906832
Recognition of defects in concrete infrastructure, especially in bridges, is a costly and time consuming crucial first step in the assessment of the structural integrity. Large variation in appearance of the concrete material, changing illumination and weather conditions, a variety of possible surface markings as well as the possibility for different types of defects to overlap, make it a challenging real-world task. In this work we introduce the novel COncrete DEfect BRidge IMage dataset (CODEBRIM) for multi-target classification of five commonly appearing concrete defects. We investigate and compare two reinforcement learning based meta-learning approaches, MetaQNN and efficient neural architecture search, to find suitable convolutional neural network architectures for this challenging multi-class multi-target task. We show that learned architectures have fewer overall parameters in addition to yielding better multi-target accuracy in comparison to popular neural architectures from the literature evaluated in the context of our application.
* Convolutional neural networks. A broader review of deep learning, its history and neural architecture innovations is given by LeCun et al. @cite_37 . We recall some CNN architectures that serve as baselines and give a frame of reference for architectures produced by meta-learning on our task. AlexNet @cite_11 achieved great success on the ImageNet @cite_50 challenge and was later followed by a set of deeper architectures commonly referred to as VGG @cite_48 . Texture-CNN @cite_18 is an adapted version of the AlexNet design that includes an energy-based adaptive feature pooling, and FV-CNN @cite_27 augments VGG with Fisher Vector pooling for texture classification. Recent works address information flow in deeper networks by adding skip connections with residual networks @cite_34 , wide residual networks (WRN) @cite_39 and densely connected networks (DenseNet) @cite_8 .
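The skip connections mentioned above can be illustrated with a minimal residual block in the sense of He et al.: the block computes y = x + F(x), so with F = 0 it reduces to the identity, which is what eases the optimization of very deep networks. The linear-plus-ReLU transform below is a toy stand-in for the block's convolutional layers, not a real implementation:

```python
import numpy as np

def transform(x, w):
    # Toy stand-in F(x) for the block's convolutional layers:
    # a linear map followed by ReLU.
    return np.maximum(x @ w, 0.0)

def residual_block(x, w):
    # Core idea of residual networks: learn a residual F(x) and add it
    # back onto the identity (skip) path, y = x + F(x).
    return x + transform(x, w)

x = np.ones((2, 3))
w = np.zeros((3, 3))   # F(x) = 0, so the block is exactly the identity
out = residual_block(x, w)
```

With zero weights the block passes its input through unchanged, showing why stacking many such blocks cannot make the network worse than a shallower one.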
{ "cite_N": [ "@cite_37", "@cite_18", "@cite_8", "@cite_48", "@cite_39", "@cite_27", "@cite_50", "@cite_34", "@cite_11" ], "mid": [ "", "2963934397", "2963446712", "2962835968", "2964137095", "1482080565", "2117539524", "2194775991", "2163605009" ], "abstract": [ "", "We adapt the CNN architecture to texture analysis. We introduce an energy layer to discard the overall shape information and focus on texture features. We evaluate the domain transferability and the depth of networks that are from scratch or pretrained. Our network is simpler than a classic CNN and obtains better classification results on texture datasets. We combine our texture CNN with a classic CNN (overall shape analysis) and further improve the results. Deep learning has established many new state of the art solutions in the last decade in areas such as object, scene and speech recognition. In particular Convolutional Neural Network (CNN) is a category of deep learning which obtains excellent results in object detection and recognition tasks. Its architecture is indeed well suited to object analysis by learning and classifying complex (deep) features that represent parts of an object or the object itself. However, some of its features are very similar to texture analysis methods. CNN layers can be thought of as filter banks of complexity increasing with the depth. Filter banks are powerful tools to extract texture features and have been widely used in texture analysis. In this paper we develop a simple network architecture named Texture CNN (T-CNN) which explores this observation. It is built on the idea that the overall shape information extracted by the fully connected layers of a classic CNN is of minor importance in texture analysis. Therefore, we pool an energy measure from the last convolution layer which we connect to a fully connected layer.
We show that our approach can improve the performance of a network while greatly reducing the memory usage and computation.", "Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections—one between each layer and its subsequent layer—our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less memory and computation to achieve high performance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet.", "Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "", "Research in texture recognition often concentrates on the problem of material recognition in uncluttered conditions, an assumption rarely met by applications. In this work we conduct a first study of material and describable texture attributes recognition in clutter, using a new dataset derived from the OpenSurface texture repository. Motivated by the challenge posed by this problem, we propose a new texture descriptor, D-CNN, obtained by Fisher Vector pooling of a Convolutional Neural Network (CNN) filter bank. D-CNN substantially improves the state-of-the-art in texture, material and scene recognition. Our approach achieves 82.3% accuracy on Flickr material dataset and 81.1% accuracy on MIT indoor scenes, providing absolute gains of more than 10% over existing approaches. D-CNN easily transfers across domains without requiring feature adaptation as for methods that build on the fully-connected layers of CNNs. Furthermore, D-CNN can seamlessly incorporate multi-scale information and describe regions of arbitrary shapes and sizes. Our approach is particularly suited at localizing stuff categories and obtains state-of-the-art results on MSRC segmentation dataset, as well as promising results on recognizing materials and surface attributes in clutter on the OpenSurfaces dataset.
The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset.
Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry." ] }
1904.08412
2604507718
Cluster analysis plays a very important role in data analysis. In recent years, cluster ensemble, as a cluster analysis tool, has drawn much attention for its robustness, stability, and accuracy. Many efforts have been made to combine different initial clustering results into a single clustering solution with better performance. However, they neglect the structure information of the raw data in performing the cluster ensemble. In this paper, we propose a Structural Cluster Ensemble (SCE) algorithm for data partitioning formulated as a set-covering problem. In particular, we construct a Laplacian regularized objective function to capture the structure information among clusters. Moreover, considering the importance of the discriminative information underlying the initial clustering results, we add a discriminative constraint into our proposed objective function. Finally, we verify the performance of the SCE algorithm on both synthetic and real data sets. The experimental results show the effectiveness of the proposed SCE algorithm.
Cluster ensemble is an important method for improving clustering performance. Recently, many cluster ensemble algorithms have been proposed and applied to many areas, including image segmentation @cite_30 , data mining @cite_23 , and others @cite_5 @cite_54 . Before reviewing the related work, we first give a brief description of cluster ensemble.
{ "cite_N": [ "@cite_30", "@cite_5", "@cite_54", "@cite_23" ], "mid": [ "2059216481", "2114979880", "", "1992419399" ], "abstract": [ "This paper describes a novel feature selection algorithm for unsupervised clustering, that combines the clustering ensembles method and the population based incremental learning algorithm. The main idea of the proposed unsupervised feature selection algorithm is to search for a subset of all features such that the clustering algorithm trained on this feature subset can achieve the most similar clustering solution to the one obtained by an ensemble learning algorithm. In particular, a clustering solution is firstly achieved by a clustering ensembles method, then the population based incremental learning algorithm is adopted to find the feature subset that best fits the obtained clustering solution. One advantage of the proposed unsupervised feature selection algorithm is that it is dimensionality-unbiased. In addition, the proposed unsupervised feature selection algorithm leverages the consensus across multiple clustering solutions. Experimental results on several real data sets demonstrate that the proposed unsupervised feature selection algorithm is often able to obtain a better feature subset when compared with other existing unsupervised feature selection algorithms.", "A resampling scheme for clustering with similarity to bootstrap aggregation (bagging) is presented. Bagging is used to improve the quality of path-based clustering, a data clustering method that can extract elongated structures from data in a noise robust way. The results of an agglomerative optimization method are influenced by small fluctuations of the input data. To increase the reliability of clustering solutions, a stochastic resampling method is developed to infer consensus clusters. A related reliability measure allows us to estimate the number of clusters, based on the stability of an optimized cluster solution under resampling. 
The quality of path-based clustering with resampling is evaluated on a large image data set of human segmentations.", "", "Clustering is the unsupervised classification of patterns (observations, data items, or feature vectors) into groups (clusters). The clustering problem has been addressed in many contexts and by researchers in many disciplines; this reflects its broad appeal and usefulness as one of the steps in exploratory data analysis. However, clustering is a difficult problem combinatorially, and differences in assumptions and contexts in different communities has made the transfer of useful generic concepts and methodologies slow to occur. This paper presents an overviewof pattern clustering methods from a statistical pattern recognition perspective, with a goal of providing useful advice and references to fundamental concepts accessible to the broad community of clustering practitioners. We present a taxonomy of clustering techniques, and identify cross-cutting themes and recent advances. We also describe some important applications of clustering algorithms such as image segmentation, object recognition, and information retrieval." ] }
1904.08412
2604507718
Cluster analysis plays a very important role in data analysis. In recent years, cluster ensemble, as a cluster analysis tool, has drawn much attention for its robustness, stability, and accuracy. Many efforts have been made to combine different initial clustering results into a single clustering solution with better performance. However, they neglect the structure information of the raw data in performing the cluster ensemble. In this paper, we propose a Structural Cluster Ensemble (SCE) algorithm for data partitioning formulated as a set-covering problem. In particular, we construct a Laplacian regularized objective function to capture the structure information among clusters. Moreover, considering the importance of the discriminative information underlying the initial clustering results, we add a discriminative constraint into our proposed objective function. Finally, we verify the performance of the SCE algorithm on both synthetic and real data sets. The experimental results show the effectiveness of the proposed SCE algorithm.
The first step in cluster ensemble is to obtain the initial partitions. Many methods have been used to produce the base pool of initial clustering results; they fall into four subcategories: (1) using different subsets or features of the objects to produce different partitions @cite_24 ; (2) projecting the data to subspaces, such as projecting to one dimension @cite_37 ; (3) changing the initialization or the number of clusters of a clustering algorithm @cite_22 ; (4) using different clustering algorithms to produce initial clustering results for combination @cite_62 .
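Two of these generation strategies, varying the number of clusters and the random initialization, can be sketched with a minimal k-means (the toy data, k values, and seeds below are illustrative choices, not from the cited papers):

```python
import numpy as np

def kmeans(X, k, seed, iters=20):
    """Minimal Lloyd's k-means, just enough to produce one base partition."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        # Recompute centers, keeping the old center if a cluster empties.
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Toy data: two well-separated blobs. The base pool varies both the
# number of clusters k and the initialization seed; sampling feature
# subsets before each run would cover the first strategy as well.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (10, 2)), rng.normal(5.0, 0.1, (10, 2))])
pool = [kmeans(X, k, seed) for k in (2, 3) for seed in (0, 1)]
```

The resulting pool of label vectors is exactly the input a consensus function then combines.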
{ "cite_N": [ "@cite_24", "@cite_37", "@cite_62", "@cite_22" ], "mid": [ "2036987424", "2139580617", "2107208924", "2042484140" ], "abstract": [ "It is widely recognized that combining multiple classification or regression models typically provides superior results compared to using a single, well-tuned model. However, there are no well known approaches to combining multiple non-hierarchical clusterings. The idea of combining cluster labelings without accessing the original features leads us to a general knowledge reuse framework that we call cluster ensembles. Our contribution in this paper is to formally define the cluster ensemble problem as an optimization problem and to propose three effective and efficient combiners for solving it based on a hypergraph model. Results on synthetic as well as real data sets are given to show that cluster ensembles can (i) improve quality and robustness, and (ii) enable distributed clustering.", "A data set can be clustered in many ways depending on the clustering algorithm employed, parameter settings used and other factors. Can multiple clusterings be combined so that the final partitioning of data provides better clustering? The answer depends on the quality of clusterings to be combined as well as the properties of the fusion method. First, we introduce a unified representation for multiple clusterings and formulate the corresponding categorical clustering problem. As a result, we show that the consensus function is related to the classical intra-class variance criterion using the generalized mutual information definition. Second, we show the efficacy of combining partitions generated by weak clustering algorithms that use data projections and random data splits. A simple explanatory model is offered for the behavior of combinations of such weak clustering components. 
We analyze the combination accuracy as a function of parameters controlling the power and resolution of component partitions as well as the learning dynamics vs. the number of clusterings involved. Finally, some empirical studies compare the effectiveness of several consensus functions.", "Motivation: The microarray technology is increasingly being applied in biological and medical research to address a wide range of problems such as the classification of tumors. An important statistical question associatedwith tumor classification is the identification of new tumor classes using gene expression profiles. Essential aspects of this clustering problem include identifying accurate partitions of the tumor samples into clusters and assessing the confidence of cluster assignments for individual samples. Results: Two new resampling methods, inspired from bagging in prediction, are proposed to improve and assess the accuracy of a given clustering procedure. In these ensemble methods, a partitioning clustering procedure is applied to bootstrap learning sets and the resulting multiple partitions are combined by voting or the creation of a new dissimilarity matrix. As in prediction, the motivation behind bagging is to reduce variability in the partitioning results via averaging. The performances of the new and existing methods were compared using simulated data and gene expression data from two recently published cancer microarray studies. The bagged clustering procedures were in general at least as accurate and often substantially more accurate than a single application of the partitioning clustering procedure. A valuable by-product of bagged clustering are the cluster votes which can be used to assess the confidence of cluster assignments for individual observations.", "We present a novel optimization-based method for the combination of cluster ensembles for the class of problems with intracluster criteria, such as Minimum-Sum-of-Squares-Clustering (MSSC). 
We propose a simple and efficient algorithm, called EXAMCE, for this class of problems that is inspired from a Set-Partitioning formulation of the original clustering problem. We prove some theoretical properties of the solutions produced by our algorithm, and in particular that, under general assumptions, though the algorithm recombines solution fragments so as to find the solution of a Set-Covering relaxation of the original formulation, it is guaranteed to find better solutions than the ones in the ensemble. For the MSSC problem in particular, a prototype implementation of our algorithm found a better solution than the previously best known for 21 of the test instances of the 40-instance TSPLIB benchmark data sets used in prior studies, and found a worse-quality solution than the best known only five times. For other published benchmark data sets where the optimal MSSC solution is known, we match them. The algorithm is particularly effective when the number of clusters is large, in which case it is able to escape the local minima found by K-means type algorithms by recombining the solutions in a Set-Covering context. We also establish the stability of the algorithm with extensive computational experiments, by showing that multiple runs of EXAMCE for the same clustering problem instance produce high-quality solutions whose Adjusted Rand Index is consistently above 0.95. Finally, in experiments utilizing external criteria to compute the validity of clustering, EXAMCE is capable of producing high-quality results that are comparable in quality to those of the best known clustering algorithms." ] }
1904.08412
2604507718
Cluster analysis plays a very important role in data analysis. In recent years, cluster ensemble, as a cluster analysis tool, has drawn much attention for its robustness, stability, and accuracy. Many efforts have been made to combine different initial clustering results into a single clustering solution with better performance. However, they neglect the structure information of the raw data in performing the cluster ensemble. In this paper, we propose a Structural Cluster Ensemble (SCE) algorithm for data partitioning formulated as a set-covering problem. In particular, we construct a Laplacian regularized objective function to capture the structure information among clusters. Moreover, considering the importance of the discriminative information underlying the initial clustering results, we add a discriminative constraint into our proposed objective function. Finally, we verify the performance of the SCE algorithm on both synthetic and real data sets. The experimental results show the effectiveness of the proposed SCE algorithm.
@cite_18 defined a category utility function to measure the similarity between two partitions. For two partitions @math and @math , the category utility function is given below: @math where @math , @math and @math . Then the corresponding consensus function is obtained by solving the following optimization problem: @math
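The exact formula is elided above; as a reference point, the category utility between two partitions in the consensus-clustering literature is commonly written U(P_a, P_b) = sum_j p(C_j) sum_i p(L_i|C_j)^2 - sum_i p(L_i)^2, and the sketch below implements that form under the assumption that it matches the cited definition:

```python
from collections import Counter

def category_utility(pa, pb):
    """Category utility between two label lists (Mirkin's form, an
    assumption here since the paper's exact formula is elided):
    U = sum_j p(C_j) * sum_i p(L_i|C_j)^2 - sum_i p(L_i)^2,
    where C_j ranges over clusters of pa and L_i over clusters of pb."""
    n = len(pa)
    u = 0.0
    for c, nc in Counter(pa).items():
        # Distribution of pb's labels inside cluster c of pa.
        within = Counter(l for a, l in zip(pa, pb) if a == c)
        u += (nc / n) * sum((m / nc) ** 2 for m in within.values())
    return u - sum((m / n) ** 2 for m in Counter(pb).values())

# A partition compared with itself scores 1 - 1/2 = 0.5 here, while an
# unrelated partition of the same points scores 0.
pa = [0, 0, 1, 1]
u_same = category_utility(pa, pa)
u_cross = category_utility(pa, [0, 1, 0, 1])
```

Maximizing this utility against all base partitions, as described above, yields the consensus partition.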
{ "cite_N": [ "@cite_18" ], "mid": [ "2115346774" ], "abstract": [ "Clustering ensembles have emerged as a powerful method for improving both the robustness as well as the stability of unsupervised classification solutions. However, finding a consensus clustering from multiple partitions is a difficult problem that can be approached from graph-based, combinatorial, or statistical perspectives. This study extends previous research on clustering ensembles in several respects. First, we introduce a unified representation for multiple clusterings and formulate the corresponding categorical clustering problem. Second, we propose a probabilistic model of consensus using a finite mixture of multinomial distributions in a space of clusterings. A combined partition is found as a solution to the corresponding maximum-likelihood problem using the EM algorithm. Third, we define a new consensus function that is related to the classical intraclass variance criterion using the generalized mutual information definition. Finally, we demonstrate the efficacy of combining partitions generated by weak clustering algorithms that use data projections and random data splits. A simple explanatory model is offered for the behavior of combinations of such weak clustering components. Combination accuracy is analyzed as a function of several parameters that control the power and resolution of component partitions as well as the number of partitions. We also analyze clustering ensembles with incomplete information and the effect of missing cluster labels on the quality of overall consensus. Experimental results demonstrate the effectiveness of the proposed methods on several real-world data sets." ] }
1904.08412
2604507718
Cluster analysis plays a very important role in data analysis. In recent years, cluster ensemble, as a cluster analysis tool, has drawn much attention for its robustness, stability, and accuracy. Many efforts have been made to combine different initial clustering results into a single clustering solution with better performance. However, they neglect the structure information of the raw data in performing the cluster ensemble. In this paper, we propose a Structural Cluster Ensemble (SCE) algorithm for data partitioning formulated as a set-covering problem. In particular, we construct a Laplacian regularized objective function to capture the structure information among clusters. Moreover, considering the importance of the discriminative information underlying the initial clustering results, we add a discriminative constraint into our proposed objective function. Finally, we verify the performance of the SCE algorithm on both synthetic and real data sets. The experimental results show the effectiveness of the proposed SCE algorithm.
@cite_1 used a K-means-based Consensus Clustering (KCC) utility function to combine different clustering results. Analoui and Sadighian @cite_61 presented a probabilistic model based on a finite mixture of multinomial distributions in a space of clusterings, where the consensus partition is obtained as the solution of a maximum-likelihood estimation problem. Besides these, Alizadeh proposed a cluster ensemble selection mechanism using a new cluster stability measure named the Alizadeh-Parvin-Moshki-Minaei (APMM) criterion @cite_29 . This framework selects as ensemble members those initial partitions that satisfy the APMM criterion, and then derives the final solution using a co-association-matrix-based consensus method.
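The co-association consensus step used as the final stage above can be sketched as follows. This is a minimal illustration (function names are ours, and cutting the matrix with average-linkage clustering is just one common choice), not the EEAC procedure of @cite_29:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def co_association_consensus(partitions, k):
    """Combine base partitions via a co-association matrix.

    partitions: list of 1-D label arrays, one per base clustering.
    k: number of clusters in the consensus partition.
    """
    n = len(partitions[0])
    co = np.zeros((n, n))
    for labels in partitions:
        labels = np.asarray(labels)
        # Entry (i, j) counts how often i and j share a cluster.
        co += (labels[:, None] == labels[None, :]).astype(float)
    co /= len(partitions)            # fraction of co-assignments
    dist = 1.0 - co                  # turn similarity into a distance
    np.fill_diagonal(dist, 0.0)
    # Cut the co-association structure into k consensus clusters.
    Z = linkage(squareform(dist, checks=False), method="average")
    return fcluster(Z, t=k, criterion="maxclust")

# Three base partitions that largely agree on a two-group structure
parts = [[0, 0, 0, 1, 1], [1, 1, 1, 0, 0], [0, 0, 1, 1, 1]]
consensus = co_association_consensus(parts, k=2)
```

Note that the consensus depends only on which points co-occur in clusters, never on the (arbitrary) label values of the base partitions.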
{ "cite_N": [ "@cite_61", "@cite_29", "@cite_1" ], "mid": [ "1582617324", "1594519403", "" ], "abstract": [ "Clustering ensembles have emerged as a powerful method for improving both the robustness and the stability of unsupervised classification solutions. However, finding a consensus clustering from multiple partitions is a difficult problem that can be approached from graph-based, combinatorial or statistical perspectives. We offer a probabilistic model of consensus using a finite mixture of multinomial distributions in a space of clustering. A combined partition is found as a solution to the corresponding maximum likelihood problem using the GA algorithm. The excellent scalability of this algorithm and comprehensible underlying model are particularly important for clustering of large datasets. This study includes two sections, at the first, calculate correlation matrix. this matrix show correlation between samples and we found the best samples that can be in the center of clusters. In the other section a genetic algorithm is employed to produce the most stable partitions from an evolving ensemble (population) of clustering algorithms along with a special objective function. The objective function evaluates multiple partitions according to changes caused by data perturbations and prefers those clustering that are least susceptible to those perturbations.", "Many stability measures, such as Normalized Mutual Information NMI, have been proposed to validate a set of partitionings. It is highly possible that a set of partitionings may contain one or more high quality clusters but is still adjudged a bad cluster by a stability measure, and as a result, is completely neglected. Inspired by evaluation approaches measuring the efficacy of a set of partitionings, researchers have tried to define new measures for evaluating a cluster. Thus far, the measures defined for assessing a cluster are mostly based on the well-known NMI measure. 
The drawback of this commonly used approach is discussed in this paper, after which a new asymmetric criterion, called the Alizadeh--Parvin--Moshki--Minaei criterion APMM, is proposed to assess the association between a cluster and a set of partitionings. We show that the APMM criterion overcomes the deficiency in the conventional NMI measure. We also propose a clustering ensemble framework that incorporates the APMM's capabilities in order to find the best performing clusters. The framework uses Average APMM AAPMM as a fitness measure to select a number of clusters instead of using all of the results. Any cluster that satisfies a predefined threshold of the mentioned measure is selected to participate in an elite ensemble. To combine the chosen clusters, a co-association matrix-based consensus function by which the set of resultant partitionings are obtained is used. Because Evidence Accumulation Clustering EAC can not derive the co-association matrix from a subset of clusters appropriately, a new EAC-based method, called Extended EAC EEAC, is employed to construct the co-association matrix from the chosen subset of clusters. Empirical studies show that our proposed approach outperforms other cluster ensemble approaches.", "" ] }
1904.08412
2604507718
Cluster analysis plays a very important role in data analysis. In these years, cluster ensemble, as a cluster analysis tool, has drawn much attention for its robustness, stability, and accuracy. Many efforts have been done to combine different initial clustering results into a single clustering solution with better performance. However, they neglect the structure information of the raw data in performing the cluster ensemble. In this paper, we propose a Structural Cluster Ensemble (SCE) algorithm for data partitioning formulated as a set-covering problem. In particular, we construct a Laplacian regularized objective function to capture the structure information among clusters. Moreover, considering the importance of the discriminative information underlying in the initial clustering results, we add a discriminative constraint into our proposed objective function. Finally, we verify the performance of the SCE algorithm on both synthetic and real data sets. The experimental results show the effectiveness of our proposed method SCE algorithm.
Christou @cite_22 proposed the EXAct Method-based Cluster Ensemble (EXAMCE) algorithm, which formulates cluster ensembling as a set-partitioning problem and thereby directly optimizes the original clustering objective. It restricts the indicator matrix of the set-partitioning problem to the clusters appearing in the base clustering results, converting the problem into a cluster ensemble one. EXAMCE guarantees that its solution is at least as good as any clustering in the base ensemble. However, it only modifies the indicator matrix and overlooks both the structural information underlying the initial partitions and the discriminative information in the raw data.
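The core idea, recombining clusters drawn from the base partitions so that the selected subset exactly partitions the data while minimizing the original sum-of-squares objective, can be illustrated with a brute-force sketch (ours, for illustration only; EXAMCE itself uses an efficient set-partitioning solver):

```python
import itertools
import numpy as np

def recombine_base_clusters(X, partitions):
    """Pick a subset of base clusters that exactly partitions all points,
    minimizing the total within-cluster sum of squares (brute force)."""
    n = len(X)
    # Candidate clusters: every cluster appearing in any base partition.
    candidates = []
    for labels in partitions:
        for c in set(labels):
            members = frozenset(i for i, l in enumerate(labels) if l == c)
            if members not in candidates:
                candidates.append(members)

    def sse(cluster):
        pts = X[list(cluster)]
        return float(((pts - pts.mean(axis=0)) ** 2).sum())

    best, best_cost = None, float("inf")
    for r in range(1, len(candidates) + 1):
        for subset in itertools.combinations(candidates, r):
            covered = set().union(*subset)
            total = sum(len(c) for c in subset)
            # Exact partition: every point covered exactly once.
            if covered == set(range(n)) and total == n:
                cost = sum(sse(cl) for cl in subset)
                if cost < best_cost:
                    best, best_cost = subset, cost
    return best, best_cost

X = np.array([[0.0], [0.1], [5.0], [5.1]])
parts = [[0, 0, 1, 1], [0, 1, 1, 0]]       # second partition is poor
best, best_cost = recombine_base_clusters(X, parts)
```

Because every base clustering is itself a feasible subset of candidates, the recombined solution can never be worse than the best base clustering, which is the guarantee the paragraph above refers to.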
{ "cite_N": [ "@cite_22" ], "mid": [ "2042484140" ], "abstract": [ "We present a novel optimization-based method for the combination of cluster ensembles for the class of problems with intracluster criteria, such as Minimum-Sum-of-Squares-Clustering (MSSC). We propose a simple and efficient algorithm-called EXAMCE-for this class of problems that is inspired from a Set-Partitioning formulation of the original clustering problem. We prove some theoretical properties of the solutions produced by our algorithm, and in particular that, under general assumptions, though the algorithm recombines solution fragments so as to find the solution of a Set-Covering relaxation of the original formulation, it is guaranteed to find better solutions than the ones in the ensemble. For the MSSC problem in particular, a prototype implementation of our algorithm found a new better solution than the previously best known for 21 of the test instances of the 40-instance TSPLIB benchmark data sets used in, and and found a worse-quality solution than the best known only five times. For other published benchmark data sets where the optimal MSSC solution is known, we match them. The algorithm is particularly effective when the number of clusters is large, in which case it is able to escape the local minima found by K-means type algorithms by recombining the solutions in a Set-Covering context. We also establish the stability of the algorithm with extensive computational experiments, by showing that multiple runs of EXAMCE for the same clustering problem instance produce high-quality solutions whose Adjusted Rand Index is consistently above 0.95. Finally, in experiments utilizing external criteria to compute the validity of clustering, EXAMCE is capable of producing high-quality results that are comparable in quality to those of the best known clustering algorithms." ] }
1904.08585
2971352265
Robustness and safety are crucial properties for the real-world application of autonomous vehicles. One of the most critical components of any autonomous system is localisation. During the last 20 years there has been significant progress in this area with the introduction of very efficient algorithms for mapping, localisation and SLAM. Many of these algorithms present impressive demonstrations for a particular domain, but fail to operate reliably with changes to the operating environment. The aspect of robustness has not received enough attention and localisation systems for self-driving vehicle applications are seldom evaluated for their robustness. In this paper we propose novel metrics to effectively quantify localisation robustness with or without an accurate ground truth. The experimental results present a comprehensive analysis of the application of these metrics against a number of well known localisation strategies.
@cite_13 proposed an algorithm for computing visual map quality that predicts localisation quality at a given map query point. It is not a metric capable of gauging robustness from experimental data, and it does not generalize to other localisation systems and sensors. Map quality in @cite_13 is a predictor of localisation performance: its value varies with the parameterisation and weighting of the evaluation function, and its use is restricted to visual landmark-based localisation.
{ "cite_N": [ "@cite_13" ], "mid": [ "2737874330" ], "abstract": [ "A variety of end-user devices involving keypoint-based mapping systems are about to hit the market e.g. as part of smartphones, cars, robotic platforms, or virtual and augmented reality applications. Thus, the generated map data requires automated evaluation procedures that do not require experienced personnel or ground truth knowledge of the underlying environment. A particularly important question enabling commercial applications is whether a given map is of sufficient quality for localization. This paper proposes a framework for predicting localization performance in the context of visual landmark-based mapping. Specifically, we propose an algorithm for predicting performance of vision-based localization systems from different poses within the map. To achieve this, a metric is defined that assigns a score to a given query pose based on the underlying map structure. The algorithm is evaluated on two challenging datasets involving indoor data generated using a handheld device and outdoor data from an autonomous fixed-wing unmanned aerial vehicle (UAV). Using these, we are able to show that the score provided by our method is highly correlated to the true localization performance. Furthermore, we demonstrate how the predicted map quality can be used within a belief based path planning framework in order to provide reliable trajectories through high-quality areas of the map." ] }
1904.08585
2971352265
Robustness and safety are crucial properties for the real-world application of autonomous vehicles. One of the most critical components of any autonomous system is localisation. During the last 20 years there has been significant progress in this area with the introduction of very efficient algorithms for mapping, localisation and SLAM. Many of these algorithms present impressive demonstrations for a particular domain, but fail to operate reliably with changes to the operating environment. The aspect of robustness has not received enough attention and localisation systems for self-driving vehicle applications are seldom evaluated for their robustness. In this paper we propose novel metrics to effectively quantify localisation robustness with or without an accurate ground truth. The experimental results present a comprehensive analysis of the application of these metrics against a number of well known localisation strategies.
@cite_5 claim to have designed a visual localisation system that can operate at any time of the day or night and in all weather conditions. The robustness metric they used is the proportion of each dataset on which localisation fails, where failure is defined as receiving no observation for more than 20 m. This notion of robustness can be too restrictive, since different localisation algorithms have different failure modes.
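A failure-portion metric of this kind can be sketched as follows (a minimal illustration with names of our choosing, not the exact evaluation code of @cite_5):

```python
def failure_portion(update_positions, route_length, gap_threshold=20.0):
    """Fraction of a route travelled without localisation updates.

    update_positions: sorted distances along the route (metres) at which
    a localisation update was received.
    Any stretch longer than `gap_threshold` between consecutive updates
    counts as localisation failure.
    """
    checkpoints = [0.0] + list(update_positions) + [route_length]
    failed = 0.0
    for a, b in zip(checkpoints, checkpoints[1:]):
        gap = b - a
        if gap > gap_threshold:
            failed += gap
    return failed / route_length

# Updates every 5 m, except one 50 m outage, on a 100 m route
updates = [5, 10, 15, 20, 70, 75, 80, 85, 90, 95]
portion = failure_portion(updates, route_length=100.0)
```

Note the sensitivity to the threshold: an algorithm that updates every 19 m scores a perfect 0 under this metric, which is one reason a single hard cutoff can hide differences between failure modes.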
{ "cite_N": [ "@cite_5" ], "mid": [ "2414248975" ], "abstract": [ "This paper is about camera-only localisation in challenging outdoor environments, where changes in lighting, weather and season cause traditional localisation systems to fail. Conventional approaches to the localisation problem rely on point-features such as SIFT, SURF or BRIEF to associate landmark observations in the live image with landmarks stored in the map; however, these features are brittle to the severe appearance change routinely encountered in outdoor environments. In this paper, we propose an alternative to traditional point-features: we train place-specific linear SVM classifiers to recognise distinctive elements in the environment. The core contribution of this paper is an unsupervised mining algorithm which operates on a single mapping dataset to extract distinct elements from the environment for localisation. We evaluate our system on 205km of data collected from central Oxford over a period of six months in bright sun, night, rain, snow and at all times of the day. Our experiment consists of a comprehensive N-vs-N analysis on 22 laps of the approximately 10km route in central Oxford. With our proposed system, the portion of the route where localisation fails is reduced by a factor of 6, from 33.3 to 5.5 ." ] }
1904.08585
2971352265
Robustness and safety are crucial properties for the real-world application of autonomous vehicles. One of the most critical components of any autonomous system is localisation. During the last 20 years there has been significant progress in this area with the introduction of very efficient algorithms for mapping, localisation and SLAM. Many of these algorithms present impressive demonstrations for a particular domain, but fail to operate reliably with changes to the operating environment. The aspect of robustness has not received enough attention and localisation systems for self-driving vehicle applications are seldom evaluated for their robustness. In this paper we propose novel metrics to effectively quantify localisation robustness with or without an accurate ground truth. The experimental results present a comprehensive analysis of the application of these metrics against a number of well known localisation strategies.
@cite_1 demonstrated the robustness of their algorithms using a measure similar to the probability-of-absence-of-updates metric proposed in our paper. However, that work neither explains this particular metric in detail nor focuses on it.
{ "cite_N": [ "@cite_1" ], "mid": [ "2793048932" ], "abstract": [ "We present a method of improving visual place recognition and metric localisation under very strong appear- ance change. We learn an invertable generator that can trans- form the conditions of images, e.g. from day to night, summer to winter etc. This image transforming filter is explicitly designed to aid and abet feature-matching using a new loss based on SURF detector and dense descriptor maps. A network is trained to output synthetic images optimised for feature matching given only an input RGB image, and these generated images are used to localize the robot against a previously built map using traditional sparse matching approaches. We benchmark our results using multiple traversals of the Oxford RobotCar Dataset over a year-long period, using one traversal as a map and the other to localise. We show that this method significantly improves place recognition and localisation under changing and adverse conditions, while reducing the number of mapping runs needed to successfully achieve reliable localisation." ] }
1904.08585
2971352265
Robustness and safety are crucial properties for the real-world application of autonomous vehicles. One of the most critical components of any autonomous system is localisation. During the last 20 years there has been significant progress in this area with the introduction of very efficient algorithms for mapping, localisation and SLAM. Many of these algorithms present impressive demonstrations for a particular domain, but fail to operate reliably with changes to the operating environment. The aspect of robustness has not received enough attention and localisation systems for self-driving vehicle applications are seldom evaluated for their robustness. In this paper we propose novel metrics to effectively quantify localisation robustness with or without an accurate ground truth. The experimental results present a comprehensive analysis of the application of these metrics against a number of well known localisation strategies.
There is a considerable body of research on improving visual localisation robustness, owing to the brittleness of visual features with respect to time of day, lighting, and other environmental variables. One example is @cite_9 , which designs a ranking function assigning a quality rank to each map feature, so that an adaptive feature-selection policy can be enacted according to appearance variation caused by events such as the transition from day to night. @cite_4 tested robustness against fast motion and adverse lighting conditions. They concluded that robustness and accuracy trade off against each other when using fish-eye cameras, and improved robustness by incorporating multi-sensor fusion. @cite_7 proposed geometric point quality, point recognition probability, and localisation quality as robustness metrics, presenting a methodology specific to visual features and localisation.
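The appearance-dependent selection idea of @cite_9 rests on co-observability statistics from past traversals; a much-simplified sketch of such a ranking (ours, not the actual method, which conditions on the prevailing appearance condition) is:

```python
def rank_landmarks(observation_log):
    """Rank landmarks by how reliably they were observed in past traversals.

    observation_log: dict mapping landmark id -> list of bools, one per
    past traversal (True if the landmark was observed on that traversal).
    Returns landmark ids sorted from most to least reliably observable.
    """
    score = {lm: sum(obs) / len(obs) for lm, obs in observation_log.items()}
    return sorted(score, key=score.get, reverse=True)

# Hypothetical log over three past traversals
log = {"a": [True, True, True],
       "b": [True, False, False],
       "c": [True, True, False]}
ranking = rank_landmarks(log)
```

A selection policy can then keep only the top-ranked landmarks, reducing bandwidth while retaining the features most likely to be re-observed.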
{ "cite_N": [ "@cite_9", "@cite_4", "@cite_7" ], "mid": [ "2560863959", "1512698229", "1982447260" ], "abstract": [ "In this paper, we present an online landmark selection method for distributed long-term visual localization systems in bandwidth-constrained environments. Sharing a common map for online localization provides a fleet of autonomous vehicles with the possibility to maintain and access a consistent map source, and therefore reduce redundancy while increasing efficiency. However, connectivity over a mobile network imposes strict bandwidth constraints and thus the need to minimize the amount of exchanged data. The wide range of varying appearance conditions encountered during long-term visual localization offers the potential to reduce data usage by extracting only those visual cues which are relevant at the given time. Motivated by this, we propose an unsupervised method of adaptively selecting landmarks according to how likely these landmarks are to be observable under the prevailing appearance condition. The ranking function this selection is based upon exploits landmark co-observability statistics collected in past traversals through the mapped area. Evaluation is performed over different outdoor environments, large time-scales and varying appearance conditions, including the extreme transition from day-time to night-time, demonstrating that with our appearance-dependent selection method, we can significantly reduce the amount of landmarks used for localization while maintaining or even improving the localization performance.", "Here, we present a general framework for combining visual odometry and lidar odometry in a fundamental and first principle method. The method shows improvements in performance over the state of the art, particularly in robustness to aggressive motion and temporary lack of visual features. 
The proposed on-line method starts with visual odometry to estimate the ego-motion and to register point clouds from a scanning lidar at a high frequency but low fidelity. Then, scan matching based lidar odometry refines the motion estimation and point cloud registration simultaneously.We show results with datasets collected in our own experiments as well as using the KITTI odometry benchmark. Our proposed method is ranked #1 on the benchmark in terms of average translation and rotation errors, with a 0.75 of relative position drift. In addition to comparison of the motion estimation accuracy, we evaluate robustness of the method when the sensor suite moves at a high speed and is subject to significant ambient lighting changes.", "The main contribution of this paper is to bridge the gap between passive monocular SLAM and autonomous robotic systems. While passive monocular SLAM strives to reconstruct the scene and determine the current camera pose for any given camera motion, not every camera motion is equally suited for these tasks. In this work we propose methods to evaluate the quality of camera motions with respect to the generation of new useful map points and localization maintenance. In our experiments, we demonstrate the effectiveness of our measures using a low-cost quadrocopter. The proposed system only requires a single passive camera as exteroceptive sensor. Due to its explorative nature, the system achieves autonomous way-point navigation in challenging, unknown, GPS-denied environments." ] }
1904.08585
2971352265
Robustness and safety are crucial properties for the real-world application of autonomous vehicles. One of the most critical components of any autonomous system is localisation. During the last 20 years there has been significant progress in this area with the introduction of very efficient algorithms for mapping, localisation and SLAM. Many of these algorithms present impressive demonstrations for a particular domain, but fail to operate reliably with changes to the operating environment. The aspect of robustness has not received enough attention and localisation systems for self-driving vehicle applications are seldom evaluated for their robustness. In this paper we propose novel metrics to effectively quantify localisation robustness with or without an accurate ground truth. The experimental results present a comprehensive analysis of the application of these metrics against a number of well known localisation strategies.
In the field of GNSS localisation, @cite_10 suggested validating GPS signals against an IMU as a measure of GPS integrity, since IMU errors are relatively constant while GPS errors are strongly influenced by the environment.
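The underlying gating idea, rejecting GNSS fixes that are inconsistent with the egocentric prediction, can be sketched as follows (a simplified scalar-gate illustration with names of our choosing; @cite_10 gates on the full noise distributions):

```python
def gate_gps(gps_pos, predicted_pos, predicted_std, n_sigma=3.0):
    """Accept or reject a GPS fix against the IMU/odometry prediction.

    gps_pos, predicted_pos: (x, y) positions in metres.
    predicted_std: standard deviation of the prediction error (metres).
    A fix farther than n_sigma standard deviations from the prediction
    is treated as multipath-corrupted and discarded.
    """
    dx = gps_pos[0] - predicted_pos[0]
    dy = gps_pos[1] - predicted_pos[1]
    distance = (dx * dx + dy * dy) ** 0.5
    return distance <= n_sigma * predicted_std

ok = gate_gps((10.0, 0.5), (10.0, 0.0), predicted_std=0.5)   # within gate
bad = gate_gps((25.0, 0.0), (10.0, 0.0), predicted_std=0.5)  # multipath jump
```

Rejected fixes are simply withheld from the position filter, keeping its estimate consistent through multipath-heavy stretches.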
{ "cite_N": [ "@cite_10" ], "mid": [ "1920612112" ], "abstract": [ "The majority of Intelligent Transportation System (ITS) applications require an estimate of position, often generated through the fusion of satellite based positioning (such as GPS) with on-board inertial systems. To make the position estimates consistent it is necessary to understand the noise distribution of the information used in the estimation algorithm. For GNSS position information the noise distribution is commonly approximated as zero mean with Gaussian distribution, with the standard deviation used as an algorithm tuning parameter. A major issue with satellite based positioning is the well known problem of multipath which can introduce a non-linear and non-Gaussian error distribution for the position estimate. This paper introduces a novel algorithm that compares the noise distribution of the GNSS information with the more consistent noise distribution of the local egocentric sensors to effectively reject GNSS data that is inconsistent. The results presented in this paper show how the gating of the GNSS information in a strong multipath environment can maintain consistency in the position filter and dramatically improve the position estimate. This is particularly important when sharing information from different vehicles as in the case of cooperative perception due to the requirement to align information from various sources." ] }
1904.08585
2971352265
Robustness and safety are crucial properties for the real-world application of autonomous vehicles. One of the most critical components of any autonomous system is localisation. During the last 20 years there has been significant progress in this area with the introduction of very efficient algorithms for mapping, localisation and SLAM. Many of these algorithms present impressive demonstrations for a particular domain, but fail to operate reliably with changes to the operating environment. The aspect of robustness has not received enough attention and localisation systems for self-driving vehicle applications are seldom evaluated for their robustness. In this paper we propose novel metrics to effectively quantify localisation robustness with or without an accurate ground truth. The experimental results present a comprehensive analysis of the application of these metrics against a number of well known localisation strategies.
Utilising datasets with accurate, absolute ground truth, @cite_8 and @cite_0 proposed the relative pose error (RPE), comprising relative translation and rotation errors, and the absolute trajectory error (ATE) as accuracy metrics, providing a framework for evaluating and comparing SLAM systems on large public datasets. In the absence of ground truth, @cite_12 tested localisation repeatability as a performance metric, comparing the variance of position estimates from multiple robots moving along the same trajectories. Similarly, @cite_3 evaluated algorithms by implementing feature drop-out, since ground truth could not be measured. @cite_2 evaluated the robustness of bearing-only SLAM by considering rotational and translational errors in simulations with varying landmark density, rates of incorrect data association, and compass noise.
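The translation parts of ATE and RPE can be sketched as follows. This is a simplified, positions-only illustration assuming the trajectories are already time-associated and expressed in a common frame; the full metrics of @cite_8 and @cite_0 also handle rotation and trajectory alignment:

```python
import numpy as np

def ate_rmse(est, gt):
    """Absolute trajectory error: RMSE of absolute position differences."""
    est, gt = np.asarray(est, dtype=float), np.asarray(gt, dtype=float)
    return float(np.sqrt(((est - gt) ** 2).sum(axis=1).mean()))

def rpe_translation(est, gt, delta=1):
    """Relative pose error (translation part) over a fixed frame offset:
    compares estimated vs ground-truth motion between frames i and i+delta."""
    est, gt = np.asarray(est, dtype=float), np.asarray(gt, dtype=float)
    d_est = est[delta:] - est[:-delta]
    d_gt = gt[delta:] - gt[:-delta]
    err = np.linalg.norm(d_est - d_gt, axis=1)
    return float(np.sqrt((err ** 2).mean()))

gt = [[0, 0], [1, 0], [2, 0], [3, 0]]
est = [[0, 0], [1, 0.1], [2, 0.1], [3, 0.1]]  # one lateral jump, then drift
ate = ate_rmse(est, gt)
rpe = rpe_translation(est, gt)
```

On this toy trajectory the RPE is smaller than the ATE, illustrating the distinction the metrics are designed to capture: RPE penalises local, per-step error, while ATE accumulates the resulting global offset.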
{ "cite_N": [ "@cite_8", "@cite_3", "@cite_0", "@cite_2", "@cite_12" ], "mid": [ "2049479307", "1991793962", "2021851106", "", "2660163351" ], "abstract": [ "In this paper, we address the problem of creating an objective benchmark for evaluating SLAM approaches. We propose a framework for analyzing the results of a SLAM approach based on a metric for measuring the error of the corrected trajectory. This metric uses only relative relations between poses and does not rely on a global reference frame. This overcomes serious shortcomings of approaches using a global reference frame to compute the error. Our method furthermore allows us to compare SLAM approaches that use different estimation techniques or different sensor modalities since all computations are made based on the corrected trajectory of the robot. We provide sets of relative relations needed to compute our metric for an extensive set of datasets frequently used in the robotics community. The relations have been obtained by manually matching laser-range observations to avoid the errors caused by matching algorithms. Our benchmark framework allows the user to easily analyze and objectively compare different SLAM approaches.", "While the most accurate solution to off-line structure from motion (SFM) problems is undoubtedly to extract as much correspondence information as possible and perform global optimisation, sequential methods suitable for live video streams must approximate this to fit within fixed computational bounds. Two quite different approaches to real-time SFM — also called monocular SLAM (Simultaneous Localisation and Mapping) — have proven successful, but they sparsify the problem in different ways. Filtering methods marginalise out past poses and summarise the information gained over time with a probability distribution. Keyframe methods retain the optimisation approach of global bundle adjustment, but computationally must select only a small number of past frames to process. 
In this paper we perform the first rigorous analysis of the relative advantages of filtering and sparse optimisation for sequential monocular SLAM. A series of experiments in simulation as well using a real image SLAM system were performed by means of covariance propagation and Monte Carlo methods, and comparisons made using a combined cost accuracy measure. With some well-discussed reservations, we conclude that while filtering may have a niche in systems with low processing resources, in most modern applications keyframe optimisation gives the most accuracy per unit of computing time.", "In this paper, we present a novel benchmark for the evaluation of RGB-D SLAM systems. We recorded a large set of image sequences from a Microsoft Kinect with highly accurate and time-synchronized ground truth camera poses from a motion capture system. The sequences contain both the color and depth images in full sensor resolution (640 × 480) at video frame rate (30 Hz). The ground-truth trajectory was obtained from a motion-capture system with eight high-speed tracking cameras (100 Hz). The dataset consists of 39 sequences that were recorded in an office environment and an industrial hall. The dataset covers a large variety of scenes and camera motions. We provide sequences for debugging with slow motions as well as longer trajectories with and without loop closures. Most sequences were recorded from a handheld Kinect with unconstrained 6-DOF motions but we also provide sequences from a Kinect mounted on a Pioneer 3 robot that was manually navigated through a cluttered indoor environment. To stimulate the comparison of different approaches, we provide automatic evaluation tools both for the evaluation of drift of visual odometry systems and the global pose error of SLAM systems. 
The benchmark website [1] contains all data, detailed descriptions of the scenes, specifications of the data formats, sample code, and evaluation tools.", "", "The purpose of this project is to investigate solutions for Simultaneous Localization and Mapping (SLAM) in the context of Automated Guided Vehicles (AGVs). This thesis presents implementation details of a prototype system for AGVs, which was developed with the intention of allowing application-specific testing of SLAM. Three of the most prominent open-source SLAM algorithms, available in ROS, have been evaluated and critically compared. Furthermore, basic background and explanation of the critical problems of SLAM are presented. The SLAM algorithms have been evaluated based on resulting map quality as well as resource requirements. Map quality is tested based on both visual comparison with a ground truth and the correctness of estimated distances in the map. In addition to this, results are presented from tests which have been conducted in order to test the SLAMgenerated maps in AGV application areas. This includes tests where critical tasks, such as the robot’s precision while navigating with the use of a SLAM-generated map, has been assessed. The presented prototype system is based on the Robot Operating System (ROS), which includes state-of-the-art libraries and tools for robotic navigation. The prototype enables testing of navigation and mapping software available in ROS by publishing sensor data, from a physical AGV. The implemented software publishes readings from a laser range scanner, wheel encoders and a gyroscope. These sensor reading are published in a standardized way, making them accessible to applications in the ROS framework. The presented results from the tests answers the question of whether or not SLAM is suitable in the intended environment." ] }
1904.08083
2939704773
In the study of computational effects, it is important to consider the notion of computational effects with parameters. The need of such a notion arises when, for example, statically estimating the range of effects caused by a program, or studying the ways in which effects with local scopes are derived from effects with only the global scope. Extending the classical observation that computational effects can be modeled by monads, these computational effects with parameters are modeled by various mathematical structures including graded monads and indexed monads, which are two different generalizations of ordinary monads. The former has been employed in the semantics of effect systems, whereas the latter in the study of the relationship between the local state monads and the global state monads, each exemplifying the two situations mentioned above. However, despite their importance, the mathematical theory of graded and indexed monads is far less developed than that of ordinary monads. Here we develop the mathematical theory of graded and indexed monads from a 2-categorical viewpoint. We first introduce four 2-categories and observe that in two of them graded monads are in fact monads in the 2-categorical sense, and similarly indexed monads are monads in the 2-categorical sense in the other two. We then construct explicitly the Eilenberg--Moore and the Kleisli objects of graded monads, and the Eilenberg--Moore objects of indexed monads in the sense of Street in appropriate 2-categories among these four. The corresponding results for graded and indexed comonads also follow. We expect that the current work will provide a theoretical foundation to a unified study of computational effects with parameters, or dually (using the comonad variants), of computational resources with parameters, arising for example in Bounded Linear Logic.
Examples of computational effects include those behaviors of programs involving global state, nondeterministic branching, or input/output. Moggi @cite_23 brought a breakthrough to the theory of computational effects by advocating that these diverse kinds of computational effects can be uniformly modeled by monads. For example, corresponding to the notion of global state there is a monad @math (say, on @math ) called the global state monad, defined as @math where @math is the set of states. Similarly, there are monads for nondeterministic branch, input output, etc. What is especially interesting in Moggi's approach is that, not only do the mathematical concepts that model these computational effects possess the suitable monad structures, but the theory of monads developed in pure mathematics can be fruitfully applied to the study of computational effects. Indeed, Moggi @cite_23 @cite_20 constructed categorical models of his calculus for computational effects via the Kleisli construction.
{ "cite_N": [ "@cite_20", "@cite_23" ], "mid": [ "1997143185", "2156876717" ], "abstract": [ "Abstract The λ-calculus is considered a useful mathematical tool in the study of programming languages, since programs can be identified with λ-terms. However, if one goes further and uses βη-conversion to prove equivalence of programs, then a gross simplification is introduced (programs are identified with total functions from values to values ) that may jeopardise the applicability of theoretical results. In this paper we introduce calculi, based on a categorical semantics for computations , that provide a correct basis for proving equivalence of programs for a wide range of notions of computation .", "The lambda -calculus is considered a useful mathematical tool in the study of programming languages. However, if one uses beta eta -conversion to prove equivalence of programs, then a gross simplification is introduced. The author gives a calculus based on a categorical semantics for computations, which provides a correct basis for proving equivalence of programs, independent from any specific computational model. >" ] }
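The state monad described above can be written out concretely. The following is an illustrative Python sketch (the cited works are categorical, so all names here are ours): a stateful computation is a function from the store to a (value, new store) pair, with `unit` and `bind` playing the roles of Moggi's unit and Kleisli extension.

```python
# Illustrative sketch of the global state monad T A = S -> (A, S):
# a computation takes the current store and returns a value plus the
# updated store. `unit` and `bind` are the monad's unit and Kleisli extension.

def unit(a):
    """Embed a pure value: the store passes through unchanged."""
    return lambda s: (a, s)

def bind(m, f):
    """Sequence m with f, threading the store through both."""
    def run(s):
        a, s1 = m(s)       # run the first computation
        return f(a)(s1)    # feed its value to f, continue with the new store
    return run

# The two effect operations of global state.
def get():
    return lambda s: (s, s)

def put(s_new):
    return lambda s: (None, s_new)

# Example: return the old store value while incrementing the store.
prog = bind(get(), lambda old:
       bind(put(old + 1), lambda _:
       unit(old)))
value, final_state = prog(41)   # value == 41, final_state == 42
```

Running `prog` on an initial store of 41 returns the old value 41 and leaves the store at 42, showing how `bind` hides the state-threading plumbing.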
1904.08083
2939704773
In the study of computational effects, it is important to consider the notion of computational effects with parameters. The need of such a notion arises when, for example, statically estimating the range of effects caused by a program, or studying the ways in which effects with local scopes are derived from effects with only the global scope. Extending the classical observation that computational effects can be modeled by monads, these computational effects with parameters are modeled by various mathematical structures including graded monads and indexed monads, which are two different generalizations of ordinary monads. The former has been employed in the semantics of effect systems, whereas the latter in the study of the relationship between the local state monads and the global state monads, each exemplifying the two situations mentioned above. However, despite their importance, the mathematical theory of graded and indexed monads is far less developed than that of ordinary monads. Here we develop the mathematical theory of graded and indexed monads from a 2-categorical viewpoint. We first introduce four 2-categories and observe that in two of them graded monads are in fact monads in the 2-categorical sense, and similarly indexed monads are monads in the 2-categorical sense in the other two. We then construct explicitly the Eilenberg--Moore and the Kleisli objects of graded monads, and the Eilenberg--Moore objects of indexed monads in the sense of Street in appropriate 2-categories among these four. The corresponding results for graded and indexed comonads also follow. We expect that the current work will provide a theoretical foundation to a unified study of computational effects with parameters, or dually (using the comonad variants), of computational resources with parameters, arising for example in Bounded Linear Logic.
Nowadays, computational effects with parameters are becoming increasingly important. For example, there are type systems called effect systems @cite_13 , which statically estimate the range of effects caused by a program. Effect systems achieve this estimation by introducing, instead of a single effect (e.g., global state), a family of effects parametrized by ranges (e.g., global state together with a specified set of registers, the intuition being that it contains those registers that are manipulated in an execution of the program). On the other hand, there is an active line of research on the nature of local state @cite_12 @cite_9 @cite_8 @cite_22 (which, in addition to manipulating registers, is able to increase or decrease the number of registers in use), especially on its relation to global state. One attractive view @cite_22 is that local state is obtained by "gluing" a family of global states with different numbers of registers, and there again the idea of parameters shows itself.
{ "cite_N": [ "@cite_22", "@cite_8", "@cite_9", "@cite_13", "@cite_12" ], "mid": [ "2115785609", "1963772255", "2148457191", "2046137117", "1521014576" ], "abstract": [ "One main challenge of the theory of computational effects is to understand how to combine various notions of effects in a meaningful way. Here, we study the particular case of the local state monad, which we would like to express as the result of combining together a family of global state monads parametrized by the number of available registers. To that purpose, we develop a notion of indexed monad which refines and generalizes Power's recent notion of indexed Lawvere theory. One main achievement of the paper is to integrate the block structure necessary to encode allocation as part of the resulting notion of indexed state monad. We then explain how to recover the local state monad from the functorial data provided by our notion of indexed state monad. This reconstruction is based on the guiding idea that an algebra of the indexed state monad should be defined as a section of a 2-categorical notion of fibration associated to the indexed state monad by a Grothendieck construction.", "Starting with Moggi's work on monads as refined to Lawvere theories, we give a general construct that extends denotational semantics for a global computational effect canonically to yield denotational semantics for a corresponding local computational effect. Our leading example yields a construction of the usual denotational semantics for local state from that for global state. Given any Lawvere theory L, possibly countable and possibly enriched, we first give a universal construction that extends L, hence the global operations and equations of a given effect, to incorporate worlds of arbitrary finite size. 
Then, making delicate use of the final comodel of the ordinary Lawvere theory L, we give a construct that uniformly allows us to model block, the universality of the final comodel yielding a universal property of the construct. We illustrate both the universal extension of L and the canonical construction of block by seeing how they work in the case of state.", "Monads for global state and local state have been used to provide semantics of programming languages for many years. There is a computation- ally natural presentation of an ordinary Lawvere theory that corresponds to the monad on Set for global state, inevitably called the Lawvere theory for global state. Here, we introduce a notion of indexed Lawvere theory and use it to give a Lawvere-style account of local state, extending the theorem for global state to local state. En route, we develop the notion of comodel of a Lawvere theory and exploit a universal characterisation of the category of worlds for local state. Ultimately, we give both syntactic and semantic characterisations of the operation block that allows one to move between worlds and use them to characterise the monad for local state.", "We present a new approach to programming languages for parallel computers that uses an effect system to discover expression scheduling constraints. This effect system is part of a 'kinded' type system with three base kinds: types , which describe the value that an expression may return; effects , which describe the side-effects that an expression may have; and regions , which describe the area of the store in which side-effects may occur. Types, effects and regions are collectively called descriptions . Expressions can be abstracted over any kind of description variable -- this permits type, effect and region polymorphism. 
Unobservable side-effects can be masked by the effect system; an effect soundness property guarantees that the effects computed statically by the effect system are a conservative approximation of the actual side-effects that a given expression may have. The effect system we describe performs certain kinds of side-effect analysis that were not previously feasible. Experimental data from the programming language FX indicate that an effect system can be used effectively to compile programs for parallel computers.", "We model notions of computation using algebraic operations and equations. We show that these generate several of the monads of primary interest that have been used to model computational effects, with the striking omission of the continuations monad. We focus on semantics for global and local state, showing that taking operations and equations as primitive yields a mathematical relationship that reflects their computational relationship." ] }
1904.08083
2939704773
In the study of computational effects, it is important to consider the notion of computational effects with parameters. The need of such a notion arises when, for example, statically estimating the range of effects caused by a program, or studying the ways in which effects with local scopes are derived from effects with only the global scope. Extending the classical observation that computational effects can be modeled by monads, these computational effects with parameters are modeled by various mathematical structures including graded monads and indexed monads, which are two different generalizations of ordinary monads. The former has been employed in the semantics of effect systems, whereas the latter in the study of the relationship between the local state monads and the global state monads, each exemplifying the two situations mentioned above. However, despite their importance, the mathematical theory of graded and indexed monads is far less developed than that of ordinary monads. Here we develop the mathematical theory of graded and indexed monads from a 2-categorical viewpoint. We first introduce four 2-categories and observe that in two of them graded monads are in fact monads in the 2-categorical sense, and similarly indexed monads are monads in the 2-categorical sense in the other two. We then construct explicitly the Eilenberg--Moore and the Kleisli objects of graded monads, and the Eilenberg--Moore objects of indexed monads in the sense of Street in appropriate 2-categories among these four. The corresponding results for graded and indexed comonads also follow. We expect that the current work will provide a theoretical foundation to a unified study of computational effects with parameters, or dually (using the comonad variants), of computational resources with parameters, arising for example in Bounded Linear Logic.
Corresponding to this on the side of computational effects, various generalizations of monads have been employed in the study of those computational effects with parameters. The notion of graded monad has been used to give semantics of effect systems @cite_5 , whereas the notion of indexed monad arose and has been applied in the study of local state @cite_9 @cite_8 @cite_22 .
{ "cite_N": [ "@cite_5", "@cite_9", "@cite_22", "@cite_8" ], "mid": [ "2007738069", "2148457191", "2115785609", "1963772255" ], "abstract": [ "We study fundamental properties of a generalisation of monad called parametric effect monad, and apply it to the interpretation of general effect systems whose effects have sequential composition operators. We show that parametric effect monads admit analogues of the structures and concepts that exist for monads, such as Kleisli triples, the state monad and the continuation monad, Plotkin and Power's algebraic operations, and the categorical ┬┬-lifting. We also show a systematic method to generate both effects and a parametric effect monad from a monad morphism. Finally, we introduce two effect systems with explicit and implicit subeffecting, and discuss their denotational semantics and the soundness of effect systems.", "Monads for global state and local state have been used to provide semantics of programming languages for many years. There is a computation- ally natural presentation of an ordinary Lawvere theory that corresponds to the monad on Set for global state, inevitably called the Lawvere theory for global state. Here, we introduce a notion of indexed Lawvere theory and use it to give a Lawvere-style account of local state, extending the theorem for global state to local state. En route, we develop the notion of comodel of a Lawvere theory and exploit a universal characterisation of the category of worlds for local state. Ultimately, we give both syntactic and semantic characterisations of the operation block that allows one to move between worlds and use them to characterise the monad for local state.", "One main challenge of the theory of computational effects is to understand how to combine various notions of effects in a meaningful way. 
Here, we study the particular case of the local state monad, which we would like to express as the result of combining together a family of global state monads parametrized by the number of available registers. To that purpose, we develop a notion of indexed monad which refines and generalizes Power's recent notion of indexed Lawvere theory. One main achievement of the paper is to integrate the block structure necessary to encode allocation as part of the resulting notion of indexed state monad. We then explain how to recover the local state monad from the functorial data provided by our notion of indexed state monad. This reconstruction is based on the guiding idea that an algebra of the indexed state monad should be defined as a section of a 2-categorical notion of fibration associated to the indexed state monad by a Grothendieck construction.", "Starting with Moggi's work on monads as refined to Lawvere theories, we give a general construct that extends denotational semantics for a global computational effect canonically to yield denotational semantics for a corresponding local computational effect. Our leading example yields a construction of the usual denotational semantics for local state from that for global state. Given any Lawvere theory L, possibly countable and possibly enriched, we first give a universal construction that extends L, hence the global operations and equations of a given effect, to incorporate worlds of arbitrary finite size. Then, making delicate use of the final comodel of the ordinary Lawvere theory L, we give a construct that uniformly allows us to model block, the universality of the final comodel yielding a universal property of the construct. We illustrate both the universal extension of L and the canonical construction of block by seeing how they work in the case of state." ] }
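To make the idea of grading concrete, here is a small Python sketch (our own construction, not taken from the cited papers) of a state monad graded by the set of registers a computation may touch. Grades compose by union under `bind`, mirroring the way an effect system over-approximates the effects of sequenced programs.

```python
# Sketch of a graded state monad: a computation is a pair (grade, run),
# where the grade is the set of registers it may touch. unit has the
# unit grade (the empty set); bind takes the union of the two grades,
# i.e. bind : T_e1 A -> (A -> T_e2 B) -> T_(e1 union e2) B.

def unit(a):
    return (frozenset(), lambda store: (a, store))

def bind(m, grade_f, f):
    """grade_f is the statically declared grade e2 of f."""
    e1, run1 = m
    def run(store):
        a, store1 = run1(store)
        e2, run2 = f(a)
        assert e2 <= grade_f          # f must stay within its declared grade
        return run2(store1)
    return (e1 | grade_f, run)

def read(r):
    """Read register r; grade {r}."""
    return (frozenset({r}), lambda store: (store[r], store))

def write(r, v):
    """Write register r; grade {r}."""
    return (frozenset({r}), lambda store: (None, {**store, r: v}))

# Copy register 'x' into 'y'; the composite grade is {'x', 'y'}.
grade, run = bind(read('x'), frozenset({'y'}), lambda v: write('y', v))
result, store = run({'x': 7, 'y': 0})   # store == {'x': 7, 'y': 7}
```

The computed grade `{'x', 'y'}` is exactly the static effect annotation an effect system would assign to this program.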
1904.08375
2937036051
One technique to improve the retrieval effectiveness of a search engine is to expand documents with terms that are related or representative of the documents' content. From the perspective of a question answering system, a useful representation of a document might comprise the questions it can potentially answer. Following this observation, we propose a simple method that predicts which queries will be issued for a given document and then expands it with those predictions. Our predictions are made with a vanilla sequence-to-sequence model trained with supervised learning using a dataset of pairs of query and relevant documents. By combining our method with a highly-effective re-ranking component, we achieve the state of the art in two retrieval tasks. In a latency-critical regime, retrieval results alone (without the re-ranking component) approach the effectiveness of more computationally expensive neural re-rankers while taking only a fraction of the query latency.
Prior to the advent of continuous vector space representations and neural ranking models, information retrieval techniques were mostly limited to keyword matching (i.e., "one-hot" representations). Alternatives such as latent semantic indexing @cite_9 and its various successors never gained significant traction. Approaches to tackling the vocabulary mismatch problem within these constraints include relevance feedback @cite_2 , query expansion @cite_20 @cite_0 , and modeling term relationships using statistical translation @cite_6 . These techniques share a focus on enhancing query representations to better match documents.
{ "cite_N": [ "@cite_9", "@cite_6", "@cite_0", "@cite_2", "@cite_20" ], "mid": [ "2147152072", "2062270497", "2002306339", "2164547069", "2105106523" ], "abstract": [ "A new method for automatic indexing and retrieval is described. The approach is to take advantage of implicit higher-order structure in the association of terms with documents (“semantic structure”) in order to improve the detection of relevant documents on the basis of terms found in queries. The particular technique used is singular-value decomposition, in which a large term by document matrix is decomposed into a set of ca. 100 orthogonal factors from which the original matrix can be approximated by linear combination. Documents are represented by ca. 100 item vectors of factor weights. Queries are represented as pseudo-document vectors formed from weighted combinations of terms, and documents with supra-threshold cosine values are returned. initial tests find this completely automatic method for retrieval to be promising.", "We propose a new probabilistic approach to information retrieval based upon the ideas and methods of statistical machine translation. The central ingredient in this approach is a statistical model of how a user might distill or \"translate\" a given document into a query. To assess the relevance of a document to a user's query, we estimate the probability that the query would have been generated as a translation of the document, and factor in the user's general preferences in the form of a prior distribution over documents. We propose a simple, well motivated model of the document-to-query translation process, and describe an algorithm for learning the parameters of this model in an unsupervised manner from a collection of documents. As we show, one can view this approach as a generalization and justification of the \"language modeling\" strategy recently proposed by Ponte and Croft. 
In a series of experiments on TREC data, a simple translation-based retrieval system performs well in comparison to conventional retrieval techniques. This prototype system only begins to tap the full potential of translation-based retrieval.", "Techniques for automatic query expansion have been extensively studied in information research as a means of addressing the word mismatch between queries and documents. These techniques can be categorized as either global or local. While global techniques rely on analysis of a whole collection to discover word relationships, local techniques emphasize analysis of the top-ranked documents retrieved for a query. While local techniques have shown to be more effective that global techniques in general, existing local techniques are not robust and can seriously hurt retrieved when few of the retrieval documents are relevant. We propose a new technique, called local context analysis, which selects expansion terms based on cooccurrence with the query terms within the top-ranked documents. Experiments on a number of collections, both English and non-English, show that local context analysis offers more effective and consistent retrieval results.", "1332840 Primer compositions DOW CORNINGCORP 6 Oct 1971 [30 Dec 1970] 46462 71 Heading C3T [Also in Divisions B2 and C4] A primer composition comprises 1 pbw of tetra ethoxy or propoxy silane or poly ethyl or propyl silicate or any mixture thereof, 0A75-2A5 pbw of bis(acetylacetonyl) diisopropyl titanate, 0A75- 5 pbw of a compound CF 3 CH 2 CH 2 Si[OSi(CH 3 ) 2 - X] 3 wherein each X is H or -CH 2 CH 2 Si- (OOCCH 3 ) 3 , at least one being the latter, and 1-20 pbw of a ketone, hydrocarbon or halohydrocarbon solvent boiling not above 150‹ C. In the examples 1 pbw each of bis(acetylacetonyl)diisopropyl titanate, polyethyl silicate and are dissolved in 10 pbw of acetone or in 9 pbw of light naphtha and 1 of methylisobutylketone. 
The solutions are used to prime Ti panels, to which a Pt-catalysed room-temperature vulcanizable poly-trifluoropropylmethyl siloxanebased rubber is then applied.", "Applications such as office automation, news filtering, help facilities in complex systems, and the like require the ability to retrieve documents from full-text databases where vocabulary problems can be particularly severe. Experiments performed on small collections with single-domain thesauri suggest that expanding query vectors with words that are lexically related to the original query words can ameliorate some of the problems of mismatched vocabularies. This paper examines the utility of lexical query expansion in the large, diverse TREC collection. Concepts are represented by WordNet synonym sets and are expanded by following the typed links included in WordNet. Experimental results show this query expansion technique makes little difference in retrieval effectiveness if the original queries are relatively complete descriptions of the information being sought even when the concepts to be expanded are selected by hand. Less well developed queries can be significantly improved by expansion of hand-chosen concepts. However, an automatic procedure that can approximate the set of hand picked synonym sets has yet to be devised, and expanding by the synonym sets that are automatically generated can degrade retrieval performance." ] }
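As a toy illustration of how latent semantic indexing bridges vocabulary mismatch, the sketch below (the term-document matrix is invented purely for illustration) projects documents and a query into a rank-2 SVD space, where a query term retrieves a document that never contains it.

```python
import numpy as np

# Rows = terms, columns = documents. "car" and "auto" never co-occur,
# but both co-occur with "engine", so LSI places them close together.
terms = ["car", "auto", "engine", "flower"]
X = np.array([
    [1, 0, 0],   # car    : doc 0
    [0, 1, 0],   # auto   : doc 1
    [1, 1, 0],   # engine : docs 0 and 1
    [0, 0, 2],   # flower : doc 2
], dtype=float)

# Truncated SVD: X is approximated by U_k S_k V_k^T with k = 2 dimensions.
k = 2
U, S, Vt = np.linalg.svd(X, full_matrices=False)
Uk, Sk, Vtk = U[:, :k], S[:k], Vt[:k, :]

# Fold the one-word query "auto" in as a pseudo-document.
q = np.array([0, 1, 0, 0], dtype=float)
q_latent = (1.0 / Sk) * (Uk.T @ q)

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

scores = [cos(q_latent, Vtk[:, j]) for j in range(Vtk.shape[1])]
# Doc 0 scores highly for "auto" even though it never contains that term,
# while the unrelated doc 2 ("flower") scores near zero.
```

A one-hot keyword matcher would give doc 0 a score of zero for this query; the latent space recovers the association through shared co-occurrence with "engine".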
1904.08375
2937036051
One technique to improve the retrieval effectiveness of a search engine is to expand documents with terms that are related or representative of the documents' content. From the perspective of a question answering system, a useful representation of a document might comprise the questions it can potentially answer. Following this observation, we propose a simple method that predicts which queries will be issued for a given document and then expands it with those predictions. Our predictions are made with a vanilla sequence-to-sequence model trained with supervised learning using a dataset of pairs of query and relevant documents. By combining our method with a highly-effective re-ranking component, we achieve the state of the art in two retrieval tasks. In a latency-critical regime, retrieval results alone (without the re-ranking component) approach the effectiveness of more computationally expensive neural re-rankers while taking only a fraction of the query latency.
In this work, we adopt the alternative approach of enriching document representations @cite_15 @cite_11 @cite_18 , which works particularly well for speech @cite_14 and multi-lingual retrieval, where terms are noisy. Document expansion techniques have been less popular with IR researchers because they are less amenable to rapid experimentation: the corpus needs to be re-indexed every time the expansion technique changes (typically a costly process), whereas manipulations of query representations can happen at retrieval time (and hence are much faster). The success of document expansion has also been mixed; for example, studies that explore both query expansion and document expansion in the same framework conclude that the former is consistently more effective.
{ "cite_N": [ "@cite_15", "@cite_14", "@cite_18", "@cite_11" ], "mid": [ "2129971563", "2028282769", "2048978851", "2149768609" ], "abstract": [ "Language model information retrieval depends on accurate estimation of document models. In this paper, we propose a document expansion technique to deal with the problem of insufficient sampling of documents. We construct a probabilistic neighborhood for each document, and expand the document with its neighborhood information. The expanded document provides a more accurate estimation of the document model, thus improves retrieval accuracy. Moreover, since document expansion and pseudo feedback exploit different corpus structures, they can be combined to further improve performance. The experiment results on several different data sets demonstrate the effectiveness of the proposed document expansion method.", "Methods of document expansion for a speech retrieval document by a recognizer. A database of vectors of automatic transcriptions of documents is accessed and the vectors are truncated by removing all terms that are not recognizable by the recognizer to create truncated vectors. Terms in the vectors are then weighted to associate the truncated vectors with the untruncated vectors. Terms not recognized by the recognizer are then added back to the weighted, truncated vectors. The retrieval effectiveness may then be measured.", "Collections containing a large number of short documents are becoming increasingly common. As these collections grow in number and size, providing effective retrieval of brief texts presents a significant research problem. We propose a novel approach to improving information retrieval (IR) for short texts based on aggressive document expansion. Starting from the hypothesis that short documents tend to be about a single topic, we submit documents as pseudo-queries and analyze the results to learn about the documents themselves. 
Document expansion helps in this context because short documents yield little in the way of term frequency information. However, as we show, the proposed technique helps us model not only lexical properties, but also temporal properties of documents. We present experimental results using a corpus of microblog (Twitter) data and a corpus of metadata records from a federated digital library. With respect to established baselines, results of these experiments show that applying our proposed document expansion method yields significant improvements in effectiveness. Specifically, our method improves the lexical representation of documents and the ability to let time influence retrieval.", "Traditional interactive information retrieval systems function by creating inverted lists, or term indexes. For every term in the vocabulary, a list is created that contains the documents in which that term occurs and its relative frequency within each document. Retrieval algorithms then use these term frequencies alongside other collection statistics to identify the matching documents for a query. In this paper, we turn the process around: instead of indexing documents, we index query result sets. First, queries are run through a chosen retrieval system. For each query, the resulting document IDs are treated as terms and the score or rank of the document is used as the frequency statistic. An index of documents retrieved by basis queries is created. We call this index a reverted index. With reverted indexes, standard retrieval algorithms can retrieve the matching queries (as results) for a set of documents (used as queries). These recovered queries can then be used to identify additional documents, or to aid the user in query formulation, selection, and feedback." ] }
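The document-expansion pipeline described above can be sketched end to end. In the sketch below (all documents and predicted queries are invented; the trained sequence-to-sequence predictor is replaced by a hypothetical hand-written stand-in), predicted queries are appended to each document before indexing, so plain term matching can bridge the vocabulary gap.

```python
# Sketch of document expansion for retrieval: predicted queries are
# appended to each document before indexing. A hand-written stand-in
# replaces the trained seq2seq query predictor; corpus is illustrative.

from collections import Counter

docs = {
    "d1": "the manhattan project produced the first nuclear weapons",
    "d2": "photosynthesis converts light energy into chemical energy",
}

def predict_queries(text):
    """Stand-in for a seq2seq model trained on (query, relevant doc) pairs."""
    if "nuclear" in text:
        return ["who invented the atomic bomb"]
    if "photosynthesis" in text:
        return ["how do plants make food"]
    return []

def expand(docs):
    """Append predicted queries to each document's text before indexing."""
    return {d: t + " " + " ".join(predict_queries(t)) for d, t in docs.items()}

def score(query, text):
    """Simple bag-of-words term matching (a proxy for BM25)."""
    counts = Counter(text.split())
    return sum(counts[w] for w in query.split())

expanded = expand(docs)
ranked = sorted(expanded, key=lambda d: score("atomic bomb", expanded[d]),
                reverse=True)
# Before expansion, "atomic bomb" matches nothing in d1; after, d1 ranks first.
```

The key property is that the expensive model runs once per document at indexing time, so query-time retrieval stays as cheap as ordinary term matching.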
1904.08375
2937036051
One technique to improve the retrieval effectiveness of a search engine is to expand documents with terms that are related or representative of the documents' content. From the perspective of a question answering system, a useful representation of a document might comprise the questions it can potentially answer. Following this observation, we propose a simple method that predicts which queries will be issued for a given document and then expands it with those predictions. Our predictions are made with a vanilla sequence-to-sequence model trained with supervised learning using a dataset of pairs of query and relevant documents. By combining our method with a highly-effective re-ranking component, we achieve the state of the art in two retrieval tasks. In a latency-critical regime, retrieval results alone (without the re-ranking component) approach the effectiveness of more computationally expensive neural re-rankers while taking only a fraction of the query latency.
A new generation of neural ranking models offers solutions to the vocabulary mismatch problem based on continuous word representations and the ability to learn highly non-linear models of relevance; see recent surveys of this literature. However, due to the size of most corpora and the impracticality of applying inference over every document in response to a query, nearly all implementations today deploy neural networks as re-rankers over initial candidate sets retrieved using standard inverted indexes and a term-based ranking model such as BM25 @cite_8 . Our work fits into this broad approach, where we take advantage of neural networks to augment document representations prior to indexing; term-based retrieval then happens exactly as before. Of course, retrieved results can still be re-ranked by a state-of-the-art neural model @cite_13 , but the output of term-based ranking already appears to be quite good. In other words, our document expansion approach can leverage neural networks without their high inference-time costs.
{ "cite_N": [ "@cite_13", "@cite_8" ], "mid": [ "2909544278", "1482214997" ], "abstract": [ "Recently, neural models pretrained on a language modeling task, such as ELMo (, 2017), OpenAI GPT (, 2018), and BERT (, 2018), have achieved impressive results on various natural language processing tasks such as question-answering and natural language inference. In this paper, we describe a simple re-implementation of BERT for query-based passage re-ranking. Our system is the state of the art on the TREC-CAR dataset and the top entry in the leaderboard of the MS MARCO passage retrieval task, outperforming the previous state of the art by 27 (relative) in MRR@10. The code to reproduce our results is available at this https URL", "City submitted two runs each for the automatic ad hoc, very large collection track, automatic routing and Chinese track; and took part in the interactive and filtering tracks. The method used was : expansion using terms from the top documents retrieved by a pilot search on topic terms. Additional runs seem to show that we would have done better without expansion. Twor runs using the method of city96al were also submitted for the Very Large Collection track. The training database and its relevant documents were partitioned into three parts. Working on a pool of terms extracted from the relevant documents for one partition, an iterative procedure added or removed terms and or varied their weights. After each change in query content or term weights a score was calculated by using the current query to search a second protion of the training database and evaluating the results against the corresponding set of relevant documents. Methods were compared by evaluating queries predictively against the third training partition. Queries from different methods were then merged and the results evaluated in the same way. Two runs were submitted, one based on character searching and the other on words or phrases. 
Much of the work involved investigating plausible methods of applying Okapi-style weighting to phrases" ] }
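For reference, the term-based first-stage ranking discussed above can be sketched with a plain BM25 scorer. This is one common variant of the formula (idf with +1 inside the log to keep scores non-negative); `k1` and `b` are set to frequently used defaults, and the corpus is invented.

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score every document in `docs` against `query` with BM25."""
    tokenized = [d.split() for d in docs]
    N = len(tokenized)
    avgdl = sum(len(t) for t in tokenized) / N
    df = Counter()
    for t in tokenized:
        df.update(set(t))                 # document frequency per term
    scores = []
    for t in tokenized:
        tf = Counter(t)
        s = 0.0
        for w in query.split():
            if tf[w] == 0:
                continue
            idf = math.log(1 + (N - df[w] + 0.5) / (df[w] + 0.5))
            s += idf * tf[w] * (k1 + 1) / (
                tf[w] + k1 * (1 - b + b * len(t) / avgdl))
        scores.append(s)
    return scores

docs = [
    "neural ranking models rerank candidate documents",
    "bm25 retrieves candidates from an inverted index",
    "cats sleep all day",
]
scores = bm25_scores("inverted index", docs)   # only docs[1] matches
```

In a real pipeline this cheap scorer selects the candidate set; a neural model then re-ranks only those candidates.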
1904.08279
2937348976
Deep computer vision systems being vulnerable to imperceptible and carefully crafted noise have raised questions regarding the robustness of their decisions. We take a step back and approach this problem from an orthogonal direction. We propose to enable black-box neural networks to justify their reasoning both for clean and for adversarial examples by leveraging attributes, i.e. visually discriminative properties of objects. We rank attributes based on their class relevance, i.e. how the classification decision changes when the input is visually slightly perturbed, as well as image relevance, i.e. how well the attributes can be localized on both clean and perturbed images. We present comprehensive experiments for attribute prediction, adversarial example generation, adversarially robust learning, and their qualitative and quantitative analysis using predicted attributes on three benchmark datasets.
Adversarial Examples. Small carefully crafted perturbations, i.e. , added to the inputs of deep neural networks, i.e. , can easily fool the classifiers trained using deep learning @cite_9 . Such attacks involve the iterative fast gradient sign method @cite_45 , Jacobian-based saliency map attacks @cite_12 , one-pixel attacks @cite_26 , Carlini and Wagner attacks @cite_53 , and universal attacks @cite_51 , designed not only for classification but also for object detection @cite_13 , segmentation @cite_7 , autoencoders @cite_32 , generative models @cite_44 , and reinforcement learning @cite_39 . Most of these perturbations are transferable between different networks and do not require access to the network's architecture or parameters, i.e. .
{ "cite_N": [ "@cite_13", "@cite_26", "@cite_7", "@cite_53", "@cite_9", "@cite_32", "@cite_39", "@cite_44", "@cite_45", "@cite_51", "@cite_12" ], "mid": [ "2604505099", "2964006983", "2963491874", "2963857521", "2964153729", "2559944883", "2941205169", "2594717275", "2963542245", "2243397390", "2180612164" ], "abstract": [ "It has been well demonstrated that adversarial examples, i.e., natural images with visually imperceptible perturbations added, cause deep networks to fail on image classification. In this paper, we extend adversarial examples to semantic segmentation and object detection which are much more difficult. Our observation is that both segmentation and detection are based on classifying multiple targets on an image (e.g., the target is a pixel or a receptive field in segmentation, and an object proposal in detection). This inspires us to optimize a loss function over a set of targets for generating adversarial perturbations. Based on this, we propose a novel algorithm named Dense Adversary Generation (DAG), which applies to the state-of-the-art networks for segmentation and detection. We find that the adversarial perturbations can be transferred across networks with different training data, based on different architectures, and even for different recognition tasks. In particular, the transfer ability across networks with the same architecture is more significant than in other cases. Besides, we show that summing up heterogeneous perturbations often leads to better transfer performance, which provides an effective method of black-box adversarial attack.", "Recent research has revealed that the output of Deep Neural Networks (DNN) can be easily altered by adding relatively small perturbations to the input vector. In this paper, we analyze an attack in an extremely limited scenario where only one pixel can be modified. For that we propose a novel method for generating one-pixel adversarial perturbations based on differential evolution(DE). 
It requires less adversarial information(a black-box attack) and can fool more types of networks due to the inherent features of DE. The results show that 68.36 of the natural images in CIFAR-10 test dataset and 41.22 of the ImageNet (ILSVRC 2012) validation images can be perturbed to at least one target class by modifying just one pixel with 73.22 and 5.52 confidence on average. Thus, the proposed attack explores a different take on adversarial machine learning in an extreme limited scenario, showing that current DNNs are also vulnerable to such low dimension attacks. Besides, we also illustrate an important application of DE (or broadly speaking, evolutionary computation) in the domain of adversarial machine learning: creating tools that can effectively generate low-cost adversarial attacks against neural networks for evaluating robustness.", "", "Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from 95 to 0.5 .In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100 probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). 
Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.", "Abstract: Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains of the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extend. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input.", "We investigate adversarial attacks for autoencoders. We propose a procedure that distorts the input image to mislead the autoencoder in reconstructing a completely different target image. We attack the internal latent representations, attempting to make the adversarial input produce an internal representation as similar as possible as the target's. 
We find that autoencoders are much more robust to the attack than classifiers: while some examples have tolerably small input distortion, and reasonable similarity to the target image, there is a quasi-linear trade-off between those aims. We report results on MNIST and SVHN datasets, and also test regular deterministic autoencoders, reaching similar conclusions in all cases. Finally, we show that the usual adversarial attack for classifiers, while being much easier, also presents a direct proportion between distortion on the input, and misdirection on the output. That proportionality however is hidden by the normalization of the output, which maps a linear layer into non-linear probabilities.", "", "We explore methods of producing adversarial examples on deep generative models such as the variational autoencoder (VAE) and the VAE-GAN. Deep learning architectures are known to be vulnerable to adversarial examples, but previous work has focused on the application of adversarial examples to classification tasks. Deep generative models have recently become popular due to their ability to model input data distributions and generate realistic examples from those distributions. We present three classes of attacks on the VAE and VAE-GAN architectures and demonstrate them against networks trained on MNIST, SVHN and CelebA. Our first attack leverages classification-based adversaries by attaching a classifier to the trained encoder of the target generative model, which can then be used to indirectly manipulate the latent representation. Our second attack directly uses the VAE loss function to generate a target reconstruction image from the adversarial example. Our third attack moves beyond relying on classification or the standard loss for the gradient and directly optimizes against differences in source and target latent representations. 
We also motivate why an attacker might be interested in deploying such techniques against a target generative network.", "Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model. Up to now, all previous work has assumed a threat model in which the adversary can feed data directly into the machine learning classifier. This is not always the case for systems operating in the physical world, for example those which are using signals from cameras and other sensors as input. This paper shows that even in such physical world scenarios, machine learning systems are vulnerable to adversarial examples. We demonstrate this by feeding adversarial images obtained from a cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. We find that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera.", "State-of-the-art deep neural networks have achieved impressive results on many image classification tasks. However, these same architectures have been shown to be unstable to small, well sought, perturbations of the images. Despite the importance of this phenomenon, no effective methods have been proposed to accurately compute the robustness of state-of-the-art deep classifiers to such perturbations on large-scale datasets. 
In this paper, we fill this gap and propose the DeepFool algorithm to efficiently compute perturbations that fool deep networks, and thus reliably quantify the robustness of these classifiers. Extensive experimental results show that our approach outperforms recent methods in the task of computing adversarial perturbations and making classifiers more robust.1", "Deep learning takes advantage of large datasets and computationally efficient training algorithms to outperform other approaches at various machine learning tasks. However, imperfections in the training phase of deep neural networks make them vulnerable to adversarial samples: inputs crafted by adversaries with the intent of causing deep neural networks to misclassify. In this work, we formalize the space of adversaries against deep neural networks (DNNs) and introduce a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs. In an application to computer vision, we show that our algorithms can reliably produce samples correctly classified by human subjects but misclassified in specific targets by a DNN with a 97 adversarial success rate while only modifying on average 4.02 of the input features per sample. We then evaluate the vulnerability of different sample classes to adversarial perturbations by defining a hardness measure. Finally, we describe preliminary work outlining defenses against adversarial samples by defining a predictive measure of distance between a benign input and a target classification." ] }
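The record above surveys gradient-based attacks such as the fast gradient sign method. As an illustration only, and not any cited paper's actual model, here is a minimal NumPy sketch of the FGSM idea on a toy logistic classifier; the weights, input, and step size are made-up values:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One untargeted FGSM step for a logistic classifier p = sigmoid(w.x + b).

    Moves x by eps in the direction of the sign of the cross-entropy
    loss gradient with respect to x, which increases the loss.
    """
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # predicted probability of class 1
    grad_x = (p - y) * w                           # d(loss)/dx for cross-entropy loss
    return x + eps * np.sign(grad_x)

# Toy example: a point correctly classified as class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.3)
# The perturbation is bounded in the infinity norm by eps, yet the
# logit (and hence the loss) moves against the true class.
print(np.max(np.abs(x_adv - x)))
```

Iterating this step with a small eps (and clipping to the valid input range) gives the basic iterative variant mentioned in the surrounding text.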
1904.08199
2937263158
Community-based Question and Answering (CQA) platforms are nowadays enlightening over a billion people with crowdsourced knowledge. A key design issue in CQA platforms is how to find potential answerers and provide askers with timely and suitable answers, i.e., the so-called question routing problem. State-of-the-art approaches often rely on extracting topics from the question texts. In this work, we analyze the question routing problem in a CQA system named Farm-Doctor that is exclusively for agricultural knowledge. The major challenge is that its questions contain limited textual information. To this end, we conduct an extensive measurement and obtain the whole knowledge repository of Farm-Doctor, which consists of over 690 thousand questions and over 3 million answers. To remedy the text deficiency, we model Farm-Doctor as a heterogeneous information network that incorporates rich side information, and based on network representation learning models we accurately recommend for each question the users that are highly likely to answer it. With an average income of less than 6 dollars a day, over 300 thousand farmers in China seek agricultural advice online in Farm-Doctor. Our method helps these less eloquent farmers with their cultivation and hopefully provides a way to improve their lives.
CQA platforms have been extensively studied before. Two comprehensive surveys can be found in @cite_13 and @cite_3 . We summarize related work on the main research topics in CQA as follows.
{ "cite_N": [ "@cite_13", "@cite_3" ], "mid": [ "2514077680", "2884757772" ], "abstract": [ "Community question-answering (CQA) systems, such as Yahooe Answers or Stack Overflow, belong to a prominent group of successful and popular Web 2.0 applications, which are used every day by millions of users to find an answer on complex, subjective, or context-dependent questions. In order to obtain answers effectively, CQA systems should optimally harness collective intelligence of the whole online community, which will be impossible without appropriate collaboration support provided by information technologies. Therefore, CQA became an interesting and promising subject of research in computer science and now we can gather the results of 10 years of research. Nevertheless, in spite of the increasing number of publications emerging each year, so far the research on CQA systems has missed a comprehensive state-of-the-art survey. We attempt to fill this gap by a review of 265 articles published between 2005 and 2014, which were selected from major conferences and journals. According to this evaluation, at first we propose a framework that defines descriptive attributes of CQA approaches. Second, we introduce a classification of all approaches with respect to problems they are aimed to solve. The classification is consequently employed in a review of a significant number of representative approaches, which are described by means of attributes from the descriptive framework. As a part of the survey, we also depict the current trends as well as highlight the areas that require further attention from the research community.", "Community question answering (CQA) represents the type of Web applications where people can exchange knowledge via asking and answering questions. 
One significant challenge of most real-world CQA systems is the lack of effective matching between questions and the potential good answerers, which adversely affects the efficient knowledge acquisition and circulation. On the one hand, a requester might experience many low-quality answers without receiving a quality response in a brief time; on the other hand, an answerer might face numerous new questions without being able to identify the questions of interest quickly. Under this situation, expert recommendation emerges as a promising technique to address the above issues. Instead of passively waiting for users to browse and find their questions of interest, an expert recommendation method raises the attention of users to the appropriate questions actively and promptly. The past few years have witnessed considerable efforts that address the expert recommendation problem from different perspectives. These methods all have their issues that need to be resolved before the advantages of expert recommendation can be fully embraced. In this survey, we first present an overview of the research efforts and state-of-the-art techniques for the expert recommendation in CQA. We next summarize and compare the existing methods concerning their advantages and shortcomings, followed by discussing the open issues and future research directions." ] }
1904.08199
2937263158
Community-based Question and Answering (CQA) platforms are nowadays enlightening over a billion people with crowdsourced knowledge. A key design issue in CQA platforms is how to find potential answerers and provide askers with timely and suitable answers, i.e., the so-called question routing problem. State-of-the-art approaches often rely on extracting topics from the question texts. In this work, we analyze the question routing problem in a CQA system named Farm-Doctor that is exclusively for agricultural knowledge. The major challenge is that its questions contain limited textual information. To this end, we conduct an extensive measurement and obtain the whole knowledge repository of Farm-Doctor, which consists of over 690 thousand questions and over 3 million answers. To remedy the text deficiency, we model Farm-Doctor as a heterogeneous information network that incorporates rich side information, and based on network representation learning models we accurately recommend for each question the users that are highly likely to answer it. With an average income of less than 6 dollars a day, over 300 thousand farmers in China seek agricultural advice online in Farm-Doctor. Our method helps these less eloquent farmers with their cultivation and hopefully provides a way to improve their lives.
Question routing recommends potential answerers for each question, which is the focus of this work. Earlier works on question routing are mainly based on the past answering activities of users. Li @cite_17 utilized a query likelihood language model based on answerers' performance profiles to estimate each answerer's expertise, and they further incorporated category information @cite_21 .
{ "cite_N": [ "@cite_21", "@cite_17" ], "mid": [ "2075196988", "2036226015" ], "abstract": [ "This paper investigates a ground-breaking incorporation of question category to Question Routing (QR) in Community Question Answering (CQA) services. The incorporation of question category was designed to estimate answerer expertise for routing questions to potential answerers. Two category-sensitive Language Models (LMs) were developed with large-scale real world data sets being experimented. Results demonstrated that higher accuracies of routing questions with lower computational costs were achieved, relative to traditional Query Likelihood LM (QLLM), state-of-the-art Cluster-Based LM (CBLM) and the mixture of Latent Dirichlet Allocation and QLLM (LDALM).", "Community Question Answering (CQA) service provides a platform for increasing number of users to ask and answer for their own needs but unanswered questions still exist within a fixed period. To address this, the paper aims to route questions to the right answerers who have a top rank in accordance of their previous answering performance. In order to rank the answerers, we propose a framework called Question Routing (QR) which consists of four phases: (1) performance profiling, (2) expertise estimation, (3) availability estimation, and (4) answerer ranking. Applying the framework, we conduct experiments with Yahoo! Answers dataset and the results demonstrate that on average each of 1,713 testing questions obtains at least one answer if it is routed to the top 20 ranked answerers." ] }
1904.08199
2937263158
Community-based Question and Answering (CQA) platforms are nowadays enlightening over a billion people with crowdsourced knowledge. A key design issue in CQA platforms is how to find the potential answerers and to provide the askers timely and suitable answers, i.e., the so-called problem. State-of-art approaches often rely on extracting topics from the question texts. In this work, we analyze the question routing problem in a CQA system named Farm-Doctor that is exclusive for agricultural knowledge. The major challenge is that its questions contain limited textual information. To this end, we conduct an extensive measurement and obtain the whole knowledge repository of Farm-Doctor that consists of over 690 thousand questions and over 3 million answers. To remedy the text deficiency, we model Farm-Doctor as a heterogeneous information network that incorporates rich side information and based on network representation learning models we accurately recommend for each question the users that are highly likely to answer it. With an average income of fewer than 6 dollars a day, over 300 thousands farmers in China seek online in Farm-Doctor for agricultural advices. Our method helps these less eloquent farmers with their cultivation and hopefully provides a way to improve their lives.
A number of studies have addressed the question routing problem as a classification task. Yang @cite_14 proposed a probabilistic Topic Expertise Model that uses tagging information and voting information to learn topical expertise estimation. They also proposed the CQArank model, which combines user topical expertise estimation with user authority derived from link analysis to find experts with both similar topical preference and high topical expertise.
{ "cite_N": [ "@cite_14" ], "mid": [ "2022149269" ], "abstract": [ "Community Question Answering (CQA) websites, where people share expertise on open platforms, have become large repositories of valuable knowledge. To bring the best value out of these knowledge repositories, it is critically important for CQA services to know how to find the right experts, retrieve archived similar questions and recommend best answers to new questions. To tackle this cluster of closely related problems in a principled approach, we proposed Topic Expertise Model (TEM), a novel probabilistic generative model with GMM hybrid, to jointly model topics and expertise by integrating textual content model and link structure analysis. Based on TEM results, we proposed CQARank to measure user interests and expertise score under different topics. Leveraging the question answering history based on long-term community reviews and voting, our method could find experts with both similar topical preference and high topical expertise. Experiments carried out on Stack Overflow data, the largest CQA focused on computer programming, show that our method achieves significant improvement over existing methods on multiple metrics." ] }
1904.08199
2937263158
Community-based Question and Answering (CQA) platforms are nowadays enlightening over a billion people with crowdsourced knowledge. A key design issue in CQA platforms is how to find potential answerers and provide askers with timely and suitable answers, i.e., the so-called question routing problem. State-of-the-art approaches often rely on extracting topics from the question texts. In this work, we analyze the question routing problem in a CQA system named Farm-Doctor that is exclusively for agricultural knowledge. The major challenge is that its questions contain limited textual information. To this end, we conduct an extensive measurement and obtain the whole knowledge repository of Farm-Doctor, which consists of over 690 thousand questions and over 3 million answers. To remedy the text deficiency, we model Farm-Doctor as a heterogeneous information network that incorporates rich side information, and based on network representation learning models we accurately recommend for each question the users that are highly likely to answer it. With an average income of less than 6 dollars a day, over 300 thousand farmers in China seek agricultural advice online in Farm-Doctor. Our method helps these less eloquent farmers with their cultivation and hopefully provides a way to improve their lives.
Ji @cite_0 combined similarity features, translation features, and others, and adopted RankingSVM to rank potential users for a newly posted question. Shen @cite_11 put a similarity matrix constructed from each question-answer pair into a deep architecture model that explicitly captures the word sequence information and the lexical information in matching two structured objects to find potentially suitable answers. Chen @cite_4 proposed a heterogeneous CQA network composed of category-question pairs, question-user pairs, and user-user pairs; then, based on random walks, they learned question features through neural networks and user features through a network representation learning model named DeepWalk @cite_23 . These features are used to calculate the matching score between questions.
{ "cite_N": [ "@cite_0", "@cite_4", "@cite_23", "@cite_11" ], "mid": [ "1976727267", "2558550198", "2154851992", "2264274310" ], "abstract": [ "This paper focuses on the problem of Question Routing (QR) in Community Question Answering (CQA), which aims to route newly posted questions to the potential answerers who are most likely to answer them. Traditional methods to solve this problem only consider the text similarity features between the newly posted question and the user profile, while ignoring the important statistical features, including the question-specific statistical feature and the user-specific statistical features. Moreover, traditional methods are based on unsupervised learning, which is not easy to introduce the rich features into them. This paper proposes a general framework based on the learning to rank concepts for QR. Training sets consist of triples (q, asker, answerers) are first collected. Then, by introducing the intrinsic relationships between the asker and the answerers in each CQA session to capture the intrinsic labels orders of the users about their expertise degree of the question q, two different methods, including the SVM-based and RankingSVM-based methods, are presented to learn the models with different example creation processes from the training set. Finally, the potential answerers are ranked using the trained models. Extensive experiments conducted on a real world CQA dataset from Stack Overflow show that our proposed two methods can both outperform the traditional query likelihood language model (QLLM) as well as the state-of-the-art Latent Dirichlet Allocation based model (LDA). Specifically, the RankingSVM-based method achieves statistical significant improvements over the SVM-based method and has gained the best performance.", "Community based question answering platforms have attracted substantial users to share knowledge and learn from each other. 
As the rapid enlargement of CQA platforms, quantities of overlapped questions emerge, which makes users confounded to select a proper reference. It is urgent for us to take effective automated algorithms to reuse historical questions with corresponding answers. In this paper we focus on the problem with question retrieval, which aims to match historical questions that are relevant or semantically equivalent to resolve one s query directly. The challenges in this task are the lexical gaps between questions for the word ambiguity and word mismatch problem. Furthermore, limited words in queried sentences cause sparsity of word features. To alleviate these challenges, we propose a novel framework named HNIL which encodes not only the question contents but also the askers social interactions to enhance the question embedding performance. More specifically, we apply random walk based learning method with recurrent neural network to match the similarities between askers question and historical questions proposed by other users. Extensive experiments on a large scale dataset from a real world CQA site show that employing the heterogeneous social network information outperforms the other state of the art solutions in this task.", "We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. 
Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10 higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60 less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection.", "Community-based Question Answering (CQA) has become popular in knowledge sharing sites since it allows users to get answers to complex, detailed, and personal questions directly from other users. Large archives of historical questions and associated answers have been accumulated. Retrieving relevant historical answers that best match a question is an essential component of a CQA service. Most state of the art approaches are based on bag-of-words models, which have been proven successful in a range of text matching tasks, but are insufficient for capturing the important word sequence information in short text matching. In this paper, a new architecture is proposed to more effectively model the complicated matching relations between questions and answers. It utilises a similarity matrix which contains both lexical and sequential information. Afterwards the information is put into a deep architecture to find potentially suitable answers. The experimental study shows its potential in improving matching accuracy of question and answer." ] }
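DeepWalk, cited above, treats truncated random walks over the network as "sentences" for a skip-gram model. Here is a minimal sketch of the walk-generation stage only; the toy graph and parameters are made up, and the subsequent skip-gram training step is omitted:

```python
import random

def random_walks(adj, num_walks, walk_len, seed=0):
    """Generate truncated random walks over an adjacency dict.

    Each walk is a list of node ids; DeepWalk would feed these walks
    to a skip-gram model exactly like sentences of words.
    """
    rng = random.Random(seed)
    walks = []
    nodes = list(adj)
    for _ in range(num_walks):
        rng.shuffle(nodes)          # start one walk from every node, in random order
        for start in nodes:
            walk = [start]
            while len(walk) < walk_len:
                neighbors = adj[walk[-1]]
                if not neighbors:   # dead end: truncate the walk early
                    break
                walk.append(rng.choice(neighbors))
            walks.append(walk)
    return walks

# Tiny toy graph: two triangles joined by one bridge edge (2-3).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
walks = random_walks(adj, num_walks=2, walk_len=5)
print(len(walks), len(walks[0]))
```

In a heterogeneous CQA network, the node set would mix users, questions, and categories, and the resulting walk corpus would train the embedding model that produces the user and question features mentioned above.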
1904.08386
2939911796
Literary critics often attempt to uncover meaning in a single work of literature through careful reading and analysis. Applying natural language processing methods to aid in such literary analyses remains a challenge in digital humanities. While most previous work focuses on "distant reading" by algorithmically discovering high-level patterns from large collections of literary works, here we sharpen the focus of our methods to a single literary theory about Italo Calvino's postmodern novel Invisible Cities, which consists of 55 short descriptions of imaginary cities. Calvino has provided a classification of these cities into eleven thematic groups, but literary scholars disagree as to how trustworthy his categorization is. Due to the unique structure of this novel, we can computationally weigh in on this debate: we leverage pretrained contextualized representations to embed each city's description and use unsupervised methods to cluster these embeddings. Additionally, we compare results of our computational approach to similarity judgments generated by human readers. Our work is a first step towards incorporating natural language processing into literary criticism.
Most previous work within the NLP community applies distant reading @cite_1 to large collections of books, focusing on modeling different aspects of narratives such as plots and event sequences, characters, and narrative similarity. In the same vein, researchers in computational literary analysis have combined statistical techniques and linguistic theories to perform quantitative analysis on large narrative texts, but these attempts largely rely on techniques such as word counting, topic modeling, and naive Bayes classifiers and are therefore not able to capture the meaning of sentences or paragraphs. While these works discover general patterns from multiple literary works, we are the first to use cutting-edge NLP techniques to engage with specific literary criticism about a single narrative.
{ "cite_N": [ "@cite_1" ], "mid": [ "561535798" ], "abstract": [ "In this volume, Matthew L. Jockers introduces readers to large-scale literary computing and the revolutionary potential of macroanalysis--a new approach to the study of the literary record designed for probing the digital-textual world as it exists today, in digital form and in large quantities. Using computational analysis to retrieve key words, phrases, and linguistic patterns across thousands of texts in digital libraries, researchers can draw conclusions based on quantifiable evidence regarding how literary trends are employed over time, across periods, within regions, or within demographic groups, as well as how cultural, historical, and societal linkages may bind individual authors, texts, and genres into an aggregate literary culture. Moving beyond the limitations of literary interpretation based on the \"close-reading\" of individual works, Jockers describes how this new method of studying large collections of digital material can help us to better understand and contextualize the individual works within those collections." ] }
1904.08103
2939696875
In this paper, we propose an efficient multi-scale geometric consistency guided multi-view stereo method for accurate and complete depth map estimation. We first present our basic multi-view stereo method with Adaptive Checkerboard sampling and Multi-Hypothesis joint view selection (ACMH). It leverages structured region information to sample better candidate hypotheses for propagation and infer the aggregation view subset at each pixel. For the depth estimation of low-textured areas, we further propose to combine ACMH with multi-scale geometric consistency guidance (ACMM) to obtain the reliable depth estimates for low-textured areas at coarser scales and guarantee that they can be propagated to finer scales. To correct the erroneous estimates propagated from the coarser scales, we present a novel detail restorer. Experiments on extensive datasets show our method achieves state-of-the-art performance, recovering the depth estimation not only in low-textured areas but also in details.
According to @cite_13 , MVS methods can be categorized into four groups: voxel-based methods @cite_0 @cite_20 @cite_5 , surface evolution based methods @cite_3 @cite_26 @cite_6 , patch-based methods @cite_14 @cite_11 @cite_32 , and depth map based methods @cite_1 @cite_17 @cite_24 . The voxel-based methods are often constrained by their predefined voxel grid resolution. The surface evolution based methods depend on a good initial solution. As for the patch-based methods, their dependence on matched keypoints impairs the completeness of 3D models. The depth map based methods require estimating depth maps for all images and then fusing them into a unified 3D scene representation. A more detailed overview of MVS methods is presented in @cite_13 @cite_31 . Our method belongs to the last category, and we only discuss the related PatchMatch Stereo approaches.
{ "cite_N": [ "@cite_26", "@cite_14", "@cite_1", "@cite_32", "@cite_3", "@cite_6", "@cite_0", "@cite_17", "@cite_24", "@cite_5", "@cite_31", "@cite_13", "@cite_20", "@cite_11" ], "mid": [ "2111240761", "2156116778", "2112174119", "2129404737", "2167085613", "2141690762", "2576483540", "2205172244", "2519683295", "2163309730", "2257063750", "2160014001", "2119213281", "2108417695" ], "abstract": [ "Boosted by the Middlebury challenge, the precision of dense multi-view stereovision methods has increased drastically in the past few years. Yet, most methods, although they perform well on this benchmark, are still inapplicable to large-scale data sets taken under uncontrolled conditions. In this paper, we propose a multi-view stereo pipeline able to deal at the same time with very large scenes while still producing highly detailed reconstructions within very reasonable time. The keys to these benefits are twofold: (i) a minimum s-t cut based global optimization that transforms a dense point cloud into a visibility consistent mesh, followed by (ii) a mesh-based variational refinement that captures small details, smartly handling photo-consistency, regularization and adaptive resolution. Our method has been tested on numerous large-scale outdoor scenes. The accuracy of our reconstructions is also measured on the recent dense multi-view benchmark proposed by , showing our results to compare more than favorably with the current state-of-the-art.", "We present a multi-view stereo algorithm that addresses the extreme changes in lighting, scale, clutter, and other effects in large online community photo collections. Our idea is to intelligently choose images to match, both at a per-view and per-pixel level. We show that such adaptive view selection enables robust performance even with dramatic appearance variability. The stereo matching technique takes as input sparse 3D points reconstructed from structure-from-motion methods and iteratively grows surfaces from these points. 
Optimizing for surface normals within a photoconsistency measure significantly improves the matching results. While the focus of our approach is to estimate high-quality depth maps, we also show examples of merging the resulting depth maps into compelling scene reconstructions. We demonstrate our algorithm on standard multi-view stereo datasets and on casually acquired photo collections of famous scenes gathered from the Internet.", "We propose a multi-view depthmap estimation approach aimed at adaptively ascertaining the pixel level data associations between a reference image and all the elements of a source image set. Namely, we address the question, what aggregation subset of the source image set should we use to estimate the depth of a particular pixel in the reference image? We pose the problem within a probabilistic framework that jointly models pixel-level view selection and depthmap estimation given the local pairwise image photoconsistency. The corresponding graphical model is solved by EM-based view selection probability inference and PatchMatch-like depth sampling and propagation. Experimental results on standard multi-view benchmarks convey the state-of-the art estimation accuracy afforded by mitigating spurious pixel level data associations. Additionally, experiments on large Internet crowd sourced data demonstrate the robustness of our approach against unstructured and heterogeneous image capture characteristics. Moreover, the linear computational and storage requirements of our formulation, as well as its inherent parallelism, enables an efficient and scalable GPU-based implementation.", "This paper proposes a novel algorithm for multiview stereopsis that outputs a dense set of small rectangular patches covering the surfaces visible in the images. 
Stereopsis is implemented as a match, expand, and filter procedure, starting from a sparse set of matched keypoints, and repeatedly expanding these before using visibility constraints to filter away false matches. The keys to the performance of the proposed algorithm are effective techniques for enforcing local photometric consistency and global visibility constraints. Simple but effective methods are also proposed to turn the resulting patch model into a mesh which can be further refined by an algorithm that enforces both photometric consistency and regularization constraints. The proposed approach automatically detects and discards outliers and obstacles and does not require any initialization in the form of a visual hull, a bounding box, or valid depth ranges. We have tested our algorithm on various data sets including objects with fine surface details, deep concavities, and thin structures, outdoor scenes observed from a restricted set of viewpoints, and \"crowded\" scenes where moving obstacles appear in front of a static structure of interest. A quantitative evaluation on the Middlebury benchmark [1] shows that the proposed method outperforms all others submitted so far for four out of the six data sets.", "In this paper, we present a new approach to high quality 3D object reconstruction. Starting from a calibrated sequence of color images, the algorithm is able to reconstruct both the 3D geometry and the texture. The core of the method is based on a deformable model, which defines the framework where texture and silhouette information can be fused. This is achieved by defining two external forces based on the images: a texture driven force and a silhouette driven force. The texture force is computed in two steps: a multi-stereo correlation voting approach and a gradient vector flow diffusion. Due to the high resolution of the voting approach, a multi-grid version of the gradient vector flow has been developed. 
Concerning the silhouette force, a new formulation of the silhouette constraint is derived. It provides a robust way to integrate the silhouettes in the evolution algorithm. As a consequence, we are able to recover the contour generators of the model at the end of the iteration process. Finally, a texture map is computed from the original images for the reconstructed 3D model.", "We propose a convex formulation for silhouette and stereo fusion in 3D reconstruction from multiple images. The key idea is to show that the reconstruction problem can be cast as one of minimizing a convex functional, where the exact silhouette consistency is imposed as convex constraints that restrict the domain of feasible functions. As a consequence, we can retain the original stereo-weighted surface area as a cost functional without heuristic modifications of this energy by balloon terms or other strategies, yet still obtain meaningful (nonempty) reconstructions which are guaranteed to be silhouette-consistent. We prove that the proposed convex relaxation approach provides solutions that lie within a bound of the optimal solution. Compared to existing alternatives, the proposed method does not depend on initialization and leads to a simpler and more robust numerical scheme for imposing silhouette consistency obtained by projection onto convex sets. We show that this projection can be solved exactly using an efficient algorithm. We propose a parallel implementation of the resulting convex optimization problem on a graphics card. Given a photoconsistency map and a set of image silhouettes, we are able to compute highly accurate and silhouette-consistent reconstructions for challenging real-world data sets. In particular, experimental results demonstrate that the proposed silhouette constraints help to preserve fine-scale details of the reconstructed shape. 
Computation times depend on the resolution of the input imagery and vary between a few seconds and a couple of minutes for all experiments in this paper.", "We present a novel geometric approach for solving the stereo problem for an arbitrary number of images (greater than or equal to 2). It is based upon the definition of a variational principle that must be satisfied by the surfaces of the objects in the scene and their images. The Euler-Lagrange equations which are deduced from the variational principle provide a set of PDE's which are used to deform an initial set of surfaces which then move towards the objects to be detected. The level set implementation of these PDE's potentially provides an efficient and robust way of achieving the surface evolution and to deal automatically with changes in the surface topology during the deformation, i.e. to deal with multiple objects. Results of a two dimensional implementation of our theory are presented on synthetic and real images.", "We present a new, massively parallel method for high-quality multiview matching. Our work builds on the Patchmatch idea: starting from randomly generated 3D planes in scene space, the best-fitting planes are iteratively propagated and refined to obtain a 3D depth and normal field per view, such that a robust photo-consistency measure over all images is maximized. Our main novelties are on the one hand to formulate Patchmatch in scene space, which makes it possible to aggregate image similarity across multiple views and obtain more accurate depth maps. And on the other hand a modified, diffusion-like propagation scheme that can be massively parallelized and delivers dense multiview correspondence over ten 1.9-Megapixel images in 3 seconds, on a consumer-grade GPU. 
Our method uses a slanted support window and thus has no fronto-parallel bias, it is completely local and parallel, such that computation time scales linearly with image size, and inversely proportional to the number of parallel threads. Furthermore, it has low memory footprint (four values per pixel, independent of the depth range). It therefore scales exceptionally well and can handle multiple large images at high depth resolution. Experiments on the DTU and Middlebury multiview datasets as well as oblique aerial images show that our method achieves very competitive results with high accuracy and completeness, across a range of different scenarios.", "This work presents a Multi-View Stereo system for robust and efficient dense modeling from unstructured image collections. Our core contributions are the joint estimation of depth and normal information, pixelwise view selection using photometric and geometric priors, and a multi-view geometric consistency term for the simultaneous refinement and image-based depth and normal fusion. Experiments on benchmarks and large-scale Internet photo collections demonstrate state-of-the-art performance in terms of accuracy, completeness, and efficiency.", "We formulate multi-view 3D shape reconstruction as the computation of a minimum cut on the dual graph of a semi- regular, multi-resolution, tetrahedral mesh. Our method does not assume that the surface lies within a finite band around the visual hull or any other base surface. Instead, it uses photo-consistency to guide the adaptive subdivision of a coarse mesh of the bounding volume. This generates a multi-resolution volumetric mesh that is densely tesselated in the parts likely to contain the unknown surface. The graph-cut on the dual graph of this tetrahedral mesh produces a minimum cut corresponding to a triangulated surface that minimizes a global surface cost functional. 
Our method makes no assumptions about topology and can recover deep concavities when enough cameras observe them. Our formulation also allows silhouette constraints to be enforced during the graph-cut step to counter its inherent bias for producing minimal surfaces. Local shape refinement via surface deformation is used to recover details in the reconstructed surface. Reconstructions of the Multi- View Stereo Evaluation benchmark datasets and other real datasets show the effectiveness of our method.", "This tutorial presents a hands-on view of the field of multi-view stereo with a focus on practical algorithms. Multi-view stereo algorithms are able to construct highly detailed 3D models from images alone. They take a possibly very large set of images and construct a 3D plausible geometry that explains the images under some reasonable assumptions, the most important being scene rigidity. The tutorial frames the multiview stereo problem as an image geometry consistency optimization problem. It describes in detail its main two ingredients: robust implementations of photometric consistency measures, and efficient optimization algorithms. It then presents how these main ingredients are used by some of the most successful algorithms, applied into real applications, and deployed as products in the industry. Finally it describes more advanced approaches exploiting domain-specific knowledge such as structural priors, and gives an overview of the remaining challenges and future research directions.", "This paper presents a quantitative comparison of several multi-view stereo reconstruction algorithms. Until now, the lack of suitable calibrated multi-view image datasets with known ground truth (3D shape models) has prevented such direct comparisons. In this paper, we first survey multi-view stereo algorithms and compare them qualitatively using a taxonomy that differentiates their key properties. 
We then describe our process for acquiring and calibrating multiview image datasets with high-accuracy ground truth and introduce our evaluation methodology. Finally, we present the results of our quantitative comparison of state-of-the-art multi-view stereo reconstruction algorithms on six benchmark datasets. The datasets, evaluation details, and instructions for submitting new models are available online at http: vision.middlebury.edu mview.", "This paper presents a volumetric formulation for the multiview stereo problem which is amenable to a computationally tractable global optimization using Graph-cuts. Our approach is to seek the optimal partitioning of 3D space into two regions labeled as \"object\" and \"empty\" under a cost functional consisting of the following two terms: 1) A term that forces the boundary between the two regions to pass through photo-consistent locations; and 2) a ballooning term that inflates the \"object\" region. To take account of the effect of occlusion on the first term, we use an occlusion robust photo-consistency metric based on normalized cross correlation, which does not assume any geometric knowledge about the reconstructed object. The globally optimal 3D partitioning can be obtained as the minimum cut solution of a weighted graph.", "This paper proposes a quasi-dense approach to 3D surface model acquisition from uncalibrated images. First, correspondence information and geometry are computed based on new quasi-dense point features that are resampled subpixel points from a disparity map. The quasi-dense approach gives more robust and accurate geometry estimations than the standard sparse approach. The robustness is measured as the success rate of full automatic geometry estimation with all involved parameters fixed. The accuracy is measured by a fast gauge-free uncertainty estimation algorithm. 
The quasi-dense approach also works for more largely separated images than the sparse approach, therefore, it requires fewer images for modeling. More importantly, the quasi-dense approach delivers a high density of reconstructed 3D points on which a surface representation can be reconstructed. This fills the gap of insufficiency of the sparse approach for surface reconstruction, essential for modeling and visualization applications. Second, surface reconstruction methods from the given quasi-dense geometry are also developed. The algorithm optimizes new unified functionals integrating both 3D quasi-dense points and 2D image information, including silhouettes. Combining both 3D data and 2D images is more robust than the existing methods using only 2D information or only 3D data. An efficient bounded regularization method is proposed to implement the surface evolution by level-set methods. Its properties are discussed and proven for some cases. As a whole, a complete automatic and practical system of 3D modeling from raw images captured by hand-held cameras to surface representation is proposed. Extensive experiments demonstrate the superior performance of the quasi-dense approach with respect to the standard sparse approach in robustness, accuracy, and applicability." ] }
1904.08103
2939696875
In this paper, we propose an efficient multi-scale geometric consistency guided multi-view stereo method for accurate and complete depth map estimation. We first present our basic multi-view stereo method with Adaptive Checkerboard sampling and Multi-Hypothesis joint view selection (ACMH). It leverages structured region information to sample better candidate hypotheses for propagation and infer the aggregation view subset at each pixel. For the depth estimation of low-textured areas, we further propose to combine ACMH with multi-scale geometric consistency guidance (ACMM) to obtain the reliable depth estimates for low-textured areas at coarser scales and guarantee that they can be propagated to finer scales. To correct the erroneous estimates propagated from the coarser scales, we present a novel detail restorer. Experiments on extensive datasets show our method achieves state-of-the-art performance, recovering the depth estimation not only in low-textured areas but also in details.
In terms of efficiency, @cite_9 @cite_12 @cite_1 @cite_24 adopt a sequential propagation scheme: they alternately perform upward/downward propagation in odd iteration steps and leftward/rightward propagation in even steps. To increase parallelism, @cite_12 selects an eighth of the image height (width) as the length of each scanline in the vertical (horizontal) propagation. However, the parallelism of sequential propagation remains proportional to the number of rows or columns of the image. Galliani et al. @cite_17 instead propose to leverage a checkerboard pattern to perform a diffusion-like propagation scheme, which allows the status of half of the pixels in an image to be updated simultaneously. However, they ignore the fact that good hypotheses should have priority in propagation.
{ "cite_N": [ "@cite_9", "@cite_1", "@cite_24", "@cite_12", "@cite_17" ], "mid": [ "1955069752", "2112174119", "2519683295", "2039286679", "2205172244" ], "abstract": [ "We present a Multi View Stereo approach for huge unstructured image datasets that can deal with large variations in surface sampling rate of single images. Our method reconstructs surface parts always in the best available resolution. It considers scaling not only for large scale differences, but also between arbitrary small ones for a weighted merging of the best partial reconstructions. We create depth maps with our GPU based depth map algorithm, that also performs normal optimization. It matches several images that are found with a heuristic image selection method, to a reference image. We remove outliers by comparing depth maps against each other with a fast but reliable GPU approach. Then, we merge the different reconstructions from depth maps in 3D space by selecting the best points and optimizing them with not selected points. Finally, we create the surface by using a Delaunay graph cut.", "We propose a multi-view depthmap estimation approach aimed at adaptively ascertaining the pixel level data associations between a reference image and all the elements of a source image set. Namely, we address the question, what aggregation subset of the source image set should we use to estimate the depth of a particular pixel in the reference image? We pose the problem within a probabilistic framework that jointly models pixel-level view selection and depthmap estimation given the local pairwise image photoconsistency. The corresponding graphical model is solved by EM-based view selection probability inference and PatchMatch-like depth sampling and propagation. Experimental results on standard multi-view benchmarks convey the state-of-the art estimation accuracy afforded by mitigating spurious pixel level data associations. 
Additionally, experiments on large Internet crowd sourced data demonstrate the robustness of our approach against unstructured and heterogeneous image capture characteristics. Moreover, the linear computational and storage requirements of our formulation, as well as its inherent parallelism, enables an efficient and scalable GPU-based implementation.", "This work presents a Multi-View Stereo system for robust and efficient dense modeling from unstructured image collections. Our core contributions are the joint estimation of depth and normal information, pixelwise view selection using photometric and geometric priors, and a multi-view geometric consistency term for the simultaneous refinement and image-based depth and normal fusion. Experiments on benchmarks and large-scale Internet photo collections demonstrate state-of-the-art performance in terms of accuracy, completeness, and efficiency.", "[abstract not recovered: extraction yielded only table fragments (Mean Rel. Error of LP+HF+CVF on pixels of other methods)]", "We present a new, massively parallel method for high-quality multiview matching. Our work builds on the Patchmatch idea: starting from randomly generated 3D planes in scene space, the best-fitting planes are iteratively propagated and refined to obtain a 3D depth and normal field per view, such that a robust photo-consistency measure over all images is maximized. Our main novelties are on the one hand to formulate Patchmatch in scene space, which makes it possible to aggregate image similarity across multiple views and obtain more accurate depth maps. And on the other hand a modified, diffusion-like propagation scheme that can be massively parallelized and delivers dense multiview correspondence over ten 1.9-Megapixel images in 3 seconds, on a consumer-grade GPU. 
Our method uses a slanted support window and thus has no fronto-parallel bias, it is completely local and parallel, such that computation time scales linearly with image size, and inversely proportional to the number of parallel threads. Furthermore, it has low memory footprint (four values per pixel, independent of the depth range). It therefore scales exceptionally well and can handle multiple large images at high depth resolution. Experiments on the DTU and Middlebury multiview datasets as well as oblique aerial images show that our method achieves very competitive results with high accuracy and completeness, across a range of different scenarios." ] }
1904.08103
2939696875
In this paper, we propose an efficient multi-scale geometric consistency guided multi-view stereo method for accurate and complete depth map estimation. We first present our basic multi-view stereo method with Adaptive Checkerboard sampling and Multi-Hypothesis joint view selection (ACMH). It leverages structured region information to sample better candidate hypotheses for propagation and infer the aggregation view subset at each pixel. For the depth estimation of low-textured areas, we further propose to combine ACMH with multi-scale geometric consistency guidance (ACMM) to obtain the reliable depth estimates for low-textured areas at coarser scales and guarantee that they can be propagated to finer scales. To correct the erroneous estimates propagated from the coarser scales, we present a novel detail restorer. Experiments on extensive datasets show our method achieves state-of-the-art performance, recovering the depth estimation not only in low-textured areas but also in details.
Building on the above propagation strategies, many view selection schemes have been proposed to tackle noise in the propagation process. In the diffusion-like propagation scheme, @cite_17 selects fixed @math views with the minimal @math matching costs. However, this leads to a bias due to different aggregation subsets for different hypotheses. In the sequential propagation, @cite_9 @cite_12 also ignore pixelwise view selection, demanding only global view angles. To incorporate only useful neighboring views at each pixel, Zheng et al. @cite_1 first try to construct a probabilistic graphical model to jointly estimate depth maps and view selection. Further, Schönberger et al. @cite_24 introduce geometric priors and temporal smoothness to better depict the state-transition probability. However, this sequential inference must condition on the status of previously visited pixels at the current state, and it remains sensitive to noise in low-textured areas.
{ "cite_N": [ "@cite_9", "@cite_1", "@cite_24", "@cite_12", "@cite_17" ], "mid": [ "1955069752", "2112174119", "2519683295", "2039286679", "2205172244" ], "abstract": [ "We present a Multi View Stereo approach for huge unstructured image datasets that can deal with large variations in surface sampling rate of single images. Our method reconstructs surface parts always in the best available resolution. It considers scaling not only for large scale differences, but also between arbitrary small ones for a weighted merging of the best partial reconstructions. We create depth maps with our GPU based depth map algorithm, that also performs normal optimization. It matches several images that are found with a heuristic image selection method, to a reference image. We remove outliers by comparing depth maps against each other with a fast but reliable GPU approach. Then, we merge the different reconstructions from depth maps in 3D space by selecting the best points and optimizing them with not selected points. Finally, we create the surface by using a Delaunay graph cut.", "We propose a multi-view depthmap estimation approach aimed at adaptively ascertaining the pixel level data associations between a reference image and all the elements of a source image set. Namely, we address the question, what aggregation subset of the source image set should we use to estimate the depth of a particular pixel in the reference image? We pose the problem within a probabilistic framework that jointly models pixel-level view selection and depthmap estimation given the local pairwise image photoconsistency. The corresponding graphical model is solved by EM-based view selection probability inference and PatchMatch-like depth sampling and propagation. Experimental results on standard multi-view benchmarks convey the state-of-the art estimation accuracy afforded by mitigating spurious pixel level data associations. 
Additionally, experiments on large Internet crowd sourced data demonstrate the robustness of our approach against unstructured and heterogeneous image capture characteristics. Moreover, the linear computational and storage requirements of our formulation, as well as its inherent parallelism, enables an efficient and scalable GPU-based implementation.", "This work presents a Multi-View Stereo system for robust and efficient dense modeling from unstructured image collections. Our core contributions are the joint estimation of depth and normal information, pixelwise view selection using photometric and geometric priors, and a multi-view geometric consistency term for the simultaneous refinement and image-based depth and normal fusion. Experiments on benchmarks and large-scale Internet photo collections demonstrate state-of-the-art performance in terms of accuracy, completeness, and efficiency.", "[abstract not recovered: extraction yielded only table fragments (Mean Rel. Error of LP+HF+CVF on pixels of other methods)]", "We present a new, massively parallel method for high-quality multiview matching. Our work builds on the Patchmatch idea: starting from randomly generated 3D planes in scene space, the best-fitting planes are iteratively propagated and refined to obtain a 3D depth and normal field per view, such that a robust photo-consistency measure over all images is maximized. Our main novelties are on the one hand to formulate Patchmatch in scene space, which makes it possible to aggregate image similarity across multiple views and obtain more accurate depth maps. And on the other hand a modified, diffusion-like propagation scheme that can be massively parallelized and delivers dense multiview correspondence over ten 1.9-Megapixel images in 3 seconds, on a consumer-grade GPU. 
Our method uses a slanted support window and thus has no fronto-parallel bias, it is completely local and parallel, such that computation time scales linearly with image size, and inversely proportional to the number of parallel threads. Furthermore, it has low memory footprint (four values per pixel, independent of the depth range). It therefore scales exceptionally well and can handle multiple large images at high depth resolution. Experiments on the DTU and Middlebury multiview datasets as well as oblique aerial images show that our method achieves very competitive results with high accuracy and completeness, across a range of different scenarios." ] }
1904.08103
2939696875
In this paper, we propose an efficient multi-scale geometric consistency guided multi-view stereo method for accurate and complete depth map estimation. We first present our basic multi-view stereo method with Adaptive Checkerboard sampling and Multi-Hypothesis joint view selection (ACMH). It leverages structured region information to sample better candidate hypotheses for propagation and infer the aggregation view subset at each pixel. For the depth estimation of low-textured areas, we further propose to combine ACMH with multi-scale geometric consistency guidance (ACMM) to obtain the reliable depth estimates for low-textured areas at coarser scales and guarantee that they can be propagated to finer scales. To correct the erroneous estimates propagated from the coarser scales, we present a novel detail restorer. Experiments on extensive datasets show our method achieves state-of-the-art performance, recovering the depth estimation not only in low-textured areas but also in details.
Although some methods focus on view selection to improve local smoothness and gain some benefit, they are still restricted by the patch window size. To perceive more useful information in low-textured areas, Wei et al. @cite_12 adopt multi-scale patch matching with variance-based consistency. However, this consistency is too strong to spread reliable estimates that are visible in only a few neighboring views across multiple views. Moreover, they overlook errors in details.
{ "cite_N": [ "@cite_12" ], "mid": [ "2039286679" ], "abstract": [ "" ] }
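The adaptive checkerboard sampling in ACMH builds on red-black PatchMatch propagation, in which pixels of one checkerboard colour adopt the best depth hypothesis among their (opposite-colour) neighbours. The following is a toy sketch under simplifying assumptions: a dense per-pixel `cost_fn` standing in for photoconsistency and a plain 4-neighbour candidate set, not the paper's structured sampler.

```python
import numpy as np

def checkerboard_propagate(depth, cost_fn, iters=2):
    """Red-black PatchMatch-style propagation (toy sketch).

    Each pass updates one checkerboard colour: a pixel keeps its own
    hypothesis or adopts a 4-neighbour's depth, whichever has the lowest
    matching cost.  cost_fn(y, x, d) stands in for photoconsistency.
    """
    h, w = depth.shape
    for _ in range(iters):
        for colour in (0, 1):  # red pixels first, then black pixels
            for y in range(h):
                for x in range(w):
                    if (y + x) % 2 != colour:
                        continue
                    # candidate hypotheses: current depth + 4-neighbours
                    cands = [depth[y, x]]
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            cands.append(depth[ny, nx])
                    depth[y, x] = min(cands, key=lambda d: cost_fn(y, x, d))
    return depth
```

Because every neighbour of a red pixel is black, all pixels of one colour can be updated in parallel, which is what makes the GPU formulations described above scale.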
1904.08398
2939507640
We present, to our knowledge, the first application of BERT to document classification. A few characteristics of the task might lead one to think that BERT is not the most appropriate model: syntactic structures matter less for content categories, documents can often be longer than typical BERT input, and documents often have multiple labels. Nevertheless, we show that a straightforward classification model using BERT is able to achieve the state of the art across four popular datasets. To address the computational expense associated with BERT inference, we distill knowledge from BERT-large to small bidirectional LSTMs, reaching BERT-base parity on multiple datasets using 30x fewer parameters. The primary contribution of our paper is improved baselines that can provide the foundation for future work.
Over the last few years, neural network-based architectures have achieved state of the art for document classification. develop XMLCNN for addressing this problem's multi-label nature, which they call extreme classification. XMLCNN is based on the popular KimCNN @cite_0 , except with wider convolutional filters, adaptive dynamic max-pooling @cite_7 @cite_5 , and an additional bottleneck layer to better capture the features of large documents. Another popular model, the Hierarchical Attention Network (HAN; yang2016hierarchical ), explicitly models hierarchical information from documents to extract meaningful features, incorporating word- and sentence-level encoders (with attention) to classify documents. propose a generative approach for multi-label document classification, using encoder--decoder sequence generation models (SGMs) for generating labels for each document. Contrary to the previous papers, propose a properly regularized single-layer BiLSTM, which represents the current state of the art.
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_7" ], "mid": [ "1832693441", "", "2250999640" ], "abstract": [ "We report on a series of experiments with convolutional neural networks (CNN) trained on top of pre-trained word vectors for sentence-level classification tasks. We show that a simple CNN with little hyperparameter tuning and static vectors achieves excellent results on multiple benchmarks. Learning task-specific vectors through fine-tuning offers further gains in performance. We additionally propose a simple modification to the architecture to allow for the use of both task-specific and static vectors. The CNN models discussed herein improve upon the state of the art on 4 out of 7 tasks, which include sentiment analysis and question classification.", "", "Traditional approaches to the task of ACE event extraction primarily rely on elaborately designed features and complicated natural language processing (NLP) tools. These traditional approaches lack generalization, take a large amount of human effort and are prone to error propagation and data sparsity problems. This paper proposes a novel event-extraction method, which aims to automatically extract lexical-level and sentence-level features without using complicated NLP tools. We introduce a word-representation model to capture meaningful semantic regularities for words and adopt a framework based on a convolutional neural network (CNN) to capture sentence-level clues. However, CNN can only capture the most important information in a sentence and may miss valuable facts when considering multiple-event sentences. We propose a dynamic multi-pooling convolutional neural network (DMCNN), which uses a dynamic multi-pooling layer according to event triggers and arguments, to reserve more crucial information. The experimental results show that our approach significantly outperforms other state-of-the-art methods." ] }
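The distillation from BERT-large into small BiLSTMs mentioned in the abstract can be sketched with a generic soft-target knowledge-distillation loss in the style of Hinton et al.; this is a minimal NumPy version, and the temperature `T` and the KL form are illustrative assumptions rather than the paper's exact objective.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Mean KL divergence between teacher and student soft targets.

    Softening both distributions with temperature T and rescaling by T^2
    keeps gradient magnitudes comparable to the hard-label loss.
    """
    p = softmax(teacher_logits, T)   # teacher soft targets
    q = softmax(student_logits, T)   # student predictions
    return float(T * T * np.sum(p * (np.log(p) - np.log(q))) / p.shape[0])
```

In practice this term is combined with the usual cross-entropy on gold labels; the loss is zero exactly when the student reproduces the teacher's logits.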
1904.08112
2939662953
A locally decodable code (LDC) C: {0,1}^k → {0,1}^n is an error correcting code wherein individual bits of the message can be recovered by only querying a few bits of a noisy codeword. LDCs found a myriad of applications both in theory and in practice, ranging from probabilistically checkable proofs to distributed storage. However, despite nearly two decades of extensive study, the best known constructions of O(1)-query LDCs have super-polynomial blocklength. The notion of relaxed LDCs is a natural relaxation of LDCs, which aims to bypass the foregoing barrier by requiring local decoding of nearly all individual message bits, yet allowing decoding failure (but not error) on the rest. State-of-the-art constructions of O(1)-query relaxed LDCs achieve blocklength n = O(k^{1+γ}) for an arbitrarily small constant γ. We prove a lower bound which shows that O(1)-query relaxed LDCs cannot achieve blocklength n = k^{1+o(1)}. This resolves an open problem raised by Goldreich in 2004.
There is an extensive literature concerned with lower bounds on (non-relaxed) locally decodable codes in various regimes (see, e.g., @cite_7 @cite_9 @cite_10 @cite_11 @cite_27 @cite_32 @cite_26 ), as well as for the closely related notion of locally correctable codes (see, e.g., @cite_6 @cite_20 @cite_5 @cite_3 ), in which the goal is to correct a bit of the codeword rather than a bit of the message. We stress that none of the aforementioned bounds apply to relaxed LDCs (see discussion in sec:challange ).
{ "cite_N": [ "@cite_26", "@cite_7", "@cite_9", "@cite_32", "@cite_6", "@cite_3", "@cite_27", "@cite_5", "@cite_10", "@cite_20", "@cite_11" ], "mid": [ "2144581009", "1967703498", "2107342895", "2120282110", "2145012723", "2786800728", "2770063945", "2550164695", "2150884088", "2010510058", "1555149635" ], "abstract": [ "A linear (q, δ, ϵ, m(n))-locally decodable code (LDC) C : F^n → F^{m(n)} is a linear transformation from the vector space F^n to the space F^{m(n)} for which each message symbol x_i can be recovered with probability at least 1/|F| + ϵ from C(x) by a randomized algorithm that queries only q positions of C(x), even if up to δm(n) positions of C(x) are corrupted. In a recent work of Dvir, the author shows that lower bounds for linear LDCs can imply lower bounds for arithmetic circuits. He suggests that proving lower bounds for LDCs over the complex or real field is a good starting point for approaching one of his conjectures. Our main result is an m(n) = Ω(n^2) lower bound for linear 3-query LDCs over any, possibly infinite, field. The constant in the Ω(·) depends only on ϵ and δ. This is the first lower bound better than the trivial m(n) = Ω(n) for arbitrary fields and more than two queries.", "", "An error-correcting code is said to be locally decodable if a randomized algorithm can recover any single bit of a message by reading only a small number of symbols of a possibly corrupted encoding of the message. Katz and Trevisan (2000) showed that any such code C: {0,1}^n → Σ^m with a decoding algorithm that makes at most q probes must satisfy m = Ω((n/log|Σ|)^{q/(q-1)}). They assumed that the decoding algorithm is non-adaptive, and left open the question of proving similar bounds for adaptive decoders. We improve the results of Katz and Trevisan (2000) in two ways. First, we give a more direct proof of their result. 
Second, and this is our main result, we prove that m = Ω((n/log|Σ|)^{q/(q-1)}) even if the decoding algorithm is adaptive. An important ingredient of our proof is a randomized method for smoothing an adaptive decoding algorithm. The main technical tool we employ is the Second Moment Method.", "We prove new lower bounds for locally decodable codes and private information retrieval. We show that a 2-query LDC encoding n-bit strings over an l-bit alphabet, where the decoder only uses b bits of each queried position, needs code length @math Similarly, a 2-server PIR scheme with an n-bit database and t-bit queries, where the user only needs b bits from each of the two l-bit answers, unknown to the servers, satisfies @math This implies that several known PIR schemes are close to optimal. Our results generalize those of [8], who proved roughly the same bounds for linear LDCs and PIRs. Like earlier work by Kerenidis and de Wolf [12], our classical bounds are proved using quantum computational techniques. In particular, we give a tight analysis of how well a 2-input function can be computed from a quantum superposition of both inputs.", "A (q,k,t)-design matrix is an m x n matrix whose pattern of zeros/non-zeros satisfies the following design-like condition: each row has at most q non-zeros, each column has at least k non-zeros and the supports of every two columns intersect in at most t rows. We prove that for m ≥ n, the rank of any (q,k,t)-design matrix over a field of characteristic zero (or sufficiently large finite characteristic) is at least n − (qtn/2k)^2. Using this result we derive the following applications: Impossibility results for 2-query LCCs over large fields: A 2-query locally correctable code (LCC) is an error correcting code in which every codeword coordinate can be recovered, probabilistically, by reading at most two other code positions. 
Such codes have numerous applications and constructions (with exponential encoding length) are known over finite fields of small characteristic. We show that infinite families of such linear 2-query LCCs do not exist over fields of characteristic zero or large characteristic regardless of the encoding length. Generalization of known results in combinatorial geometry: We prove a quantitative analog of the Sylvester-Gallai theorem: Let v1,...,vm be a set of points in C^d such that for every i ∈ [m] there exist at least δm values of j ∈ [m] such that the line through vi,vj contains a third point in the set. We show that the dimension of v1,...,vm is at most O(1/δ^2). Our results generalize to the high-dimensional case (replacing lines with planes, etc.) and to the case where the points are colored (as in the Motzkin-Rabin Theorem).", "", "A locally decodable code (LDC) encodes n-bit strings x in m-bit codewords C(x) in such a way that one can recover any bit xi from a corrupted codeword by querying only a few bits of that word. We use a quantum argument to prove that LDCs with 2 classical queries require exponential length: m = 2^{Ω(n)}. Previously, this was known only for linear codes (, in: Proceedings of 17th IEEE Conference on Computation Complexity, 2002, pp. 175-183). The proof proceeds by showing that a 2-query LDC can be decoded with a single quantum query, when defined in an appropriate sense. It goes on to establish an exponential lower bound on any '1-query locally quantum-decodable code'. We extend our lower bounds to non-binary alphabets and also somewhat improve the polynomial lower bounds by Katz and Trevisan for LDCs with more than 2 queries. Furthermore, we show that q quantum queries allow more succinct LDCs than the best known LDCs with q classical queries. Finally, we give new classical lower bounds and quantum upper bounds for the setting of private information retrieval. 
In particular, we exhibit a quantum 2-server private information retrieval (PIR) scheme with O(n^{3/10}) qubits of communication, beating the O(n^{1/3}) bits of communication of the best known classical 2-server PIR.", "A locally correctable code (LCC) is an error correcting code that allows correction of any arbitrary coordinate of a corrupted codeword by querying only a few coordinates. We show that any zero-error @math -query locally correctable code @math that can correct a constant fraction of corrupted symbols must have @math . We say that an LCC is zero-error if there exists a non-adaptive corrector algorithm that succeeds with probability @math when the input is an uncorrupted codeword. All known constructions of LCCs are zero-error. Our result is tight up to constant factors in the exponent. The only previous lower bound on the length of 2-query LCCs over large alphabet was @math due to Katz and Trevisan (STOC 2000). Our bound implies that zero-error LCCs cannot yield @math -server private information retrieval (PIR) schemes with sub-polynomial communication. Since there exists a @math -server PIR scheme with sub-polynomial communication (STOC 2015) based on a zero-error @math -query locally decodable code (LDC), we also obtain a separation between LDCs and LCCs over large alphabet. For our proof of the result, we need a new decomposition lemma for directed graphs that may be of independent interest. Given a dense directed graph @math , our decomposition uses the directed version of Szemeredi regularity lemma due to Alon and Shapira (STOC 2003) to partition almost all of @math into a constant number of subgraphs which are either edge-expanding or empty.", "We prove that if a linear error-correcting code C: {0,1}^n → {0,1}^m is such that a bit of the message can be probabilistically reconstructed by looking at two entries of a corrupted codeword, then m = 2^{Ω(n)}. We also present several extensions of this result. 
We show a reduction from the complexity of one-round, information-theoretic private information retrieval systems (with two servers) to locally decodable codes, and conclude that if all the servers' answers are linear combinations of the database content, then t = Ω(n/2^a), where t is the length of the user's query and a is the length of the servers' answers. Actually, 2^a can be replaced by O(a^k), where k is the number of bit locations in the answer that are actually inspected in the reconstruction.", "A Locally Correctable Code (LCC) is an error correcting code that has a probabilistic self-correcting algorithm that, with high probability, can correct any coordinate of the codeword by looking at only a few other coordinates, even if a fraction δ of the coordinates are corrupted. LCCs are a stronger form of LDCs (Locally Decodable Codes) which have received a lot of attention recently due to their many applications and surprising constructions. In this work we show a separation between 2-query LDCs and LCCs over finite fields of prime order. Specifically, we prove a lower bound of the form p^{Ω(δd)} on the length of linear 2-query LCCs over @math , that encode messages of length d. Our bound improves over the known bound of 2^{Ω(δd)} GKST06, KdW04, DS07 which is tight for LDCs. Our proof makes use of tools from additive combinatorics which have played an important role in several recent results in theoretical computer science. Corollaries of our main theorem are new incidence geometry results over finite fields. The first is an improvement to the Sylvester-Gallai theorem over finite fields SS10 and the second is a new analog of Beck's theorem over finite fields.", "This paper presents essentially optimal lower bounds on the size of linear codes C : {0,1}^n → {0,1}^m which have the property that, for constants δ, ϵ > 0, any bit of the message can be recovered with probability 1/2 + ϵ 
by an algorithm reading only 2 bits of a codeword corrupted in up to δm positions. Such codes are known to be applicable to, among other things, the construction and analysis of information-theoretically secure private information retrieval schemes. In this work, we show that m must be at least 2^{Ω(δn/(1-2ϵ))}. Our results extend work by Goldreich, Karloff, Schulman, and Trevisan [GKST02], which is based heavily on methods developed by Katz and Trevisan [KT00]. The key to our improved bounds is an analysis which bypasses an intermediate reduction used in both prior works. The resulting improvement in the efficiency of the overall analysis is sufficient to achieve a lower bound optimal within a constant factor in the exponent. A construction of a locally decodable linear code matching this bound is presented." ] }
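A concrete reference point for the 2-query bounds discussed above is the Hadamard code, the classic 2-query LDC whose blocklength is exponential in the message length. The sketch below assumes a NumPy environment; the trial count is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def hadamard_encode(x):
    """Hadamard code: one bit <x, a> mod 2 for every mask a in {0,1}^k,
    so a k-bit message becomes a 2^k-bit codeword."""
    k = len(x)
    masks = (np.arange(1 << k)[:, None] >> np.arange(k)) & 1  # all masks a
    return (masks @ x) % 2

def decode_bit(codeword, k, i, trials=50):
    """Recover x_i with 2 queries: C(a) xor C(a ^ e_i) = x_i.

    A majority vote over random masks a tolerates a small fraction of
    corrupted codeword positions.
    """
    votes = 0
    for _ in range(trials):
        a = int(rng.integers(0, 1 << k))
        votes += int(codeword[a]) ^ int(codeword[a ^ (1 << i)])
    return int(2 * votes > trials)
```

Each query pair is wrong only if one of its two positions is corrupted, so a δ-fraction of corruptions flips each vote with probability at most 2δ, and the majority vote succeeds with high probability for small δ.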
1904.08112
2939662953
A locally decodable code (LDC) C: {0,1}^k → {0,1}^n is an error correcting code wherein individual bits of the message can be recovered by only querying a few bits of a noisy codeword. LDCs found a myriad of applications both in theory and in practice, ranging from probabilistically checkable proofs to distributed storage. However, despite nearly two decades of extensive study, the best known constructions of O(1)-query LDCs have super-polynomial blocklength. The notion of relaxed LDCs is a natural relaxation of LDCs, which aims to bypass the foregoing barrier by requiring local decoding of nearly all individual message bits, yet allowing decoding failure (but not error) on the rest. State-of-the-art constructions of O(1)-query relaxed LDCs achieve blocklength n = O(k^{1+γ}) for an arbitrarily small constant γ. We prove a lower bound which shows that O(1)-query relaxed LDCs cannot achieve blocklength n = k^{1+o(1)}. This resolves an open problem raised by Goldreich in 2004.
Another related notion is that of @cite_21 , which are, loosely speaking, codes for which there exists a probabilistic algorithm that accepts valid codewords and rejects inputs that are "far" in Hamming distance from any codeword, while only probing a small fraction of the input. Much stronger upper bounds are known for locally testable codes than for LDCs; in particular, there exist @math -local LTCs with blocklength @math @cite_21 (see also @cite_22 @cite_4 ). It is also known that LDCs do not imply locally testable codes, and vice versa @cite_12 .
{ "cite_N": [ "@cite_21", "@cite_4", "@cite_22", "@cite_12" ], "mid": [ "2022381972", "1966671033", "2057157452", "2889088074" ], "abstract": [ "We initiate a systematic study of locally testable codes; that is, error-correcting codes that admit very efficient membership tests. Specifically, these are codes accompanied with tests that make a constant number of (random) queries into any given word and reject non-codewords with probability proportional to their distance from the code. Locally testable codes are believed to be the combinatorial core of PCPs. However, the relation is less immediate than commonly believed. Nevertheless, we show that certain PCP systems can be modified to yield locally testable codes. On the other hand, we adapt techniques that we develop for the construction of the latter to yield new PCPs. Our main results are locally testable codes and PCPs of almost-linear length. Specifically, we prove the existence of the following constructs:---Locally testable binary (linear) codes in which k information bits are encoded by a codeword of length k · exp(O(√(log k))). This improves over previous results that either yield codewords of exponential length or obtained almost quadratic length codewords for sufficiently large nonbinary alphabet.---PCP systems of almost-linear length for SAT. 
The length of the proof is n · exp(O(√(log n))) and verification is performed by a constant number (i.e., 19) of queries, as opposed to previous results that used proof length n^{1+O(1/q)} for verification by q queries. The novel techniques in use include a random projection of certain codewords and PCP-oracles that preserves local-testability, an adaptation of PCP constructions to obtain “linear PCP-oracles” for proving conjunctions of linear conditions, and design of PCPs with some new soundness properties---a direct construction of locally testable (linear) codes of subexponential length.", "An error-correcting code C is called a (q, ϵ)-strong locally testable code (LTC) if there exists a tester that makes at most q queries to the input word. This tester accepts all codewords with probability 1 and rejects all non-codewords x with probability at least ϵ · δ(x, C), where δ(x, C) denotes the relative Hamming distance between the word x and the code C. The parameter q is called the query complexity and the parameter ϵ is called the soundness. In this paper we resolve an open question raised by Goldreich and Sudan (J. ACM 2006) and construct binary linear strong LTCs with query complexity 3, constant relative distance, constant soundness and inverse polylogarithmic rate. Our result is based on the previous paper of the author (Viderman, ECCC TR12-168), which presented binary linear strong LTCs with query complexity 3, constant relative distance, and inverse polylogarithmic soundness and rate. We show that the "gap amplification" procedure of Dinur (J. ACM 2007) can be used to amplify the soundness of these strong LTCs from inverse polylogarithmic up to a constant, while preserving the other parameters of these codes. 
Furthermore, we show that under a conceivable conjecture, there exist asymptotically good strong LTCs with poly-log query complexity.", "An error correcting code is said to be locally testable if there is a test that checks whether a given string is a codeword, or rather far from the code, by reading only a constant number of symbols of the string. While the best known construction of locally testable codes (LTCs) by Ben-Sasson and Sudan [SIAM J. Comput., 38 (2008), pp. 551-607] and Dinur [J. ACM, 54 (2007), article 12] achieves very efficient parameters, it relies heavily on algebraic tools and on probabilistically checkable proof (PCP) machinery. In this work we present a new and arguably simpler construction of LTCs that is purely combinatorial, does not rely on PCP machinery, and matches the parameters of the best known construction. However, unlike the latter construction, our construction is not entirely explicit.", "" ] }
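The tester in the LTC definition above can be made concrete with the 3-query BLR linearity test, which locally tests the Hadamard code: query the word at positions a, b, and a xor b, and check additivity. A minimal sketch, in which the trial count and random seed are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def blr_test(word, k, trials=100):
    """3-query BLR linearity test for a word of length 2^k.

    Hadamard codewords satisfy f(a) xor f(b) == f(a xor b) for all a, b,
    so they always pass; words far from every linear function are
    rejected with probability growing in their distance from the code.
    """
    for _ in range(trials):
        a = int(rng.integers(0, 1 << k))
        b = int(rng.integers(0, 1 << k))
        if word[a] ^ word[b] != word[a ^ b]:
            return False  # reject: witnessed a non-linear triple
    return True           # accept
```

Each trial reads only 3 of the 2^k positions, matching the constant-query regime discussed above.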
1904.08265
2951837757
In this paper, we present a novel unsupervised video summarization model that requires no manual annotation. The proposed model, termed Cycle-SUM, adopts a new cycle-consistent adversarial LSTM architecture that can effectively maximize the information preservation and compactness of the summary video. It consists of a frame selector and a cycle-consistent learning based evaluator. The selector is a bi-directional LSTM network that learns video representations embedding the long-range relationships among video frames. The evaluator defines a learnable information preserving metric between the original video and the summary video and "supervises" the selector to identify the most informative frames to form the summary video. In particular, the evaluator is composed of two generative adversarial networks (GANs), in which the forward GAN is learned to reconstruct the original video from the summary video while the backward GAN learns to invert the process. The consistency between the outputs of such cycle learning is adopted as the information preserving metric for video summarization. We demonstrate the close relation between mutual information maximization and such a cycle learning procedure. Experiments on two video summarization benchmark datasets validate the state-of-the-art performance and superiority of the Cycle-SUM model over previous baselines.
Supervised video summarization approaches leverage videos with human annotations of frame importance to train models. For example, formulate video summarization as a supervised subset selection problem and propose a sequential determinantal point process (seqDPP) based model to sample a representative and diverse subset from training data @cite_19 . To relieve the human annotation burden and reduce cost, unsupervised approaches, which have received increasing attention, generally design different criteria to rank frame importance for selection. For example, @cite_30 @cite_29 propose to select frames according to their content relevance. @cite_13 @cite_27 design unsupervised criteria by trying to reconstruct the original video from selected key frames and key shots under the dictionary learning framework. Clustering-based models @cite_10 @cite_3 and attention-based models @cite_0 @cite_28 are also developed to select key frames.
{ "cite_N": [ "@cite_30", "@cite_28", "@cite_10", "@cite_29", "@cite_3", "@cite_0", "@cite_19", "@cite_27", "@cite_13" ], "mid": [ "2095536970", "", "1987366351", "", "1985004765", "2006180404", "2115857089", "2105174364", "2072551889" ], "abstract": [ "With the explosive growth of web videos on the Internet, it becomes challenging to efficiently browse hundreds or even thousands of videos. When searching an event query, users are often bewildered by the vast quantity of web videos returned by search engines. Exploring such results will be time consuming and it will also degrade user experience. In this paper, we present an approach for event driven web video summarization by tag localization and key-shot mining. We first localize the tags that are associated with each video into its shots. Then, we estimate the relevance of the shots with respect to the event query by matching the shot-level tags with the query. After that, we identify a set of key-shots from the shots that have high relevance scores by exploring the repeated occurrence characteristic of key sub-events. Following the scheme in [6] and [22], we provide two types of summaries, i.e., threaded video skimming and visual-textual storyboard. Experiments are conducted on a corpus that contains 60 queries and more than 10 000 web videos. The evaluation demonstrates the effectiveness of the proposed approach.", "", "The fast evolution of digital video has brought many new multimedia applications and, as a consequence, has increased the amount of research into new technologies that aim at improving the effectiveness and efficiency of video acquisition, archiving, cataloging and indexing, as well as increasing the usability of stored videos. Among possible research areas, video summarization is an important topic that potentially enables faster browsing of large video collections and also more efficient content indexing and access. 
Essentially, this research area consists of automatically generating a short summary of a video, which can either be a static summary or a dynamic summary. In this paper, we present VSUMM, a methodology for the production of static video summaries. The method is based on color feature extraction from video frames and the k-means clustering algorithm. As an additional contribution, we also develop a novel approach for the evaluation of video static summaries. In this evaluation methodology, video summaries are manually created by users. Then, several user-created summaries are compared both to our approach and also to a number of different techniques in the literature. Experimental results show - with a confidence level of 98% - that the proposed solution provided static video summaries with superior quality relative to the approaches to which it was compared.", "", "Key frame based video summarization has emerged as an important area of research for the multimedia community. Video key frames enable a user to access any video in a friendly and meaningful way. In this paper, we propose an automated method of video key frame extraction using dynamic Delaunay graph clustering via an iterative edge pruning strategy. A structural constraint in the form of a lower limit on the deviation ratio of the graph vertices further improves the video summary. We also employ an information-theoretic pre-sampling where significant valleys in the mutual information profile of the successive frames in a video are used to capture more informative frames. Various video key frame visualization techniques for efficient video browsing and navigation purposes are incorporated. A comprehensive evaluation on 100 videos from the Open Video and YouTube databases using both objective and subjective measures demonstrates the superiority of our key frame extraction method.", "Automatic generation of video summarization is one of the key techniques in video management and browsing. 
In this paper, we present a generic framework of video summarization based on the modeling of viewer's attention. Without fully semantic understanding of video content, this framework takes advantage of computational attention models and eliminates the need for complex heuristic rules in video summarization. A set of audio-visual attention model features is proposed and presented. The experimental evaluations indicate that the computational attention based approach is an effective alternative to video semantic analysis for video summarization.", "Video summarization is a challenging problem with great application potential. Whereas prior approaches, largely unsupervised in nature, focus on sampling useful frames and assembling them as summaries, we consider video summarization as a supervised subset selection problem. Our idea is to teach the system to learn from human-created summaries how to select informative and diverse subsets, so as to best meet evaluation metrics derived from human-perceived quality. To this end, we propose the sequential determinantal point process (seqDPP), a probabilistic model for diverse sequential subset selection. Our novel seqDPP heeds the inherent sequential structures in video data, thus overcoming the deficiency of the standard DPP, which treats video frames as randomly permutable items. Meanwhile, seqDPP retains the power of modeling diverse subsets, essential for summarization. Our extensive results of summarizing videos from 3 datasets demonstrate the superior performance of our method, compared to not only existing unsupervised methods but also naive applications of the standard DPP model.", "The rapid growth of consumer videos requires an effective and efficient content summarization method to provide a user-friendly way to manage and browse the huge amount of video data. 
Compared with most previous methods that focus on sports and news videos, the summarization of personal videos is more challenging because of its unconstrained content and the lack of any pre-imposed video structures. We formulate video summarization as a novel dictionary selection problem using sparsity consistency, where a dictionary of key frames is selected such that the original video can be best reconstructed from this representative dictionary. An efficient global optimization algorithm is introduced to solve the dictionary selection model with a convergence rate of O(1/K^2) (where K is the iteration counter), in contrast to traditional sub-gradient descent methods with O(1/√K). Our method provides a scalable solution for both key frame extraction and video skim generation, because one can select an arbitrary number of key frames to represent the original videos. Experiments on a human labeled benchmark dataset and comparisons to the state-of-the-art methods demonstrate the advantages of our algorithm.", "The rapid growth of video data demands both effective and efficient video summarization methods so that users are empowered to quickly browse and comprehend a large amount of video content. In this paper, we formulate the video summarization task with a novel minimum sparse reconstruction (MSR) problem. That is, the original video sequence can be best reconstructed with as few selected keyframes as possible. Different from the recently proposed convex relaxation based sparse dictionary selection method, our proposed method utilizes the true sparsity constraint (the L0 norm) instead of the relaxed L2,1 norm constraint, such that keyframes are directly selected as a sparse dictionary that can well reconstruct all the video frames. An on-line version is further developed owing to the real-time efficiency of the proposed MSR principle. 
In addition, a percentage of reconstruction (POR) criterion is proposed to intuitively guide users in obtaining a summary with an appropriate length. Experimental results on two benchmark datasets with various types of videos demonstrate that the proposed methods outperform the state of the art. Highlights: A minimum sparse reconstruction (MSR) based video summarization (VS) model is constructed. An L0 norm based constraint is imposed to ensure real sparsity. Two efficient and effective MSR based VS algorithms are proposed for off-line and on-line applications, respectively. A scalable strategy is designed to provide flexibility for practical applications." ] }
1904.08008
2936000158
Detecting objects in aerial images is challenging for at least two reasons: (1) target objects like pedestrians are very small in terms of pixels, making them hard to distinguish from the surrounding background; and (2) targets are in general very sparsely and nonuniformly distributed, making the detection very inefficient. In this paper we address both issues inspired by the observation that these targets are often clustered. In particular, we propose a Clustered Detection (ClusDet) network that unifies object clustering and detection in an end-to-end framework. The key components in ClusDet include a cluster proposal sub-network (CPNet), a scale estimation sub-network (ScaleNet), and a dedicated detection network (DetecNet). Given an input image, CPNet produces (object) cluster regions and ScaleNet estimates object scales for these regions. Then, each scale-normalized cluster region and its features are fed into DetecNet for object detection. Compared with previous solutions, ClusDet has several advantages: (1) it greatly reduces the number of blocks for final object detection and hence achieves high running time efficiency, (2) the cluster-based scale estimation is more accurate than previously used single-object based ones, hence effectively improving the detection of small objects, and (3) the final DetecNet is dedicated to clustered regions and implicitly models the prior context information so as to boost detection accuracy. The proposed method is tested on three representative aerial image datasets including VisDrone, UAVDT and DOTA. In all the experiments, ClusDet achieves promising performance in both efficiency and accuracy, in comparison with state-of-the-art detectors.
Generic Object Detection. Inspired by the success in image recognition @cite_0 , deep convolutional neural networks (CNNs) have dominated object detection. According to the detection pipeline, existing detectors can roughly be categorized into two types: region-based detectors and region-free detectors. The region-based detectors separate detection into two steps including proposal extraction and object detection. In the first stage, the search space for detection is significantly reduced through extracting candidate regions (i.e., proposals). In the second stage, these proposals are further classified into specific categories. Representatives of region-based detectors include R-CNN @cite_20 , Fast/Faster R-CNN @cite_25 @cite_4 , Mask R-CNN @cite_1 and Cascade R-CNN @cite_9 . On the contrary, the region-free detectors, such as SSD @cite_24 , YOLO @cite_5 , YOLO9000 @cite_10 , RetinaNet @cite_7 and RefineDet @cite_23 , perform detection without region proposal, which leads to high efficiency at the sacrifice of accuracy.
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_9", "@cite_1", "@cite_0", "@cite_24", "@cite_23", "@cite_5", "@cite_10", "@cite_25", "@cite_20" ], "mid": [ "2613718673", "2743473392", "2964241181", "", "2163605009", "2193145675", "2963786238", "2963037989", "2570343428", "", "2102605133" ], "abstract": [ "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https://github.com/ShaoqingRen/faster_rcnn.", "The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case.
We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: this https URL", "In object detection, an intersection over union (IoU) threshold is required to define positives and negatives. An object detector, trained with low IoU threshold, e.g. 0.5, usually produces noisy detections. However, detection performance tends to degrade with increasing the IoU thresholds. Two main factors are responsible for this: 1) overfitting during training, due to exponentially vanishing positive samples, and 2) inference-time mismatch between the IoUs for which the detector is optimal and those of the input hypotheses. A multi-stage object detection architecture, the Cascade R-CNN, is proposed to address these problems. It consists of a sequence of detectors trained with increasing IoU thresholds, to be sequentially more selective against close false positives. The detectors are trained stage by stage, leveraging the observation that the output of a detector is a good distribution for training the next higher quality detector. The resampling of progressively improved hypotheses guarantees that all detectors have a positive set of examples of equivalent size, reducing the overfitting problem. 
The same cascade procedure is applied at inference, enabling a closer match between the hypotheses and the detector quality of each stage. A simple implementation of the Cascade R-CNN is shown to surpass all single-model object detectors on the challenging COCO dataset. Experiments also show that the Cascade R-CNN is widely applicable across detector architectures, achieving consistent gains independently of the baseline detector strength. The code is available at https://github.com/zhaoweicai/cascade-rcnn.", "", "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.", "We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape.
Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For 300×300 input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for 512×512 input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https://github.com/weiliu89/caffe/tree/ssd.", "For object detection, the two-stage approach (e.g., Faster R-CNN) has been achieving the highest accuracy, whereas the one-stage approach (e.g., SSD) has the advantage of high efficiency. To inherit the merits of both while overcoming their disadvantages, in this paper, we propose a novel single-shot based detector, called RefineDet, that achieves better accuracy than two-stage methods and maintains comparable efficiency of one-stage methods. RefineDet consists of two inter-connected modules, namely, the anchor refinement module and the object detection module. Specifically, the former aims to (1) filter out negative anchors to reduce search space for the classifier, and (2) coarsely adjust the locations and sizes of anchors to provide better initialization for the subsequent regressor.
The latter module takes the refined anchors as the input from the former to further improve the regression accuracy and predict multi-class label. Meanwhile, we design a transfer connection block to transfer the features in the anchor refinement module to predict locations, sizes and class labels of objects in the object detection module. The multitask loss function enables us to train the whole network in an end-to-end way. Extensive experiments on PASCAL VOC 2007, PASCAL VOC 2012, and MS COCO demonstrate that RefineDet achieves state-of-the-art detection accuracy with high efficiency. Code is available at https://github.com/sfzhang15/RefineDet.", "We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background. Finally, YOLO learns very general representations of objects. It outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.", "We introduce YOLO9000, a state-of-the-art, real-time object detection system that can detect over 9000 object categories.
First we propose various improvements to the YOLO detection method, both novel and drawn from prior work. The improved model, YOLOv2, is state-of-the-art on standard detection tasks like PASCAL VOC and COCO. Using a novel, multi-scale training method the same YOLOv2 model can run at varying sizes, offering an easy tradeoff between speed and accuracy. At 67 FPS, YOLOv2 gets 76.8 mAP on VOC 2007. At 40 FPS, YOLOv2 gets 78.6 mAP, outperforming state-of-the-art methods like Faster RCNN with ResNet and SSD while still running significantly faster. Finally we propose a method to jointly train on object detection and classification. Using this method we train YOLO9000 simultaneously on the COCO detection dataset and the ImageNet classification dataset. Our joint training allows YOLO9000 to predict detections for object classes that don't have labelled detection data. We validate our approach on the ImageNet detection task. YOLO9000 gets 19.7 mAP on the ImageNet detection validation set despite only having detection data for 44 of the 200 classes. On the 156 classes not in COCO, YOLO9000 gets 16.0 mAP. YOLO9000 predicts detections for more than 9000 different object categories, all in real-time.", "", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%.
Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn." ] }
1904.08008
2936000158
Detecting objects in aerial images is challenging for at least two reasons: (1) target objects like pedestrians are very small in terms of pixels, making them hard to distinguish from the surrounding background; and (2) targets are in general very sparsely and nonuniformly distributed, making the detection very inefficient. In this paper we address both issues inspired by the observation that these targets are often clustered. In particular, we propose a Clustered Detection (ClusDet) network that unifies object clustering and detection in an end-to-end framework. The key components in ClusDet include a cluster proposal sub-network (CPNet), a scale estimation sub-network (ScaleNet), and a dedicated detection network (DetecNet). Given an input image, CPNet produces (object) cluster regions and ScaleNet estimates object scales for these regions. Then, each scale-normalized cluster region and its features are fed into DetecNet for object detection. Compared with previous solutions, ClusDet has several advantages: (1) it greatly reduces the number of blocks for final object detection and hence achieves high running time efficiency, (2) the cluster-based scale estimation is more accurate than previously used single-object based ones, hence effectively improving the detection of small objects, and (3) the final DetecNet is dedicated to clustered regions and implicitly models the prior context information so as to boost detection accuracy. The proposed method is tested on three representative aerial image datasets including VisDrone, UAVDT and DOTA. In all the experiments, ClusDet achieves promising performance in both efficiency and accuracy, in comparison with state-of-the-art detectors.
Despite excellent performance on natural images (e.g., 500 @math 400 images in PASCAL VOC @cite_29 and 600 @math 400 images in MS COCO @cite_22 ), these generic detectors degrade when applied to high-resolution aerial images (e.g., 2,000 @math 1,500 images in VisDrone @cite_17 ).
{ "cite_N": [ "@cite_29", "@cite_22", "@cite_17" ], "mid": [ "2037227137", "2952122856", "2798799804" ], "abstract": [ "The Pascal Visual Object Classes (VOC) challenge consists of two components: (i) a publicly available dataset of images together with ground truth annotation and standardised evaluation software; and (ii) an annual competition and workshop. There are five challenges: classification, detection, segmentation, action classification, and person layout. In this paper we provide a review of the challenge from 2008---2012. The paper is intended for two audiences: algorithm designers, researchers who want to see what the state of the art is, as measured by performance on the VOC datasets, along with the limitations and weak points of the current generation of algorithms; and, challenge designers, who want to see what we as organisers have learnt from the process and our recommendations for the organisation of future challenges. To analyse the performance of submitted algorithms on the VOC datasets we introduce a number of novel evaluation methods: a bootstrapping method for determining whether differences in the performance of two algorithms are significant or not; a normalised average precision so that performance can be compared across classes with different proportions of positive instances; a clustering method for visualising the performance across multiple algorithms so that the hard and easy images can be identified; and the use of a joint classifier over the submitted algorithms in order to measure their complementarity and combined performance. We also analyse the community's progress through time using the methods of (Proceedings of European Conference on Computer Vision, 2012) to identify the types of occurring errors. 
We conclude the paper with an appraisal of the aspects of the challenge that worked well, and those that could be improved in future challenges.", "We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 object types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.", "In this paper we present a large-scale visual object detection and tracking benchmark, named VisDrone2018, aiming at advancing visual understanding tasks on the drone platform. The images and video sequences in the benchmark were captured over various urban/suburban areas of 14 different cities across China from north to south. Specifically, VisDrone2018 consists of 263 video clips and 10,209 images (no overlap with video clips) with rich annotations, including object bounding boxes, object categories, occlusion, truncation ratios, etc. With an intensive amount of effort, our benchmark has more than 2.5 million annotated instances in 179,264 images/video frames. Being the largest such dataset ever published, the benchmark enables extensive evaluation and investigation of visual analysis algorithms on the drone platform.
In particular, we design four popular tasks with the benchmark, including object detection in images, object detection in videos, single object tracking, and multi-object tracking. All these tasks are extremely challenging in the proposed dataset due to factors such as occlusion, large scale and pose variation, and fast motion. We hope the benchmark will largely boost the research and development in visual analysis on drone platforms." ] }
1904.08008
2936000158
Detecting objects in aerial images is challenging for at least two reasons: (1) target objects like pedestrians are very small in terms of pixels, making them hard to distinguish from the surrounding background; and (2) targets are in general very sparsely and nonuniformly distributed, making the detection very inefficient. In this paper we address both issues inspired by the observation that these targets are often clustered. In particular, we propose a Clustered Detection (ClusDet) network that unifies object clustering and detection in an end-to-end framework. The key components in ClusDet include a cluster proposal sub-network (CPNet), a scale estimation sub-network (ScaleNet), and a dedicated detection network (DetecNet). Given an input image, CPNet produces (object) cluster regions and ScaleNet estimates object scales for these regions. Then, each scale-normalized cluster region and its features are fed into DetecNet for object detection. Compared with previous solutions, ClusDet has several advantages: (1) it greatly reduces the number of blocks for final object detection and hence achieves high running time efficiency, (2) the cluster-based scale estimation is more accurate than previously used single-object based ones, hence effectively improving the detection of small objects, and (3) the final DetecNet is dedicated to clustered regions and implicitly models the prior context information so as to boost detection accuracy. The proposed method is tested on three representative aerial image datasets including VisDrone, UAVDT and DOTA. In all the experiments, ClusDet achieves promising performance in both efficiency and accuracy, in comparison with state-of-the-art detectors.
Aerial Image Detection. Compared to detection in natural images, aerial image detection is more challenging because (1) objects have small scales relative to the high-resolution aerial images and (2) targets are sparse, nonuniform, and concentrated in certain regions. Since this work is focused on deep learning, we only review relevant works using deep neural networks for aerial image detection. In @cite_11 , a simple CNN-based approach is presented for automatic detection in aerial images. The method in @cite_28 integrates detection in aerial images with semantic segmentation to improve performance. In @cite_26 , the authors directly extend Fast/Faster R-CNN @cite_25 @cite_4 for vehicle detection in aerial images. The work of @cite_3 proposes a coupled region-based CNN for aerial vehicle detection. The approach of @cite_27 investigates the misalignment between Regions of Interest (RoIs) and objects in aerial image detection, and introduces an RoI transformer to address this issue. The algorithm in @cite_19 presents a scale adaptive proposal network for object detection in aerial images.
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_28", "@cite_3", "@cite_19", "@cite_27", "@cite_25", "@cite_11" ], "mid": [ "2612465689", "2613718673", "2606788270", "2613825824", "2910778976", "2902184095", "", "2314029052" ], "abstract": [ "Vehicle detection in aerial images is a crucial image processing step for many applications like screening of large areas. In recent years, several deep learning based frameworks have been proposed for object detection. However, these detectors were developed for datasets that considerably differ from aerial images. In this paper, we systematically investigate the potential of Fast R-CNN and Faster R-CNN for aerial images, which achieve top performing results on common detection benchmark datasets. Therefore, the applicability of 8 state-of-the-art object proposals methods used to generate a set of candidate regions and of both detectors is examined. Relevant adaptations of the object proposals methods are provided. To overcome shortcomings of the original approach in case of handling small instances, we further propose our own network that clearly outperforms state-of-the-art methods for vehicle detection in aerial images. All experiments are performed on two publicly available datasets to account for differing characteristics such as ground sampling distance, number of objects per image and varying backgrounds.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. 
RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https://github.com/ShaoqingRen/faster_rcnn.", "Like computer vision before, remote sensing has been radically changed by the introduction of deep learning and, more notably, Convolutional Neural Networks. Land cover classification, object detection and scene understanding in aerial images rely more and more on deep networks to achieve new state-of-the-art results. Recent architectures such as Fully Convolutional Networks can even produce pixel level annotations for semantic mapping. In this work, we present a deep-learning based segment-before-detect method for segmentation and subsequent detection and classification of several varieties of wheeled vehicles in high resolution remote sensing images. This allows us to investigate object detection and classification on a complex dataset made up of visually similar classes, and to demonstrate the relevance of such a subclass modeling approach. Especially, we want to show that deep learning is also suitable for object-oriented analysis of Earth Observation data as effective object detection can be obtained as a byproduct of accurate semantic segmentation. First, we train a deep fully convolutional network on the ISPRS Potsdam and the NZAM/ONERA Christchurch datasets and show how the learnt semantic maps can be used to extract precise segmentation of vehicles. Then, we show that those maps are accurate enough to perform vehicle detection by simple connected component extraction.
This allows us to study the repartition of vehicles in the city. Finally, we train a Convolutional Neural Network to perform vehicle classification on the VEDAI dataset, and transfer its knowledge to classify the individual vehicle instances that we detected.", "Vehicle detection in aerial images, being an interesting but challenging problem, plays an important role for a wide range of applications. Traditional methods are based on sliding-window search and handcrafted or shallow-learning-based features with heavy computational costs and limited representation power. Recently, deep learning algorithms, especially region-based convolutional neural networks (R-CNNs), have achieved state-of-the-art detection performance in computer vision. However, several challenges limit the applications of R-CNNs in vehicle detection from aerial images: 1) vehicles in large-scale aerial images are relatively small in size, and R-CNNs have poor localization performance with small objects; 2) R-CNNs are particularly designed for detecting the bounding box of the targets without extracting attributes; 3) manual annotation is generally expensive and the available manual annotation of vehicles for training R-CNNs are not sufficient in number. To address these problems, this paper proposes a fast and accurate vehicle detection framework. On one hand, to accurately extract vehicle-like targets, we developed an accurate-vehicle-proposal-network (AVPN) based on hyper feature map which combines hierarchical feature maps that are more accurate for small object detection. On the other hand, we propose a coupled R-CNN method, which combines an AVPN and a vehicle attribute learning network to extract the vehicle's location and attributes simultaneously. For original large-scale aerial images with limited manual annotations, we use cropped image blocks for training with data augmentation to avoid overfitting. 
Comprehensive evaluations on the public Munich vehicle dataset and the collected vehicle dataset demonstrate the accuracy and effectiveness of the proposed method.", "Object detection in aerial images is widely applied in many applications. In recent years, faster region convolutional neural network shows a great improvement on object detecting in natural images. Considering the size and distribution characteristic of object in remote sensing images, the region proposal network (RPN) should be changed before being adopted. In this letter, a scale adaptive proposal network (SAPNet) is proposed to improve the accuracy of multiobject detection in remote sensing images. The SAPNet consists of multilayer RPNs which are designed to generate multiscale object proposals, and a final detection subnetwork in which fusion feature layer has been applied for better multiobject detection. Comparative experimental results show that the proposed SAPNet significantly improves the accuracy of multiobject detection.", "Object detection in aerial images is an active yet challenging task in computer vision because of the birdview perspective, the highly complex backgrounds, and the variant appearances of objects. Especially when detecting densely packed objects in aerial images, methods relying on horizontal proposals for common object detection often introduce mismatches between the Region of Interests (RoIs) and objects. This leads to the common misalignment between the final object classification confidence and localization accuracy. Although rotated anchors have been used to tackle this problem, the design of them always multiplies the number of anchors and dramatically increases the computational complexity. In this paper, we propose a RoI Transformer to address these problems. More precisely, to improve the quality of region proposals, we first designed a Rotated RoI (RRoI) learner to transform a Horizontal Region of Interest (HRoI) into a Rotated Region of Interest (RRoI). 
Based on the RRoIs, we then proposed a Rotated Position Sensitive RoI Align (RPS-RoI-Align) module to extract rotation-invariant features from them for boosting subsequent classification and regression. Our RoI Transformer is with light weight and can be easily embedded into detectors for oriented object detection. A simple implementation of the RoI Transformer has achieved state-of-the-art performances on two common and challenging aerial datasets, i.e., DOTA and HRSC2016, with a neglectable reduction to detection speed. Our RoI Transformer exceeds the deformable Position Sensitive RoI pooling when oriented bounding-box annotations are available. Extensive experiments have also validated the flexibility and effectiveness of our RoI Transformer. The results demonstrate that it can be easily integrated with other detector architectures and significantly improve the performances.", "", "We are witnessing daily acquisition of large amounts of aerial and satellite imagery. Analysis of such large quantities of data can be helpful for many practical applications. In this letter, we present an automatic content-based analysis of aerial imagery in order to detect and mark arbitrary objects or regions in high-resolution images. For that purpose, we proposed a method for automatic object detection based on a convolutional neural network. A novel two-stage approach for network training is implemented and verified in the tasks of aerial image classification and object detection. First, we tested the proposed training approach using UCMerced data set of aerial images and achieved accuracy of approximately 98.6 . Second, the method for automatic object detection was implemented and verified. For implementation on GPGPU, a required processing time for one aerial image of size 5000 @math 5000 pixels was around 30 s." ] }
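The oriented-detection record above contrasts horizontal and rotated RoIs; as a point of reference, here is a minimal sketch (not from any cited paper; all names are illustrative) of the axis-aligned IoU that horizontal-proposal detectors score boxes with, and whose mismatch with densely packed oriented targets is what motivates rotated boxes.

```python
# Illustrative sketch only: plain axis-aligned intersection-over-union
# for (x1, y1, x2, y2) boxes, as used by horizontal-proposal detectors.

def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0
```

For a thin oriented object, the enclosing horizontal box is mostly background, so this measure rewards loose boxes; that is the misalignment the RRoI learner above is designed to remove.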
1904.08008
2936000158
Detecting objects in aerial images is challenging for at least two reasons: (1) target objects like pedestrians are very small in terms of pixels, making them hard to be distinguished from surrounding background; and (2) targets are in general very sparsely and nonuniformly distributed, making the detection very inefficient. In this paper we address both issues inspired by the observation that these targets are often clustered. In particular, we propose a Clustered Detection (ClusDet) network that unifies object cluster and detection in an end-to-end framework. The key components in ClusDet include a cluster proposal sub-network (CPNet), a scale estimation sub-network (ScaleNet), and a dedicated detection network (DetecNet). Given an input image, CPNet produces (object) cluster regions and ScaleNet estimates object scales for these regions. Then, each scale-normalized cluster region and their features are fed into DetecNet for object detection. Compared with previous solutions, ClusDet has several advantages: (1) it greatly reduces the number of blocks for final object detection and hence achieves high running time efficiency, (2) the cluster-based scale estimation is more accurate than previously used single-object based ones, hence effectively improves the detection for small objects, and (3) the final DetecNet is dedicated for clustered regions and implicitly models the prior context information so as to boost detection accuracy. The proposed method is tested on three representative aerial image datasets including VisDrone, UAVDT and DOTA. In all the experiments, ClusDet achieves promising performance in both efficiency and accuracy, in comparison with state-of-the-art detectors.
Region Search in Detection. The strategy of region search is commonly adopted in detection to handle small objects. The approach of @cite_34 proposes to adaptively direct computational resources to sub-regions where objects are sparse and small. The work of @cite_15 introduces a context-driven search method to efficiently localize the regions containing a specific class of object. In @cite_6 , the authors propose to dynamically explore the search space in proposal-based object detection by learning contextual relations. The method in @cite_31 proposes to leverage reinforcement learning to sequentially select regions for detection at a higher resolution scale. In a more specific domain, vehicle detection in wide area motion imagery (WAMI), the work of @cite_13 suggests a two-stage spatio-temporal convolutional neural network to detect vehicles from a sequence of WAMI frames.
{ "cite_N": [ "@cite_31", "@cite_6", "@cite_15", "@cite_34", "@cite_13" ], "mid": [ "2963897760", "2399953608", "2135440260", "2963790522", "2774291319" ], "abstract": [ "We introduce a generic framework that reduces the computational cost of object detection while retaining accuracy for scenarios where objects with varied sizes appear in high resolution images. Detection progresses in a coarse-to-fine manner, first on a down-sampled version of the image and then on a sequence of higher resolution regions identified as likely to improve the detection accuracy. Built upon reinforcement learning, our approach consists of a model (R-net) that uses coarse detection results to predict the potential accuracy gain for analyzing a region at a higher resolution and another model (Q-net) that sequentially selects regions to zoom in. Experiments on the Caltech Pedestrians dataset show that our approach reduces the number of processed pixels by over 50 without a drop in detection accuracy. The merits of our approach become more significant on a high resolution test set collected from YFCC100M dataset, where our approach maintains high detection performance while reducing the number of processed pixels by about 70 and the detection time by over 50 .", "We propose a novel general strategy for object detection. Instead of passively evaluating all object detectors at all possible locations in an image, we develop a divide-and-conquer approach by actively and sequentially evaluating contextual cues related to the query based on the scene and previous evaluations — like playing a \"20 Questions\" game — to decide where to search for the object. We formulate the problem as a Markov Decision Process and learn a search policy by reinforcement learning. To demonstrate the efficacy of our generic algorithm, we apply the 20 questions approach in the recent framework of simultaneous object detection and segmentation. 
Experimental results on the Pascal VOC dataset show that our algorithm reduces about 45.3 of the object proposals and 36 of average evaluation time while achieving better average precision compared to exhaustive search.", "The dominant visual search paradigm for object class detection is sliding windows. Although simple and effective, it is also wasteful, unnatural and rigidly hardwired. We propose strategies to search for objects which intelligently explore the space of windows by making sequential observations at locations decided based on previous observations. Our strategies adapt to the class being searched and to the content of a particular test image, exploiting context as the statistical relation between the appearance of a window and its location relative to the object, as observed in the training set. In addition to being more elegant than sliding windows, we demonstrate experimentally on the PASCAL VOC 2010 dataset that our strategies evaluate two orders of magnitude fewer windows while achieving higher object detection performance.", "State-of-the-art object detection systems rely on an accurate set of region proposals. Several recent methods use a neural network architecture to hypothesize promising object locations. While these approaches are computationally efficient, they rely on fixed image regions as anchors for predictions. In this paper we propose to use a search strategy that adaptively directs computational resources to sub-regions likely to contain objects. Compared to methods based on fixed anchor locations, our approach naturally adapts to cases where object instances are sparse and small. Our approach is comparable in terms of accuracy to the state-of-the-art Faster R-CNN approach while using two orders of magnitude fewer anchors on average. Code is publicly available.", "Object detection in wide area motion imagery (WAMI) has drawn the attention of the computer vision research community for a number of years. 
WAMI proposes a number of unique challenges including extremely small object sizes, both sparse and densely-packed objects, and extremely large search spaces (large video frames). Nearly all state-of-the-art methods in WAMI object detection report that appearance-based classifiers fail in this challenging data and instead rely almost entirely on motion information in the form of background subtraction or frame-differencing. In this work, we experimentally verify the failure of appearance-based classifiers in WAMI, such as Faster R-CNN and a heatmap-based fully convolutional neural network (CNN), and propose a novel two-stage spatio-temporal CNN which effectively and efficiently combines both appearance and motion information to significantly surpass the state-of-the-art in WAMI object detection. To reduce the large search space, the first stage (ClusterNet) takes in a set of extremely large video frames, combines the motion and appearance information within the convolutional architecture, and proposes regions of objects of interest (ROOBI). These ROOBI can contain from one to clusters of several hundred objects due to the large video frame size and varying object density in WAMI. The second stage (FoveaNet) then estimates the centroid location of all objects in that given ROOBI simultaneously via heatmap estimation. The proposed method exceeds state-of-the-art results on the WPAFB 2009 dataset by 5-16 for moving objects and nearly 50 for stopped objects, as well as being the first proposed method in wide area motion imagery to detect completely stationary objects." ] }
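The region-search record above reduces a huge frame to a few object-dense chips (ROOBI-style); a toy sketch of that cluster-then-crop idea, using greedy distance-threshold clustering as an illustrative stand-in for the learned proposal stages (all names and thresholds are hypothetical, not the papers' components):

```python
# Toy sketch: group object centers into spatial clusters, then emit one
# padded crop rectangle ("chip") per cluster instead of scanning the frame.

def cluster_centers(centers, radius):
    """Greedy clustering: a center joins the first cluster whose mean lies
    within `radius`, otherwise it starts a new cluster."""
    clusters = []  # list of lists of (x, y) points
    for (x, y) in centers:
        placed = False
        for c in clusters:
            mx = sum(p[0] for p in c) / len(c)
            my = sum(p[1] for p in c) / len(c)
            if (x - mx) ** 2 + (y - my) ** 2 <= radius ** 2:
                c.append((x, y))
                placed = True
                break
        if not placed:
            clusters.append([(x, y)])
    return clusters

def chips_from_clusters(clusters, pad):
    """One axis-aligned crop rectangle per cluster, padded by `pad` pixels."""
    chips = []
    for c in clusters:
        xs = [p[0] for p in c]
        ys = [p[1] for p in c]
        chips.append((min(xs) - pad, min(ys) - pad,
                      max(xs) + pad, max(ys) + pad))
    return chips

# Two dense groups far apart -> two chips instead of one full-frame scan.
centers = [(10, 10), (12, 11), (9, 13), (400, 400), (405, 398)]
chips = chips_from_clusters(cluster_centers(centers, radius=50), pad=16)
```

The payoff mirrors the record above: the detector runs on a handful of small chips rather than the whole high-resolution frame.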
1904.08008
2936000158
Detecting objects in aerial images is challenging for at least two reasons: (1) target objects like pedestrians are very small in terms of pixels, making them hard to be distinguished from surrounding background; and (2) targets are in general very sparsely and nonuniformly distributed, making the detection very inefficient. In this paper we address both issues inspired by the observation that these targets are often clustered. In particular, we propose a Clustered Detection (ClusDet) network that unifies object cluster and detection in an end-to-end framework. The key components in ClusDet include a cluster proposal sub-network (CPNet), a scale estimation sub-network (ScaleNet), and a dedicated detection network (DetecNet). Given an input image, CPNet produces (object) cluster regions and ScaleNet estimates object scales for these regions. Then, each scale-normalized cluster region and their features are fed into DetecNet for object detection. Compared with previous solutions, ClusDet has several advantages: (1) it greatly reduces the number of blocks for final object detection and hence achieves high running time efficiency, (2) the cluster-based scale estimation is more accurate than previously used single-object based ones, hence effectively improves the detection for small objects, and (3) the final DetecNet is dedicated for clustered regions and implicitly models the prior context information so as to boost detection accuracy. The proposed method is tested on three representative aerial image datasets including VisDrone, UAVDT and DOTA. In all the experiments, ClusDet achieves promising performance in both efficiency and accuracy, in comparison with state-of-the-art detectors.
Our Approach. In this paper, we aim at solving the two aforementioned challenges for aerial image detection. Our approach is related to but different from the previous region-search-based detectors (e.g., @cite_34 @cite_31 ), which partition high-resolution images into small uniform chips for detection. In contrast, our solution first predicts cluster regions in the images, and then extracts these clustered regions for fine detection, leading to a significant reduction in computation cost. Although the method in @cite_13 also performs detection on chips that potentially contain objects, our approach significantly differs from it. In @cite_13 , the obtained chips are directly resized to fit the detector for subsequent detection. On the contrary, inspired by the observation in @cite_18 that objects with extreme scales may deteriorate the detection performance, we propose a ScaleNet to alleviate this issue, resulting in improved fine detection on each chip.
{ "cite_N": [ "@cite_31", "@cite_34", "@cite_13", "@cite_18" ], "mid": [ "2963897760", "2963790522", "2774291319", "2768489488" ], "abstract": [ "We introduce a generic framework that reduces the computational cost of object detection while retaining accuracy for scenarios where objects with varied sizes appear in high resolution images. Detection progresses in a coarse-to-fine manner, first on a down-sampled version of the image and then on a sequence of higher resolution regions identified as likely to improve the detection accuracy. Built upon reinforcement learning, our approach consists of a model (R-net) that uses coarse detection results to predict the potential accuracy gain for analyzing a region at a higher resolution and another model (Q-net) that sequentially selects regions to zoom in. Experiments on the Caltech Pedestrians dataset show that our approach reduces the number of processed pixels by over 50 without a drop in detection accuracy. The merits of our approach become more significant on a high resolution test set collected from YFCC100M dataset, where our approach maintains high detection performance while reducing the number of processed pixels by about 70 and the detection time by over 50 .", "State-of-the-art object detection systems rely on an accurate set of region proposals. Several recent methods use a neural network architecture to hypothesize promising object locations. While these approaches are computationally efficient, they rely on fixed image regions as anchors for predictions. In this paper we propose to use a search strategy that adaptively directs computational resources to sub-regions likely to contain objects. Compared to methods based on fixed anchor locations, our approach naturally adapts to cases where object instances are sparse and small. Our approach is comparable in terms of accuracy to the state-of-the-art Faster R-CNN approach while using two orders of magnitude fewer anchors on average. 
Code is publicly available.", "Object detection in wide area motion imagery (WAMI) has drawn the attention of the computer vision research community for a number of years. WAMI proposes a number of unique challenges including extremely small object sizes, both sparse and densely-packed objects, and extremely large search spaces (large video frames). Nearly all state-of-the-art methods in WAMI object detection report that appearance-based classifiers fail in this challenging data and instead rely almost entirely on motion information in the form of background subtraction or frame-differencing. In this work, we experimentally verify the failure of appearance-based classifiers in WAMI, such as Faster R-CNN and a heatmap-based fully convolutional neural network (CNN), and propose a novel two-stage spatio-temporal CNN which effectively and efficiently combines both appearance and motion information to significantly surpass the state-of-the-art in WAMI object detection. To reduce the large search space, the first stage (ClusterNet) takes in a set of extremely large video frames, combines the motion and appearance information within the convolutional architecture, and proposes regions of objects of interest (ROOBI). These ROOBI can contain from one to clusters of several hundred objects due to the large video frame size and varying object density in WAMI. The second stage (FoveaNet) then estimates the centroid location of all objects in that given ROOBI simultaneously via heatmap estimation. The proposed method exceeds state-of-the-art results on the WPAFB 2009 dataset by 5-16 for moving objects and nearly 50 for stopped objects, as well as being the first proposed method in wide area motion imagery to detect completely stationary objects.", "An analysis of different techniques for recognizing and detecting objects under extreme scale variation is presented. 
Scale specific and scale invariant design of detectors are compared by training them with different configurations of input data. By evaluating the performance of different network architectures for classifying small objects on ImageNet, we show that CNNs are not robust to changes in scale. Based on this analysis, we propose to train and test detectors on the same scales of an image-pyramid. Since small and large objects are difficult to recognize at smaller and larger scales respectively, we present a novel training scheme called Scale Normalization for Image Pyramids (SNIP) which selectively back-propagates the gradients of object instances of different sizes as a function of the image scale. On the COCO dataset, our single model performance is 45.7 and an ensemble of 3 networks obtains an mAP of 48.3 . We use off-the-shelf ImageNet-1000 pre-trained models and only train with bounding box supervision. Our submission won the Best Student Entry in the COCO 2017 challenge. Code will be made available at http: bit.ly 2yXVg4c." ] }
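The ScaleNet/SNIP record above argues that extreme object scales hurt detection; a toy sketch of per-chip scale normalization (the target size and clamping range below are illustrative assumptions, not values from the cited papers):

```python
# Toy sketch: pick one resize factor per chip so the median object inside it
# lands near a detector-friendly target size, clamped to a sane range.

def scale_factor(object_sizes, target=64.0, lo=0.5, hi=4.0):
    """Resize factor mapping the (upper) median object size to `target` px,
    clamped to [lo, hi]."""
    sizes = sorted(object_sizes)
    median = sizes[len(sizes) // 2]
    return max(lo, min(hi, target / median))

# Tiny vehicles (~16 px) get upsampled 4x; large ones are capped at 0.5x.
```

Estimating one factor per cluster region, rather than per object, is what the record above claims makes the estimate more stable.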
1904.08042
2937228089
Cross-modal retrieval aims to retrieve relevant data across different modalities (e.g., texts vs. images). The common strategy is to apply element-wise constraints between manually labeled pair-wise items to guide the generators to learn the semantic relationships between the modalities, so that the similar items can be projected close to each other in the common representation subspace. However, such constraints often fail to preserve the semantic structure between unpaired but semantically similar items (e.g. the unpaired items with the same class label are more similar than items with different labels). To address the above problem, we propose a novel cross-modal similarity transferring (CMST) method to learn and preserve the semantic relationships between unpaired items in an unsupervised way. The key idea is to learn the quantitative similarities in single-modal representation subspace, and then transfer them to the common representation subspace to establish the semantic relationships between unpaired items across modalities. Experiments show that our method outperforms the state-of-the-art approaches both in the class-based and pair-based retrieval tasks.
Cross-modal retrieval methods can be roughly divided into joint representation based methods @cite_17 and cross-modal hashing methods @cite_2 @cite_8 . The proposed CMST model falls into the category of joint representation based methods. It aims to learn a real-valued common representation subspace of multimodal data, where cross-modal data can be directly compared to each other through a predefined similarity measurement. Cross-modal retrieval methods like CCA-based methods @cite_5 @cite_13 , LDA-based methods @cite_10 and neural network based methods @cite_16 @cite_17 also fall into this category. On the other hand, the cross-modal hashing methods mainly focus on retrieval efficiency by mapping the items of different modalities into a common binary Hamming space.
{ "cite_N": [ "@cite_8", "@cite_10", "@cite_2", "@cite_5", "@cite_16", "@cite_13", "@cite_17" ], "mid": [ "", "2006031689", "2591669147", "1523385540", "2564325886", "", "2765440071" ], "abstract": [ "", "We present topic-regression multi-modal Latent Dirich-let Allocation (tr-mmLDA), a novel statistical topic model for the task of image and video annotation. At the heart of our new annotation model lies a novel latent variable regression approach to capture correlations between image or video features and annotation texts. Instead of sharing a set of latent topics between the 2 data modalities as in the formulation of correspondence LDA in [2], our approach introduces a regression module to correlate the 2 sets of topics, which captures more general forms of association and allows the number of topics in the 2 data modalities to be different. We demonstrate the power of tr-mmLDA on 2 standard annotation datasets: a 5000-image subset of COREL and a 2687-image LabelMe dataset. The proposed association model shows improved performance over correspondence LDA as measured by caption perplexity.", "Hashing based methods have attracted considerable attention for efficient cross-modal retrieval on large-scale multimedia data. The core problem of cross-modal hashing is how to learn compact binary codes that construct the underlying correlations between heterogeneous features from different modalities. A majority of recent approaches aim at learning hash functions to preserve the pairwise similarities defined by given class labels. However, these methods fail to explicitly explore the discriminative property of class labels during hash function learning. In addition, they usually discard the discrete constraints imposed on the to-be-learned binary codes, and compromise to solve a relaxed problem with quantization to obtain the approximate binary solution. Therefore, the binary codes generated by these methods are suboptimal and less discriminative to different classes. 
To overcome these drawbacks, we propose a novel cross-modal hashing method, termed discrete cross-modal hashing (DCH), which directly learns discriminative binary codes while retaining the discrete constraints. Specifically, DCH learns modality-specific hash functions for generating unified binary codes, and these binary codes are viewed as representative features for discriminative classification with class labels. An effective discrete optimization algorithm is developed for DCH to jointly learn the modality-specific hash function and the unified binary codes. Extensive experiments on three benchmark data sets highlight the superiority of DCH under various cross-modal scenarios and show its state-of-the-art performance.", "We introduce Deep Canonical Correlation Analysis (DCCA), a method to learn complex nonlinear transformations of two views of data such that the resulting representations are highly linearly correlated. Parameters of both transformations are jointly learned to maximize the (regularized) total correlation. It can be viewed as a nonlinear extension of the linear method canonical correlation analysis (CCA). It is an alternative to the nonparametric method kernel canonical correlation analysis (KCCA) for learning correlated nonlinear transformations. Unlike KCCA, DCCA does not require an inner product, and has the advantages of a parametric method: training time scales well with data size and the training data need not be referenced when computing the representations of unseen instances. In experiments on two real-world datasets, we find that DCCA learns representations with significantly higher correlation than those learned by CCA and KCCA. 
We also introduce a novel non-saturating sigmoid function based on the cube root that may be useful more generally in feedforward neural networks.", "In this paper, we propose a new deep coupled metric learning (DCML) method for cross-modal matching, which aims to match samples captured from two different modalities (e.g., texts versus images, visible versus near infrared images). Unlike existing cross-modal matching methods which learn a linear common space to reduce the modality gap, our DCML designs two feedforward neural networks which learn two sets of hierarchical nonlinear transformations (one set for each modality) to nonlinearly map samples from different modalities into a shared latent feature subspace, under which the intraclass variation is minimized and the interclass variation is maximized, and the difference of each data pair captured from two modalities of the same class is minimized, respectively. Experimental results on four different cross-modal matching datasets validate the efficacy of the proposed approach.", "", "Cross-modal retrieval aims to enable flexible retrieval experience across different modalities (e.g., texts vs. images). The core of cross-modal retrieval research is to learn a common subspace where the items of different modalities can be directly compared to each other. In this paper, we present a novel Adversarial Cross-Modal Retrieval (ACMR) method, which seeks an effective common subspace based on adversarial learning. Adversarial learning is implemented as an interplay between two processes. The first process, a feature projector, tries to generate a modality-invariant representation in the common subspace and to confuse the other process, modality classifier, which tries to discriminate between different modalities based on the generated representation. 
We further impose triplet constraints on the feature projector in order to minimize the gap among the representations of all items from different modalities with same semantic labels, while maximizing the distances among semantically different images and texts. Through the joint exploitation of the above, the underlying cross-modal semantic structure of multimedia data is better preserved when this data is projected into the common subspace. Comprehensive experimental results on four widely used benchmark datasets show that the proposed ACMR method is superior in learning effective subspace representation and that it significantly outperforms the state-of-the-art cross-modal retrieval methods." ] }
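Several of the common-subspace methods above (e.g., ACMR) impose triplet constraints; a minimal sketch of the standard triplet hinge loss on embedding vectors (a generic formulation, not the exact loss of any cited paper):

```python
# Generic triplet hinge loss: pull the anchor toward a same-label item from
# the other modality and push it from a different-label item by `margin`.
import math

def triplet_loss(anchor, positive, negative, margin=0.2):
    """max(0, d(a,p) - d(a,n) + margin) with Euclidean distance d."""
    d = lambda u, v: math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return max(0.0, d(anchor, positive) - d(anchor, negative) + margin)

# A well-separated triplet incurs zero loss; a violating one is penalized.
```

Summed over image-to-text and text-to-image triplets, this is the kind of term that preserves semantic structure among unpaired items in the common subspace.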
1904.08042
2937228089
Cross-modal retrieval aims to retrieve relevant data across different modalities (e.g., texts vs. images). The common strategy is to apply element-wise constraints between manually labeled pair-wise items to guide the generators to learn the semantic relationships between the modalities, so that the similar items can be projected close to each other in the common representation subspace. However, such constraints often fail to preserve the semantic structure between unpaired but semantically similar items (e.g. the unpaired items with the same class label are more similar than items with different labels). To address the above problem, we propose a novel cross-modal similarity transferring (CMST) method to learn and preserve the semantic relationships between unpaired items in an unsupervised way. The key idea is to learn the quantitative similarities in single-modal representation subspace, and then transfer them to the common representation subspace to establish the semantic relationships between unpaired items across modalities. Experiments show that our method outperforms the state-of-the-art approaches both in the class-based and pair-based retrieval tasks.
More recently, methods have been proposed to explore the semantic relationships between unpaired items. The ACMR method @cite_17 proposes a triplet loss and a modality classifier for preserving modality-level semantic structure. MHTN @cite_9 is proposed to minimize the maximum mean discrepancy between modalities, which leaves the generator more flexibility to project vectors into a new space. The difference between CMST and the previous work is that CMST can learn the item-level semantic relationships between unpaired items in an unsupervised way.
{ "cite_N": [ "@cite_9", "@cite_17" ], "mid": [ "2747647654", "2765440071" ], "abstract": [ "Cross-modal retrieval has drawn wide interest for retrieval across different modalities of data. However, existing methods based on DNN face the challenge of insufficient cross-modal training data, which limits the training effectiveness and easily leads to overfitting. Transfer learning is for relieving the problem of insufficient training data, but it mainly focuses on knowledge transfer only from large-scale datasets as single-modal source domain to single-modal target domain. Such large-scale single-modal datasets also contain rich modal-independent semantic knowledge that can be shared across different modalities. Besides, large-scale cross-modal datasets are very labor-consuming to collect and label, so it is significant to fully exploit the knowledge in single-modal datasets for boosting cross-modal retrieval. This paper proposes modal-adversarial hybrid transfer network (MHTN), which to the best of our knowledge is the first work to realize knowledge transfer from single-modal source domain to cross-modal target domain, and learn cross-modal common representation. It is an end-to-end architecture with two subnetworks: (1) Modal-sharing knowledge transfer subnetwork is proposed to jointly transfer knowledge from a large-scale single-modal dataset in source domain to all modalities in target domain with a star network structure, which distills modal-independent supplementary knowledge for promoting cross-modal common representation learning. (2) Modal-adversarial semantic learning subnetwork is proposed to construct an adversarial training mechanism between common representation generator and modality discriminator, making the common representation discriminative for semantics but indiscriminative for modalities to enhance cross-modal semantic consistency during transfer process. 
Comprehensive experiments on 4 widely-used datasets show its effectiveness and generality.", "Cross-modal retrieval aims to enable flexible retrieval experience across different modalities (e.g., texts vs. images). The core of cross-modal retrieval research is to learn a common subspace where the items of different modalities can be directly compared to each other. In this paper, we present a novel Adversarial Cross-Modal Retrieval (ACMR) method, which seeks an effective common subspace based on adversarial learning. Adversarial learning is implemented as an interplay between two processes. The first process, a feature projector, tries to generate a modality-invariant representation in the common subspace and to confuse the other process, modality classifier, which tries to discriminate between different modalities based on the generated representation. We further impose triplet constraints on the feature projector in order to minimize the gap among the representations of all items from different modalities with same semantic labels, while maximizing the distances among semantically different images and texts. Through the joint exploitation of the above, the underlying cross-modal semantic structure of multimedia data is better preserved when this data is projected into the common subspace. Comprehensive experimental results on four widely used benchmark datasets show that the proposed ACMR method is superior in learning effective subspace representation and that it significantly outperforms the state-of-the-art cross-modal retrieval methods." ] }
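MHTN above minimizes a maximum mean discrepancy between modality distributions; as a toy illustration (not MHTN's actual objective, which uses richer kernels inside a deep network), the simplest linear-kernel MMD estimate is just the squared distance between the two samples' feature means:

```python
# Toy linear-kernel MMD: squared Euclidean distance between the mean
# embeddings of two samples (e.g., image features vs. text features).

def mmd_linear(xs, ys):
    """||mean(xs) - mean(ys)||^2 for equal-dimension feature vectors."""
    dim = len(xs[0])
    mx = [sum(v[i] for v in xs) / len(xs) for i in range(dim)]
    my = [sum(v[i] for v in ys) / len(ys) for i in range(dim)]
    return sum((a - b) ** 2 for a, b in zip(mx, my))

# Matching means give 0; a shift between modalities gives its squared norm.
```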
1904.07839
2937925769
In this paper we revisit the problem of automatically identifying hate speech in posts from social media. We approach the task using a system based on minimalistic compositional Recurrent Neural Networks (RNN). We tested our approach on the SemEval-2019 Task 5: Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter (HatEval) shared task dataset. The dataset made available by the HatEval organizers contained English and Spanish posts retrieved from Twitter annotated with respect to the presence of hateful content and its target. In this paper we present the results obtained by our system in comparison to the other entries in the shared task. Our system achieved competitive performance ranking 7th in sub-task A out of 62 systems in the English track.
As evidenced in the introduction of this paper, there have been a number of studies on automatic hate speech identification published in the last few years. One of the most influential recent papers on hate speech identification is the one by . In this paper, the authors presented the Hate Speech Detection dataset which contains posts retrieved from social media labeled with three categories: OK (posts not containing profanity or hate speech), Offensive (posts containing swear words and general profanity), and Hate (posts containing hate speech). It has been noted in , and in other works @cite_5 , that training models to discriminate between general profanity and hate speech is far from trivial due to, for example, the fact that a significant percentage of hate speech posts contain swear words. It has been argued that annotating texts with respect to the presence of hate speech has an intrinsic degree of subjectivity @cite_5 .
{ "cite_N": [ "@cite_5" ], "mid": [ "2962932155" ], "abstract": [ "AbstractIn this study, we approach the problem of distinguishing general profanity from hate speech in social media, something which has not been widely considered. Using a new dataset annotated specifically for this task, we employ supervised classification along with a set of features that includes -grams, skip-grams and clustering-based word representations. We apply approaches based on single classifiers as well as more advanced ensemble classifiers and stacked generalisation, achieving the best result of accuracy for this 3-class classification task. Analysis of the results reveals that discriminating hate speech and profanity is not a simple task, which may require features that capture a deeper understanding of the text not always possible with surface -grams. The variability of gold labels in the annotated data, due to differences in the subjective adjudications of the annotators, is also an issue. Other directions for future work are discussed." ] }
1904.08003
2937094500
This paper presents a formal definition and explicit representation of robot motion risk. Currently, robot motion risk has not been formally defined, but has already been used in motion and path planning. Risk is either implicitly represented as model uncertainty using probabilistic approaches, where the definition of risk is somewhat avoided, or explicitly modeled as a simple function of states, without a formal definition. In this work, we provide formal reasoning behind what risk is for robot motion and propose a formal definition of risk in terms of a sequence of motion, namely path. Mathematical approaches to represent motion risk are also presented, which is in accordance with our risk definition and properties. The definition and representation of risk provide a meaningful way to evaluate or construct robot motion or path plans. The understanding of risk is even of greater interest for the search and rescue community: the deconstructed environments cast extra risk onto the robot, since they are working under extreme conditions. A proper risk representation has the potential to reduce robot failure in the field.
Although researchers have investigated risk-aware planning from a variety of directions, including Partially Observable Markov Decision Processes (POMDPs) with negative reward as a risk penalty @cite_5 , constrained POMDPs @cite_8 , and risk allocation @cite_4 , the definition of risk was always taken for granted and based only on the ad hoc application at hand.
{ "cite_N": [ "@cite_5", "@cite_4", "@cite_8" ], "mid": [ "1823979212", "2127788848", "2110790702" ], "abstract": [ "Recent advances in Autonomous Underwater Vehicle (AUV) technology have facilitated the collection of oceanographic data at a fraction of the cost of ship-based sampling methods. Unlike oceanographic data collection in the deep ocean, operation of AUVs in coastal regions exposes them to the risk of collision with ships and land. Such concerns are particularly prominent for slow-moving AUVs since ocean current magnitudes are often strong enough to alter the planned path significantly. Prior work using predictive ocean currents relies upon deterministic outcomes, which do not account for the uncertainty in the ocean current predictions themselves. To improve the safety and reliability of AUV operation in coastal regions, we introduce two stochastic planners: (a) a Minimum Expected Risk planner and (b) a risk-aware Markov Decision Process, both of which have the ability to utilize ocean current predictions probabilistically. We report results from extensive simulation studies in realistic ocean current fields obtained from widely used regional ocean models. Our simulations show that the proposed planners have lower collision risk than state-of-the-art methods. We present additional results from field experiments where ocean current predictions were used to plan the paths of two Slocum gliders. Field trials indicate the practical usefulness of our techniques over long-term deployments, showing them to be ideal for AUV operations.", "This paper proposes a novel two-stage optimization method for robust model predictive control (RMPC) with Gaussian disturbance and state estimation error. Since the disturbance is unbounded, it is impossible to achieve zero probability of violating constraints. 
Our goal is to optimize the expected value of an objective function while limiting the probability of violating any constraints over the planning horizon (joint chance constraint). Prior arts include ellipsoidal relaxation approach [1] and particle control [2], but the former yields very conservative result and the latter is computationally intensive. Our approach divide the optimization problem into two stages; the upper-stage that optimizes risk allocation, and the lower-stage that optimizes control sequence with tightened constraints. The lower-stage is a regular convex optimization, such as linear programming or quadratic programming. The upper-stage is also convex, but the objective function is not always differentiable. We developed a fast descent algorithm for the upper-stage called iterative risk allocation (IRA), which yield much smaller suboptimality than ellipsoidal relaxation method while achieving a substantial speedup compared to and particle control.", "This work seeks to address the problem of planning in the presence of uncertainty and constraints. Such problems arise in many situations, including the basis of this work, which involves planning for a team of first responders (both humans and robots) operating in an urban environment. The problem is framed as a Partially-Observable Markov Decision Process (POMDP) with constraints, and it is shown that even in a relatively simple planning problem, modeling constraints as large penalties does not lead to good solutions. The main contribution of the work is a new online algorithm that explicitly ensures constraint feasibility while remaining computationally tractable. Its performance is demonstrated on an example problem and it is demonstrated that our online algorithm generates policies comparable to an offline constrained POMDP algorithm." ] }
1904.08003
2937094500
This paper presents a formal definition and explicit representation of robot motion risk. Currently, robot motion risk has not been formally defined, but has already been used in motion and path planning. Risk is either implicitly represented as model uncertainty using probabilistic approaches, where the definition of risk is somewhat avoided, or explicitly modeled as a simple function of states, without a formal definition. In this work, we provide formal reasoning behind what risk is for robot motion and propose a formal definition of risk in terms of a sequence of motion, namely path. Mathematical approaches to represent motion risk are also presented, which is in accordance with our risk definition and properties. The definition and representation of risk provide a meaningful way to evaluate or construct robot motion or path plans. The understanding of risk is even of greater interest for the search and rescue community: the deconstructed environments cast extra risk onto the robot, since they are working under extreme conditions. A proper risk representation has the potential to reduce robot failure in the field.
Researchers have also looked at explicit representations of risk, although a formal definition of risk is still lacking. @cite_2 proposed risk as an accumulative parameterized function of distance to threats. A similar approach was taken by @cite_6 , where a risk map was generated from the ground orography and the risk of each location could be looked up from the map. These works based risk only on the distance to some ad hoc hazards. @cite_11 further represented the risk of navigating construction sites in terms of not only the distance to hazard zones but also a visibility value, as a function of state. All of these approaches treated risk as a function of state (or location) on the path, and the total motion risk of executing that path was simply the summation of all states' risks (Fig. ). They focused on only one very specific form of risk, neglecting other, more general forms.
{ "cite_N": [ "@cite_11", "@cite_6", "@cite_2" ], "mid": [ "2012109553", "2108400488", "2043480919" ], "abstract": [ "Movement of materials, plant and site operative from one place to another on construction sites and construction workplaces are of paramount importance to site planners as savings in travel distance can reduce cost and increase productivity. In addition, risks on construction sites can be reduced if the use of vehicles and mobile plant is properly managed by setting out paths avoiding high risks areas. The work reported in this paper presents a framework for supporting path planning analysis of construction sites based on multi-objective evaluation of transport cost, safety, and visibility. This paper investigates the use of fuzzy-based multi-objective optimisation approach in making a more informed strategic decisions regarding the movement path of people and vehicles on construction sites, and detailed decisions regarding travel distance and operational paths on workplaces, enabling site planners to examine paths scenarios that are subjected to a high degree of uncertainty and subjectivity.", "In the last years, the Flight Mechanics Research Group of Politecnico di Torino started a wide research activity, focused on exploration and implementation of path planning algorithms for commercial autopilots, typically adopted on unmanned vehicles. Different path planning approaches were implemented in a Matlab Simulink based tool, generating waypoints sequences with four methods: geometric predefined trajectories, manual waypoints definition, automatic waypoints distribution (i.e. optimizing camera payload capabilities) and, finally, a comprehensive A*-based approach. The tool was also integrated with functions managing the maps used for planning. In this paper, two approaches to path planning in presence of orographic obstacles are detailed. 
The first algorithm is subdivided in three phases: the generation of a risk map associated with the ground orography, the transformation of the map in a digraph analyzed with the A* algorithm (to obtain the path with minimum risk minimum distance) and finally the smoothing phase, to obtain a flyable waypoint sequence, realized with the Dubins curves. The structure of this method was defined and implemented, but its optimization is still in progress. The second algorithm is based on the same risk map, but optimizing polynomial curves with a genetic algorithm. This method produces a flyable waypoints sequence, minimizing a cost function reflecting path length and collision risk. An extension of this path solver, including aircraft performance constraints, was also considered. This method was tested on sample domains and its computational cost has still to be evaluated before the implementation in the tool.", "In developing autonomy of uninhabited air vehicles (UAV), an important part is to design automatically a \"safe\" and \"optimal\" flight path (a series of waypoints) for a UAV to fly from one point to another in the presence of threats. This paper describes five different approaches to generate such a path. Furthermore, comparisons are drawn with regard to convergence, information requested, computational demand, etc. of each approach." ] }
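The state-summation formulation criticized here (total motion risk as a sum of per-state risks, each a function of distance to a hazard) can be sketched in a few lines; the hazard location, the exponential decay, and the path below are illustrative assumptions, not any cited model:

```python
import numpy as np

# Hypothetical risk map: per-state risk depends only on the distance
# to a single hazard, as in the distance-based formulations above.
hazard = np.array([2.0, 2.0])

def state_risk(state):
    """Ad hoc per-state risk that decays with distance to the hazard."""
    d = np.linalg.norm(np.asarray(state, dtype=float) - hazard)
    return float(np.exp(-d))

def path_risk(path):
    """Total motion risk as the summation of all states' risks."""
    return sum(state_risk(s) for s in path)

path = [(0, 0), (1, 0), (2, 0), (2, 1)]
total = path_risk(path)
```

Note that under this formulation risk can only accumulate: appending any state to a path increases its total risk, which is one reason the summation view captures only a narrow form of risk.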
1904.08050
2935694433
Dropout is commonly used to help reduce overfitting in deep neural networks. Sparsity is a potentially important property of neural networks, but is not explicitly controlled by Dropout-based regularization. In this work, we propose Sparseout a simple and efficient variant of Dropout that can be used to control the sparsity of the activations in a neural network. We theoretically prove that Sparseout is equivalent to an (L_q ) penalty on the features of a generalized linear model and that Dropout is a special case of Sparseout for neural networks. We empirically demonstrate that Sparseout is computationally inexpensive and is able to control the desired level of sparsity in the activations. We evaluated Sparseout on image classification and language modelling tasks to see the effect of sparsity on these tasks. We found that sparsity of the activations is favorable for language modelling performance while image classification benefits from denser activations. Sparseout provides a way to investigate sparsity in state-of-the-art deep learning models. Source code for Sparseout could be found at https: github.com najeebkhan sparseout.
Because deep neural networks are over-parameterized, they suffer from large generalization error, especially when the dataset is relatively small. This phenomenon is known as overfitting. Generalization error is upper bounded by the model complexity @cite_32 , so overfitting can be reduced by controlling the complexity of the model.
{ "cite_N": [ "@cite_32" ], "mid": [ "607505555" ], "abstract": [ "Machine learning is one of the fastest growing areas of computer science, with far-reaching applications. The aim of this textbook is to introduce machine learning, and the algorithmic paradigms it offers, in a principled way. The book provides an extensive theoretical account of the fundamental ideas underlying machine learning and the mathematical derivations that transform these principles into practical algorithms. Following a presentation of the basics of the field, the book covers a wide array of central topics that have not been addressed by previous textbooks. These include a discussion of the computational complexity of learning and the concepts of convexity and stability; important algorithmic paradigms including stochastic gradient descent, neural networks, and structured output learning; and emerging theoretical concepts such as the PAC-Bayes approach and compression-based bounds. Designed for an advanced undergraduate or beginning graduate course, the text makes the fundamentals and algorithms of machine learning accessible to students and non-expert readers in statistics, computer science, mathematics, and engineering." ] }
1904.08050
2935694433
Dropout is commonly used to help reduce overfitting in deep neural networks. Sparsity is a potentially important property of neural networks, but is not explicitly controlled by Dropout-based regularization. In this work, we propose Sparseout a simple and efficient variant of Dropout that can be used to control the sparsity of the activations in a neural network. We theoretically prove that Sparseout is equivalent to an (L_q ) penalty on the features of a generalized linear model and that Dropout is a special case of Sparseout for neural networks. We empirically demonstrate that Sparseout is computationally inexpensive and is able to control the desired level of sparsity in the activations. We evaluated Sparseout on image classification and language modelling tasks to see the effect of sparsity on these tasks. We found that sparsity of the activations is favorable for language modelling performance while image classification benefits from denser activations. Sparseout provides a way to investigate sparsity in state-of-the-art deep learning models. Source code for Sparseout could be found at https: github.com najeebkhan sparseout.
One way to control the complexity of a model is to impose constraints on its parameters, such as the weights of a neural network. Such regularization methods can be classified into deterministic and stochastic methods. Deterministic methods either remove redundant weights @cite_6 or penalize large-magnitude weights. Weight penalties are imposed by adding to the loss function a regularization term consisting of a norm of the weight matrix @cite_0 .
{ "cite_N": [ "@cite_0", "@cite_6" ], "mid": [ "2962857907", "2963674932" ], "abstract": [ "We investigate the capacity, convexity and characterization of a general family of norm-constrained feed-forward networks.", "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy." ] }
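The norm-based weight penalty described above can be made concrete with ridge (L2) regression, where the penalized loss has a closed-form minimizer; the data and the value of λ below are illustrative:

```python
import numpy as np

def ridge_loss(w, X, y, lam=0.1):
    """Squared-error loss plus an L2 penalty on the weights."""
    residual = X @ w - y
    return 0.5 * residual @ residual + lam * np.sum(w ** 2)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
y = X @ np.ones(5) + 0.1 * rng.normal(size=20)

# Minimizer: gradient X^T (X w - y) + 2*lam*w = 0
# => (X^T X + 2*lam*I) w = X^T y
lam = 0.1
w_ridge = np.linalg.solve(X.T @ X + 2 * lam * np.eye(5), X.T @ y)
```

For any lam > 0 the penalized solution is shrunk toward zero relative to the unregularized least-squares fit, which is the complexity-control effect the weight penalty is meant to provide.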
1904.08050
2935694433
Dropout is commonly used to help reduce overfitting in deep neural networks. Sparsity is a potentially important property of neural networks, but is not explicitly controlled by Dropout-based regularization. In this work, we propose Sparseout a simple and efficient variant of Dropout that can be used to control the sparsity of the activations in a neural network. We theoretically prove that Sparseout is equivalent to an (L_q ) penalty on the features of a generalized linear model and that Dropout is a special case of Sparseout for neural networks. We empirically demonstrate that Sparseout is computationally inexpensive and is able to control the desired level of sparsity in the activations. We evaluated Sparseout on image classification and language modelling tasks to see the effect of sparsity on these tasks. We found that sparsity of the activations is favorable for language modelling performance while image classification benefits from denser activations. Sparseout provides a way to investigate sparsity in state-of-the-art deep learning models. Source code for Sparseout could be found at https: github.com najeebkhan sparseout.
Stochastic methods randomly perturb the weights so as to achieve minimal co-dependency between neurons @cite_34 @cite_14 @cite_29 while regularizing the model at the same time. Stochastic regularization has become standard practice in training deep learning models and has outperformed deterministic regularization methods on many tasks. Stochastic regularization techniques have a Bayesian model-averaging interpretation and, for linear models, are equivalent to weight penalties. In terms of Bayesian estimation, weight penalties are equivalent to imposing a prior distribution on the model weights.
{ "cite_N": [ "@cite_14", "@cite_29", "@cite_34" ], "mid": [ "2479750863", "2798393061", "2095705004" ], "abstract": [ "Recent years have witnessed the success of deep neural networks in dealing with a plenty of practical problems. The invention of effective training techniques largely contributes to this success. The so-called \"Dropout\" training scheme is one of the most powerful tool to reduce over-fitting. From the statistic point of view, Dropout works by implicitly imposing an L2 regularizer on the weights. In this paper, we present a new training scheme: Shakeout. Instead of randomly discarding units as Dropout does at the training stage, our method randomly chooses to enhance or inverse the contributions of each unit to the next layer. We show that our scheme leads to a combination of L1 regularization and L2 regularization imposed on the weights, which has been proved effective by the Elastic Net models in practice. We have empirically evaluated the Shakeout scheme and demonstrated that sparse network weights are obtained via Shakeout training. Our classification experiments on real-life image datasets MNIST and CIFAR-10 show that Shakeout deals with over-fitting effectively.", "A major challenge in training deep neural networks is overfitting, i.e. inferior performance on unseen test examples compared to performance on training examples. To reduce overfitting, stochastic regularization methods have shown superior performance compared to deterministic weight penalties on a number of image recognition tasks. Stochastic methods such as Dropout and Shakeout, in expectation, are equivalent to imposing a ridge and elastic-net penalty on the model parameters, respectively. However, the choice of the norm of weight penalty is problem dependent and is not restricted to @math . 
Therefore, in this paper we propose the Bridgeout stochastic regularization technique and prove that it is equivalent to an @math penalty on the weights, where the norm @math can be learned as a hyperparameter from data. Experimental results show that Bridgeout results in sparse model weights, improved gradients and superior classification performance compared to Dropout and Shakeout on synthetic and real datasets.", "Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different \"thinned\" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets." ] }
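Inverted Dropout, the canonical stochastic regularizer discussed above, can be sketched as follows; this is a minimal NumPy version for illustration (the drop probability is arbitrary), not the implementation of any cited work:

```python
import numpy as np

def dropout(x, p_drop=0.5, rng=None, train=True):
    """Inverted Dropout: zero each unit with probability p_drop and
    rescale survivors by 1/(1 - p_drop), so the expected activation
    is unchanged; acts as the identity at test time."""
    if not train or p_drop == 0.0:
        return x
    rng = rng if rng is not None else np.random.default_rng()
    mask = rng.random(x.shape) >= p_drop
    return x * mask / (1.0 - p_drop)
```

The rescaling is what makes a single unthinned network at test time approximate the average of the exponentially many thinned networks sampled during training.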
1904.08050
2935694433
Dropout is commonly used to help reduce overfitting in deep neural networks. Sparsity is a potentially important property of neural networks, but is not explicitly controlled by Dropout-based regularization. In this work, we propose Sparseout a simple and efficient variant of Dropout that can be used to control the sparsity of the activations in a neural network. We theoretically prove that Sparseout is equivalent to an (L_q ) penalty on the features of a generalized linear model and that Dropout is a special case of Sparseout for neural networks. We empirically demonstrate that Sparseout is computationally inexpensive and is able to control the desired level of sparsity in the activations. We evaluated Sparseout on image classification and language modelling tasks to see the effect of sparsity on these tasks. We found that sparsity of the activations is favorable for language modelling performance while image classification benefits from denser activations. Sparseout provides a way to investigate sparsity in state-of-the-art deep learning models. Source code for Sparseout could be found at https: github.com najeebkhan sparseout.
Besides the weight-penalty interpretation, another reason for the effectiveness of stochastic regularization methods could be the prevention of correlated activations. It has been shown that high correlation between the activations of neurons results in overfitting. DeCov @cite_1 reduces overfitting by adding a penalty term to the cost function consisting of the covariances among the activations of the neurons over a mini-batch.
{ "cite_N": [ "@cite_1" ], "mid": [ "2181083374" ], "abstract": [ "One major challenge in training Deep Neural Networks is preventing overfitting. Many techniques such as data augmentation and novel regularizers such as Dropout have been proposed to prevent overfitting without requiring a massive amount of training data. In this work, we propose a new regularizer called DeCov which leads to significantly reduced overfitting (as indicated by the difference between train and val performance), and better generalization. Our regularizer encourages diverse or non-redundant representations in Deep Neural Networks by minimizing the cross-covariance of hidden activations. This simple intuition has been explored in a number of past works but surprisingly has never been applied as a regularizer in supervised learning. Experiments across a range of datasets and network architectures show that this loss always reduces overfitting while almost always maintaining or increasing generalization performance and often improving performance over Dropout." ] }
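The DeCov regularizer summarized above can be sketched in a few lines: it penalizes the squared off-diagonal entries of the minibatch covariance of the activations. This is a simplified reading of @cite_1 , and the example activations in the test are made up:

```python
import numpy as np

def decov_penalty(h):
    """DeCov-style penalty on activations h of shape (batch, units):
    half the squared Frobenius norm of the minibatch covariance with
    the diagonal (per-unit variances) excluded, so only the
    cross-covariances between different units are penalized."""
    hc = h - h.mean(axis=0, keepdims=True)   # center over the batch
    cov = hc.T @ hc / h.shape[0]             # (units, units) covariance
    return 0.5 * (np.sum(cov ** 2) - np.sum(np.diag(cov) ** 2))
```

By construction the penalty is zero for decorrelated units and grows as units duplicate one another, directly discouraging redundant representations.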
1904.08050
2935694433
Dropout is commonly used to help reduce overfitting in deep neural networks. Sparsity is a potentially important property of neural networks, but is not explicitly controlled by Dropout-based regularization. In this work, we propose Sparseout a simple and efficient variant of Dropout that can be used to control the sparsity of the activations in a neural network. We theoretically prove that Sparseout is equivalent to an (L_q ) penalty on the features of a generalized linear model and that Dropout is a special case of Sparseout for neural networks. We empirically demonstrate that Sparseout is computationally inexpensive and is able to control the desired level of sparsity in the activations. We evaluated Sparseout on image classification and language modelling tasks to see the effect of sparsity on these tasks. We found that sparsity of the activations is favorable for language modelling performance while image classification benefits from denser activations. Sparseout provides a way to investigate sparsity in state-of-the-art deep learning models. Source code for Sparseout could be found at https: github.com najeebkhan sparseout.
Another approach to controlling model complexity, inspired by sparse coding @cite_20 , is to impose a constraint on the activations of the neural network @cite_17 . To encourage sparsity of the activations, an @math norm of the activations is added to the cost function @cite_15 . Another form of penalty adds the KL-divergence between the expected activations and a preset target sparsity value @math @cite_12 . A clustering approach has also been used to obtain sparse representations by encouraging activations to form clusters @cite_7 .
{ "cite_N": [ "@cite_7", "@cite_15", "@cite_20", "@cite_12", "@cite_17" ], "mid": [ "2560977758", "2133257461", "2105464873", "44815768", "2189183509" ], "abstract": [ "In this paper we aim at facilitating generalization for deep networks while supporting interpretability of the learned representations. Towards this goal, we propose a clustering based regularization that encourages parsimonious representations. Our k-means style objective is easy to optimize and flexible supporting various forms of clustering, including sample and spatial clustering as well as co-clustering. We demonstrate the effectiveness of our approach on the tasks of unsupervised learning, classification, fine grained categorization and zero-shot learning.", "Motivated in part by the hierarchical organization of the cortex, a number of algorithms have recently been proposed that try to learn hierarchical, or \"deep,\" structure from unlabeled data. While several authors have formally or informally compared their algorithms to computations performed in visual area V1 (and the cochlea), little attempt has been made thus far to evaluate these algorithms in terms of their fidelity for mimicking computations at deeper levels in the cortical hierarchy. This paper presents an unsupervised learning model that faithfully mimics certain properties of visual area V2. Specifically, we develop a sparse variant of the deep belief networks of (2006). We learn two layers of nodes in the network, and demonstrate that the first layer, similar to prior work on sparse coding and ICA, results in localized, oriented, edge filters, similar to the Gabor functions known to model V1 cell receptive fields. Further, the second layer in our model encodes correlations of the first layer responses in the data. Specifically, it picks up both colinear (\"contour\") features as well as corners and junctions. 
More interestingly, in a quantitative comparison, the encoding of these more complex \"corner\" features matches well with the results from the Ito & Komatsu's study of biological V2 responses. This suggests that our sparse variant of deep belief networks holds promise for modeling more higher-order features.", "The spatial receptive fields of simple cells in mammalian striate cortex have been reasonably well described physiologically and can be characterized as being localized, oriented, and bandpass, comparable with the basis functions of wavelet transforms. Previously, we have shown that these receptive field properties may be accounted for in terms of a strategy for producing a sparse distribution of output activity in response to natural images. Here, in addition to describing this work in a more expansive fashion, we examine the neurobiological implications of sparse coding. Of particular interest is the case when the code is overcomplete--i.e., when the number of code elements is greater than the effective dimensionality of the input space. Because the basis functions are non-orthogonal and not linearly independent of each other, sparsifying the code will recruit only those basis functions necessary for representing a given input, and so the input-output function will deviate from being purely linear. These deviations from linearity provide a potential explanation for the weak forms of non-linearity observed in the response properties of cortical simple cells, and they further make predictions about the expected interactions among units in response to naturalistic stimuli. © 1997 Elsevier Science Ltd", "Restricted Boltzmann machines (RBMs) have been used as generative models of many different types of data. RBMs are usually trained using the contrastive divergence learning procedure. This requires a certain amount of practical experience to decide how to set the values of numerical meta-parameters.
Over the last few years, the machine learning group at the University of Toronto has acquired considerable expertise at training RBMs and this guide is an attempt to share this expertise with other machine learning researchers.", "Sparseness is a useful regularizer for learning in a wide range of applications, in particular in neural networks. This paper proposes a model targeted at classification tasks, where sparse activity and sparse connectivity are used to enhance classification capabilities. The tool for achieving this is a sparseness-enforcing projection operator which finds the closest vector with a pre-defined sparseness for any given vector. In the theoretical part of this paper, a comprehensive theory for such a projection is developed. In conclusion, it is shown that the projection is differentiable almost everywhere and can thus be implemented as a smooth neuronal transfer function. The entire model can hence be tuned end-to-end using gradient-based methods. Experiments on the MNIST database of handwritten digits show that classification performance can be boosted by sparse activity or sparse connectivity. With a combination of both, performance can be significantly better compared to classical non-sparse approaches." ] }
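The two activation-sparsity penalties mentioned above can be written out directly: an L1 norm on the activations, and the KL divergence between a target sparsity ρ and each unit's mean activation over a batch. The penalty weight, the value of ρ, and the assumption that activations lie in (0, 1) are illustrative:

```python
import numpy as np

def l1_activation_penalty(h, lam=1e-3):
    """L1 penalty on activations, encouraging many (near-)zero units."""
    return lam * np.sum(np.abs(h))

def kl_sparsity_penalty(h, rho=0.05, eps=1e-8):
    """KL divergence between a target sparsity rho and each unit's
    mean activation over the batch (activations assumed in (0, 1))."""
    rho_hat = np.clip(h.mean(axis=0), eps, 1.0 - eps)
    return float(np.sum(rho * np.log(rho / rho_hat)
                        + (1 - rho) * np.log((1 - rho) / (1 - rho_hat))))
```

The KL penalty vanishes exactly when every unit's mean activation equals ρ, and both penalties are deterministic functions of the activations, which is the property the paper contrasts with stochastic regularization.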
1904.08050
2935694433
Dropout is commonly used to help reduce overfitting in deep neural networks. Sparsity is a potentially important property of neural networks, but is not explicitly controlled by Dropout-based regularization. In this work, we propose Sparseout a simple and efficient variant of Dropout that can be used to control the sparsity of the activations in a neural network. We theoretically prove that Sparseout is equivalent to an (L_q ) penalty on the features of a generalized linear model and that Dropout is a special case of Sparseout for neural networks. We empirically demonstrate that Sparseout is computationally inexpensive and is able to control the desired level of sparsity in the activations. We evaluated Sparseout on image classification and language modelling tasks to see the effect of sparsity on these tasks. We found that sparsity of the activations is favorable for language modelling performance while image classification benefits from denser activations. Sparseout provides a way to investigate sparsity in state-of-the-art deep learning models. Source code for Sparseout could be found at https: github.com najeebkhan sparseout.
Another related technique that normalizes activations in the network so as to have zero mean and unit variance is Batchnorm @cite_35 . Although the primary purpose of Batchnorm is to accelerate training of the neural network rather than to regularize it, Batchnorm has reduced the need for stochastic regularization in certain domains. The above-mentioned sparsity-inducing methods are deterministic and thus may result in correlated activations. In this paper we propose Sparseout, which implicitly imposes an @math penalty on the activations, thus allowing us to choose the level of sparsity in the activations as well as the stochasticity preventing correlated neural activities. Sparseout differs from Bridgeout @cite_29 in that it is applied to activations rather than the weights. Therefore, Sparseout is orders of magnitude faster and more practical than Bridgeout. We believe that Sparseout is the first theoretically-motivated technique that is capable of simultaneously controlling sparsity in activations and reducing correlations between them, besides being equivalent to Dropout for @math .
{ "cite_N": [ "@cite_35", "@cite_29" ], "mid": [ "1836465849", "2798393061" ], "abstract": [ "Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization, and in some cases eliminates the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.82 top-5 test error, exceeding the accuracy of human raters.", "A major challenge in training deep neural networks is overfitting, i.e. inferior performance on unseen test examples compared to performance on training examples. To reduce overfitting, stochastic regularization methods have shown superior performance compared to deterministic weight penalties on a number of image recognition tasks. Stochastic methods such as Dropout and Shakeout, in expectation, are equivalent to imposing a ridge and elastic-net penalty on the model parameters, respectively. However, the choice of the norm of weight penalty is problem dependent and is not restricted to @math . 
Therefore, in this paper we propose the Bridgeout stochastic regularization technique and prove that it is equivalent to an @math penalty on the weights, where the norm @math can be learned as a hyperparameter from data. Experimental results show that Bridgeout results in sparse model weights, improved gradients and superior classification performance compared to Dropout and Shakeout on synthetic and real datasets." ] }
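Sparseout, per the record above, generalizes Dropout to an L_q penalty on activations and reduces to Dropout at q = 2. As a reference point only, here is a minimal numpy sketch of that standard inverted-Dropout special case; the Sparseout perturbation itself is not reproduced here:

```python
import numpy as np

def inverted_dropout(a, p=0.5, rng=None):
    """Standard inverted Dropout: zero each activation with probability p and
    rescale survivors by 1/(1-p), so the expected activation is unchanged.
    This is the q = 2 baseline that Sparseout (per the text) generalizes."""
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(a.shape) >= p          # keep with probability 1 - p
    return a * mask / (1.0 - p)

rng = np.random.default_rng(0)
a = np.ones((1000, 100))
out = inverted_dropout(a, p=0.3, rng=rng)
print(round(out.mean(), 2))                  # close to 1.0 in expectation
```

The rescaling at train time is what lets the network be used unchanged at test time.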
1904.07976
2938238363
Intricate cardiac complexities are a primary factor in healthcare costs and the leading cause of death in the world. However, preventive measures like the early detection of cardiac anomalies can prevent severe cardiovascular arrests of varying complexities and can impose a substantial impact on healthcare cost. In such scenarios, the electrocardiogram (ECG or EKG) is usually the first diagnostic choice of a medical practitioner or clinical staff to measure the electrical and muscular fitness of an individual heart. This paper presents a system which is capable of reading a recorded ECG and predicting cardiac anomalies without the intervention of a human expert. The paper proposes an algorithm which reads and performs analysis on electrocardiogram datasets. The proposed architecture first uses the Discrete Wavelet Transform (DWT) to preprocess the ECG data, followed by the undecimated Wavelet transform (UWT) to extract nine relevant features which are of high interest to a cardiologist. The probabilistic model named Bayesian Network Classifier is trained using the extracted nine parameters on the UCL arrhythmia dataset. The proposed system classifies a recorded heartbeat into four classes using the Bayesian Network classifier and Tukey's box analysis. The four classes for the prediction of a heartbeat are (a) Normal Beat, (b) Premature Ventricular Contraction (PVC), (c) Premature Atrial Contraction (PAC) and (d) Myocardial Infarction. The results of the experimental setup show that the proposed system has achieved an average accuracy of 96.6% for PAC, 92.8% for MI and 87% for PVC, with an average error rate of 3.3% for PAC, 6% for MI and 12.5% for PVC on real electrocardiogram datasets including Physionet and the European ST-T Database (EDB).
Numerous vendors and nano-tech industries have introduced different varieties of WBANs and ECG monitoring systems over the last decade. These ECG systems and WBANs are widely used in healthcare applications to collect patient data and transmit it to a relevant healthcare service provider. These devices allow medical practitioners to collect patient data and monitor their patients' health remotely. After a thorough investigation it is observed that the use of these devices is prevalent in healthcare, but many improvements can still be incorporated. These ECG devices are not capable of processing the ECG signals, nor do they have a mechanism to detect and notify of CVD anomalies. Another limitation is that they do not provide Heart Rate Variability (HRV) analysis. These are the main limitations of existing healthcare monitoring systems, for instance LifeMonitor @cite_15 , MyHeart @cite_11 , CodeBlue @cite_12 , LiveNet @cite_16 , MEDiSN @cite_2 , AliveCor @cite_28 , PhysioMem @cite_8 , etc.
{ "cite_N": [ "@cite_8", "@cite_28", "@cite_2", "@cite_15", "@cite_16", "@cite_12", "@cite_11" ], "mid": [ "", "2105947222", "1975347960", "", "205846877", "1802419104", "1920361317" ], "abstract": [ "", "Ubiquitous Wireless ECG Recording. The use of smart phones has increased dramatically and there are nearly a billion users on 3G and 4G networks worldwide. Nearly 60 of the U.S. population uses smart phones to access the internet, and smart phone sales now surpass those of desktop and laptop computers. The speed of wireless communication technology on 3G and 4G networks and the widespread adoption and use of iOS equipped smart phones (Apple Inc., Cupertino, CA, USA) provide infrastructure for the transmission of wireless biomedical data, including ECG data. These technologies provide an unprecedented opportunity for physicians to continually access data that can be used to detect issues before symptoms occur or to have definitive data when symptoms are present. The technology also greatly empowers and enables the possibility for unprecedented patient participation in their own medical education and health status as well as that of their social network. As patient advocates, physicians and particularly cardiac electrophysiologists should embrace the future and promise of wireless ECG recording, a technology solution that can truly scale across the global population. (J Cardiovasc Electrophysiol, Vol. 24, pp. 480-483, April 2013)", "Staff shortages and an increasingly aging population are straining the ability of emergency departments to provide high quality care. At the same time, there is a growing concern about hospitals' ability to provide effective care during disaster events. For these reasons, tools that automate patient monitoring have the potential to greatly improve efficiency and quality of health care. 
Towards this goal, we have developed MEDiSN, a wireless sensor network for monitoring patients' physiological data in hospitals and during disaster events. MEDiSN comprises Physiological Monitors (PMs), which are custom-built, patient-worn motes that sample, encrypt, and sign physiological data and Relay Points (RPs) that self-organize into a multi-hop wireless backbone for carrying physiological data. Moreover, MEDiSN includes a back-end server that persistently stores medical data and presents them to authenticated GUI clients. The combination of MEDiSN's two-tier architecture and optimized rate control protocols allows it to address the compound challenge of reliably delivering large volumes of data while meeting the application's QoS requirements. Results from extensive simulations, testbed experiments, and multiple pilot hospital deployments show that MEDiSN can scale from tens to at least five hundred PMs, effectively protect application packets from congestive and corruptive losses, and deliver medically actionable data.", "", "Incorporating new healthcare technologies for proactive health and elder care will become a major priority over the next decade, as medical care systems world-wide become strained by the aging boomer population. We present MIThril LiveNet, a flexible distributed mobile platform that can be deployed for a variety of proactive healthcare applications. The LiveNet system allows people to receive real-time feedback from their continuously monitored and analyzed health state, as well as communicate health information with care-givers and other members of an individual’s social network for support and interaction.", "Sensor devices integrating embedded processors, low-power, lowbandwidth radios, and a modest amount of storage have the potential to enhance emergency medical care. Wearable vital sign sensors can track patient status and location, while simultaneously operating as active tags. 
We introduce CodeBlue, a wireless infrastructure intended for deployment in emergency medical care, integrating low-power, wireless vital sign sensors, PDAs, and PC-class systems. CodeBlue will enhance first responders’ ability to assess patients on scene, ensure seamless transfer of data among caregivers, and facilitate efficient allocation of hospital resources. Intended to scale to very dense networks with thousands of devices and extremely volatile network conditions, this infrastructure will support reliable, ad hoc data delivery, a flexible naming and discovery scheme, and a decentralized security model. This paper introduces our architecture and highlights research challenges being addressed by the CodeBlue", "Smart clothes increase the efficiency of long-term non-invasive monitoring systems by facilitating the placement of sensors and increasing the number of measurement locations. Since the sensors are either garment-integrated or embedded in an unobtrusive way in the garment, the impact on the subject's comfort is minimized. However, the main challenge of smart clothing lies in the enhancement of signal quality and the management of the huge data volume resulting from the variable contact with the skin, movement artifacts, non-accurate location of sensors and the large number of acquired signals. This paper exposes the strategies and solutions adopted in the European 1ST project MyHeart to address these problems, from the definition of the body sensor network to the description of two embedded signal processing techniques performing on-body ECG enhancement and motion activity classification." ] }
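The record above preprocesses ECG signals with a discrete wavelet transform. A minimal single-level DWT sketch, assuming the Haar wavelet (the record does not specify which wavelet family is used):

```python
import numpy as np

def haar_dwt_level(signal):
    """One level of the Haar discrete wavelet transform.
    Returns (approximation, detail): the approximation is a smoothed,
    half-length signal; the detail holds high-frequency content such as
    noise or sharp QRS edges, which is why DWT suits ECG preprocessing."""
    x = np.asarray(signal, dtype=float)
    if x.size % 2:                       # pad odd-length input by repeating the last sample
        x = np.append(x, x[-1])
    pairs = x.reshape(-1, 2)
    approx = pairs.sum(axis=1) / np.sqrt(2)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)
    return approx, detail

a, d = haar_dwt_level([4.0, 6.0, 10.0, 12.0])
print(a, d)
```

Repeating the decomposition on the approximation gives the multi-level transform; the nine features in the record would be computed from such coefficient bands.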
1904.07976
2938238363
Intricate cardiac complexities are a primary factor in healthcare costs and the leading cause of death in the world. However, preventive measures like the early detection of cardiac anomalies can prevent severe cardiovascular arrests of varying complexities and can impose a substantial impact on healthcare cost. In such scenarios, the electrocardiogram (ECG or EKG) is usually the first diagnostic choice of a medical practitioner or clinical staff to measure the electrical and muscular fitness of an individual heart. This paper presents a system which is capable of reading a recorded ECG and predicting cardiac anomalies without the intervention of a human expert. The paper proposes an algorithm which reads and performs analysis on electrocardiogram datasets. The proposed architecture first uses the Discrete Wavelet Transform (DWT) to preprocess the ECG data, followed by the undecimated Wavelet transform (UWT) to extract nine relevant features which are of high interest to a cardiologist. The probabilistic model named Bayesian Network Classifier is trained using the extracted nine parameters on the UCL arrhythmia dataset. The proposed system classifies a recorded heartbeat into four classes using the Bayesian Network classifier and Tukey's box analysis. The four classes for the prediction of a heartbeat are (a) Normal Beat, (b) Premature Ventricular Contraction (PVC), (c) Premature Atrial Contraction (PAC) and (d) Myocardial Infarction. The results of the experimental setup show that the proposed system has achieved an average accuracy of 96.6% for PAC, 92.8% for MI and 87% for PVC, with an average error rate of 3.3% for PAC, 6% for MI and 12.5% for PVC on real electrocardiogram datasets including Physionet and the European ST-T Database (EDB).
Lately, a few approaches have been proposed to address the gaps and limitations in automatic anomaly detection and alarm generation for irregular rhythms with respect to WBANs, and in real-time remote ECG recording and monitoring. The HeartSaver system, proposed in @cite_21 with similar intentions, is one such system. The system is capable of analyzing the ECG in real-time and can detect cardiac pathologies, namely atrioventricular block, atrial fibrillation (A-fib) and myocardial infarction. The author in @cite_27 introduced a mobile application which uses the mobile camera for heart attack detection by placing the index finger on the camera. The proposed approach works only with heart rate, calculated from blood peaks, and is unable to detect CVD because CVD detection involves complex feature representation and analysis of the ECG signal.
{ "cite_N": [ "@cite_27", "@cite_21" ], "mid": [ "2184308120", "2004829327" ], "abstract": [ "Heart attack is a globally leading cause of death for both genders, and its occurrence is not always known to us. Heart rate calculation has traditionally been conducted using specialized hardware, most commonly in the form of pulse oximeters or electrocardiogram devices. Though these devices are reliable, they require users to perform a dedicated measurement procedure. In this paper, we propose a system capable of estimating the heart beat rate using just a camera from a commercially available smart phone, and also using a mobile stethoscope to record heart sound for detecting the occurrence of heart attack and some other heart-related diseases. Fuzzy Logic, a part of Data Mining, is used here as the expert problem solution for human illness. In general, people cannot tell when they face this problem, and this is the main cause of death. Our research aims to detect this problem earlier to reduce the death rate of heart attack. The advantage of this method is that the user does not need specialized hardware and he or she can take a measurement in virtually any place under almost any circumstances. In addition, the measurement can be used as a tool for health coaching applications or effective telecare services aimed at enhancing the user's well-being.", "A mobile medical device, dubbed HeartSaver, is developed for real-time monitoring of a patient's electrocardiogram (ECG) and automatic detection of several cardiac pathologies, including atrial fibrillation, myocardial infarction and atrio-ventricular block. HeartSaver is based on adroit integration of four different modern technologies: electronics, wireless communication, computer, and information technologies in the service of medicine.
The physical device consists of four modules: sensor and ECG processing unit, a microcontroller, a link between the microcontroller and the cell phone, and mobile software associated with the system. HeartSaver includes automated cardiac pathology detection algorithms. These algorithms are simple enough to be implemented on a low-cost, limited-power microcontroller but powerful enough to detect the relevant cardiac pathologies. When an abnormality is detected, the microcontroller sends a signal to a cell phone. This operation triggers an application software on the cell phone that sends a text message transmitting information about patient's physiological condition and location promptly to a physician or a guardian. HeartSaver can be used by millions of cardiac patients with the potential to transform the cardiac diagnosis, care, and treatment and save thousands of lives." ] }
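The camera-based approach in the record above estimates heart rate by counting blood peaks. A toy sketch of that peak-counting step on a synthetic pulse signal; the naive detector and the 50 Hz sampling rate are illustrative assumptions, and real PPG pipelines filter and de-trend the signal first:

```python
import numpy as np

def bpm_from_signal(signal, fs):
    """Estimate heart rate by counting local maxima above the signal mean,
    then converting the peak count to beats per minute."""
    x = np.asarray(signal, dtype=float)
    above = x > x.mean()
    peaks = [i for i in range(1, len(x) - 1)
             if above[i] and x[i] > x[i - 1] and x[i] >= x[i + 1]]
    duration_min = len(x) / fs / 60.0
    return len(peaks) / duration_min

fs = 50                                   # 50 Hz sampling, hypothetical
t = np.arange(0, 10, 1 / fs)              # 10 s of signal
sig = np.sin(2 * np.pi * 1.2 * t)         # 1.2 Hz pulse, i.e. 72 bpm
print(bpm_from_signal(sig, fs))           # close to 72
```

As the surrounding text notes, this recovers only heart rate; CVD detection needs the richer morphological features of a full ECG.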
1904.07982
2936246822
Community question-answering (CQA) platforms have become very popular forums for asking and answering questions daily. While these forums are rich repositories of community knowledge, they present challenges for finding relevant answers and similar questions, due to the open-ended nature of informal discussions. Further, if the platform allows questions and answers in multiple languages, we are faced with the additional challenge of matching cross-lingual information. In this work, we focus on the cross-language question re-ranking shared task, which aims to find existing questions that may be written in different languages. Our contribution is an exploration of query expansion techniques for this problem. We investigate expansions based on Word Embeddings, DBpedia concepts linking, and Hypernym, and show that they outperform existing state-of-the-art methods.
Aiming to retrieve information in a language different from the query language, a wide range of research has been done in CLIR @cite_12 @cite_6 @cite_15 @cite_23 @cite_2 @cite_26 . To improve the search performance of a CLIR system, researchers have placed increasing importance on QE techniques, such as the use of external lexical databases @cite_0 , co-occurrence analysis @cite_8 and pseudo-relevance feedback @cite_21 @cite_9 . @cite_5 , @cite_18 and @cite_11 presented QE techniques using Word Embedding. A different approach using external knowledge bases was developed by @cite_10 and @cite_13 to expand queries in CLIR.
{ "cite_N": [ "@cite_13", "@cite_18", "@cite_26", "@cite_8", "@cite_9", "@cite_21", "@cite_6", "@cite_0", "@cite_23", "@cite_2", "@cite_5", "@cite_15", "@cite_10", "@cite_12", "@cite_11" ], "mid": [ "2538116949", "2537515450", "", "1979459060", "", "1964348731", "", "2081580037", "", "", "2740848765", "", "2067506377", "2204449939", "2950762699" ], "abstract": [ "In recent years, the amount of entities in large knowledge bases available on the Web has been increasing rapidly, making it possible to propose new ways of intelligent information access. Within the context of globalization, there is a clear need for techniques and systems that can enable multilingual and cross-lingual information access. In this paper, we present XKnowSearch!, a novel entity-based system for multilingual and cross-lingual information retrieval, which supports keyword search and also allows users to influence the search process according to their search intents. By leveraging the multilingual knowledge base on the Web, keyword queries and documents can be represented in their semantic forms, which can facilitate query disambiguation and expansion, and can also overcome the language barrier between queries and documents in different languages.", "We present a suite of query expansion methods that are based on word embeddings. Using Word2Vec's CBOW embedding approach, applied over the entire corpus on which search is performed, we select terms that are semantically related to the query. Our methods either use the terms to expand the original query or integrate them with the effective pseudo-feedback-based relevance model. In the former case, retrieval performance is significantly better than that of using only the query, and in the latter case the performance is significantly better than that of the relevance model.", "", "Automatic query expansion has long been suggested as a technique for dealing with the fundamental issue of word mismatch in information retrieval. 
A number of approaches to expansion have been studied and, more recently, attention has focused on techniques that analyze the corpus to discover word relationships (global techniques) and those that analyze documents retrieved by the initial query (local feedback). In this paper, we compare the effectiveness of these approaches and show that, although global analysis has some advantages, local analysis is generally more effective. We also show that using global analysis techniques.", "", "The language modeling approach to retrieval has been shown to perform well empirically. One advantage of this new approach is its statistical foundations. However, feedback, as one important component in a retrieval system, has only been dealt with heuristically in this new retrieval approach: the original query is usually literally expanded by adding additional terms to it. Such expansion-based feedback creates an inconsistent interpretation of the original and the expanded query. In this paper, we present a more principled approach to feedback in the language modeling approach. Specifically, we treat feedback as updating the query language model based on the extra evidence carried by the feedback documents. Such a model-based feedback strategy easily fits into an extension of the language modeling approach. We propose and evaluate two different approaches to updating a query language model based on feedback documents, one based on a generative probabilistic model of feedback documents and one based on minimization of the KL-divergence over feedback documents. Experiment results show that both approaches are effective and outperform the Rocchio feedback approach.", "", "Because meaningful sentences are composed of meaningful words, any system that hopes to process natural languages as people do must have information about words and their meanings. This information is traditionally provided through dictionaries, and machine-readable dictionaries are now widely available.
But dictionary entries evolved for the convenience of human readers, not for machines. WordNet 1 provides a more effective combination of traditional lexicographic information and modern computing. WordNet is an online lexical database designed for use under program control. English nouns, verbs, adjectives, and adverbs are organized into sets of synonyms, each representing a lexicalized concept. Semantic relations link the synonym sets [4].", "", "", "Although information retrieval models based on Markov Random Fields (MRF), such as Sequential Dependence Model and Weighted Sequential Dependence Model (WSDM), have been shown to outperform bag-of-words probabilistic and language modeling retrieval models by taking into account term dependencies, it is not known how to effectively account for term dependencies in query expansion methods based on pseudo-relevance feedback (PRF) for retrieval models of this type. In this paper, we propose Semantic Weighted Dependence Model (SWDM), a PRF based query expansion method for WSDM, which utilizes distributed low-dimensional word representations (i.e., word embeddings). Our method finds the closest unigrams to each query term in the embedding space and top retrieved documents and directly incorporates them into the retrieval function of WSDM. Experiments on TREC datasets indicate statistically significant improvement of SWDM over state-of-the-art MRF retrieval models, PRF methods for MRF retrieval models and embedding based query expansion methods for bag-of-words retrieval models.", "", "Large knowledge bases are being developed to describe entities, their attributes, and their relationships to other entities. Prior research mostly focuses on the construction of knowledge bases, while how to use them in information retrieval is still an open problem. This paper presents a simple and effective method of using one such knowledge base, Freebase, to improve query expansion, a classic and widely studied information retrieval task. 
It investigates two methods of identifying the entities associated with a query, and two methods of using those entities to perform query expansion. A supervised model combines information derived from Freebase descriptions and categories to select terms that are effective for query expansion. Experiments on the ClueWeb09 dataset with TREC Web Track queries demonstrate that these methods are almost 30% more effective than strong, state-of-the-art query expansion algorithms. In addition to improving average performance, some of these methods have better win/loss ratios than baseline algorithms, with 50% fewer queries damaged.", "Search for information is no longer exclusively limited within the native language of the user, but is more and more extended to other languages. This gives rise to the problem of cross-language information retrieval (CLIR), whose goal is to find relevant information written in a different language to a query. In addition to the problems of monolingual information retrieval (IR), translation is the key problem in CLIR: one should translate either the query or the documents from a language to another. However, this translation problem is not identical to full-text machine translation (MT): the goal is not to produce a human-readable translation, but a translation suitable for finding relevant documents. Specific translation methods are thus required. The goal of this book is to provide a comprehensive description of the specific problems arising in CLIR, the solutions proposed in this area, as well as the remaining problems. The book starts with a general description of the monolingual IR and CLIR problems. Different classes of approaches to translation are then presented: approaches using an MT system, dictionary-based translation and approaches based on parallel and comparable corpora. In addition, the typical retrieval effectiveness using different approaches is compared. 
It will be shown that translation approaches specifically designed for CLIR can rival and outperform high-quality MT systems. Finally, the book offers a look into the future that draws a strong parallel between query expansion in monolingual IR and query translation in CLIR, suggesting that many approaches developed in monolingual IR can be adapted to CLIR. The book can be used as an introduction to CLIR. Advanced readers can also find more technical details and discussions about the remaining research challenges in the future. It is suitable to new researchers who intend to carry out research on CLIR.", "Continuous space word embeddings have received a great deal of attention in the natural language processing and machine learning communities for their ability to model term similarity and other relationships. We study the use of term relatedness in the context of query expansion for ad hoc information retrieval. We demonstrate that word embeddings such as word2vec and GloVe, when trained globally, underperform corpus and query specific embeddings for retrieval tasks. These results suggest that other tasks benefiting from global embeddings may also benefit from local embeddings." ] }
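The Word-Embedding expansion methods cited in this record share one core step: selecting vocabulary terms nearest the query term in embedding space. A minimal sketch with invented toy vectors (the embeddings and vocabulary here are made up for illustration only):

```python
import numpy as np

# Toy embedding table; vectors are invented, not from any trained model.
emb = {
    "heart":   np.array([0.9, 0.1, 0.0]),
    "cardiac": np.array([0.8, 0.2, 0.1]),
    "ecg":     np.array([0.7, 0.3, 0.0]),
    "banana":  np.array([0.0, 0.1, 0.9]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def expand_query(term, k=2):
    """Return the k vocabulary terms closest to `term` in embedding space —
    the candidate expansion terms an embedding-based QE method would add."""
    scores = {w: cosine(emb[term], v) for w, v in emb.items() if w != term}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(expand_query("heart"))  # ['cardiac', 'ecg']
```

Real systems then either append these terms to the query or fold them into a relevance model, as the cited abstracts describe.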
1904.07850
2935837427
Detection identifies objects as axis-aligned boxes in an image. Most successful object detectors enumerate a nearly exhaustive list of potential object locations and classify each. This is wasteful, inefficient, and requires additional post-processing. In this paper, we take a different approach. We model an object as a single point --- the center point of its bounding box. Our detector uses keypoint estimation to find center points and regresses to all other object properties, such as size, 3D location, orientation, and even pose. Our center point based approach, CenterNet, is end-to-end differentiable, simpler, faster, and more accurate than corresponding bounding box based detectors. CenterNet achieves the best speed-accuracy trade-off on the MS COCO dataset, with 28.1 AP at 142 FPS, 37.4 AP at 52 FPS, and 45.1 AP with multi-scale testing at 1.4 FPS. We use the same approach to estimate 3D bounding box in the KITTI benchmark and human pose on the COCO keypoint dataset. Our method performs competitively with sophisticated multi-stage methods and runs in real-time.
One of the first successful deep object detectors, RCNN @cite_31 , enumerates object locations from a large set of region candidates @cite_23 , crops them, and classifies each using a deep network. Fast-RCNN @cite_48 crops image features instead, to save computation. However, both methods rely on slow low-level region proposal methods.
{ "cite_N": [ "@cite_48", "@cite_31", "@cite_23" ], "mid": [ "", "2102605133", "2088049833" ], "abstract": [ "", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3% . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.", "This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99% recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. 
The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http: disi.unitn.it uijlings SelectiveSearch.html )." ] }
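The detectors discussed in this record score candidate regions against ground truth by intersection-over-union (IoU) between axis-aligned boxes. A minimal sketch of that computation:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2).
    Anchor-based detectors threshold this value to label candidates as
    foreground or background; the thresholds are elided as @math in the text."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # intersection width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # intersection height
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7 ≈ 0.142857
```

The same quantity underlies the Mean Average Best Overlap figure quoted in the selective-search abstract above.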
1904.07850
2935837427
Detection identifies objects as axis-aligned boxes in an image. Most successful object detectors enumerate a nearly exhaustive list of potential object locations and classify each. This is wasteful, inefficient, and requires additional post-processing. In this paper, we take a different approach. We model an object as a single point --- the center point of its bounding box. Our detector uses keypoint estimation to find center points and regresses to all other object properties, such as size, 3D location, orientation, and even pose. Our center point based approach, CenterNet, is end-to-end differentiable, simpler, faster, and more accurate than corresponding bounding box based detectors. CenterNet achieves the best speed-accuracy trade-off on the MS COCO dataset, with 28.1 AP at 142 FPS, 37.4 AP at 52 FPS, and 45.1 AP with multi-scale testing at 1.4 FPS. We use the same approach to estimate 3D bounding box in the KITTI benchmark and human pose on the COCO keypoint dataset. Our method performs competitively with sophisticated multi-stage methods and runs in real-time.
Faster R-CNN @cite_5 generates region proposals within the detection network. It samples fixed-shape bounding boxes (anchors) over a low-resolution image grid and classifies each as ``foreground or not''. An anchor is labeled foreground if it has a @math overlap with any ground-truth object, background if it has a @math overlap, and is ignored otherwise. Each generated region proposal is then classified again @cite_48 . Changing the proposal classifier to a multi-class classifier forms the basis of one-stage detectors. Improvements to one-stage detectors include anchor shape priors @cite_13 @cite_26 , different feature resolutions @cite_46 , and loss re-weighting among different samples @cite_56 .
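A minimal sketch of the IoU-based anchor labeling described above. The function names are hypothetical, and the 0.7/0.3 thresholds are illustrative stand-ins for the elided values, not figures taken from the source:

```python
def iou(box_a, box_b):
    # Boxes as (x1, y1, x2, y2) with x1 < x2, y1 < y2.
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def label_anchors(anchors, gt_boxes, fg_thresh=0.7, bg_thresh=0.3):
    # 1 = foreground, 0 = background, -1 = ignored.
    # An anchor is foreground if its best overlap with any ground-truth
    # box exceeds fg_thresh, background below bg_thresh, ignored between.
    labels = []
    for a in anchors:
        best = max(iou(a, g) for g in gt_boxes)
        labels.append(1 if best >= fg_thresh
                      else (0 if best <= bg_thresh else -1))
    return labels
```

For example, an anchor coinciding with a ground-truth box is labeled foreground, one with 0.5 overlap is ignored, and a far-away anchor is background.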
{ "cite_N": [ "@cite_26", "@cite_48", "@cite_56", "@cite_5", "@cite_46", "@cite_13" ], "mid": [ "2796347433", "", "2963351448", "2613718673", "2193145675", "2570343428" ], "abstract": [ "We present some updates to YOLO! We made a bunch of little design changes to make it better. We also trained this new network that's pretty swell. It's a little bigger than last time but more accurate. It's still fast though, don't worry. At 320x320 YOLOv3 runs in 22 ms at 28.2 mAP, as accurate as SSD but three times faster. When we look at the old .5 IOU mAP detection metric YOLOv3 is quite good. It achieves 57.9 mAP@50 in 51 ms on a Titan X, compared to 57.5 mAP@50 in 198 ms by RetinaNet, similar performance but 3.8x faster. As always, all the code is online at this https URL", "", "The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. 
Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https://github.com/ShaoqingRen/faster_rcnn.", "We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes.
SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For 300×300 input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for 512×512 input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https://github.com/weiliu89/caffe/tree/ssd.", "We introduce YOLO9000, a state-of-the-art, real-time object detection system that can detect over 9000 object categories. First we propose various improvements to the YOLO detection method, both novel and drawn from prior work. The improved model, YOLOv2, is state-of-the-art on standard detection tasks like PASCAL VOC and COCO. Using a novel, multi-scale training method the same YOLOv2 model can run at varying sizes, offering an easy tradeoff between speed and accuracy. At 67 FPS, YOLOv2 gets 76.8 mAP on VOC 2007. At 40 FPS, YOLOv2 gets 78.6 mAP, outperforming state-of-the-art methods like Faster RCNN with ResNet and SSD while still running significantly faster. Finally we propose a method to jointly train on object detection and classification. Using this method we train YOLO9000 simultaneously on the COCO detection dataset and the ImageNet classification dataset. Our joint training allows YOLO9000 to predict detections for object classes that don't have labelled detection data.
We validate our approach on the ImageNet detection task. YOLO9000 gets 19.7 mAP on the ImageNet detection validation set despite only having detection data for 44 of the 200 classes. On the 156 classes not in COCO, YOLO9000 gets 16.0 mAP. YOLO9000 predicts detections for more than 9000 different object categories, all in real-time." ] }
1904.07850
2935837427
Detection identifies objects as axis-aligned boxes in an image. Most successful object detectors enumerate a nearly exhaustive list of potential object locations and classify each. This is wasteful, inefficient, and requires additional post-processing. In this paper, we take a different approach. We model an object as a single point --- the center point of its bounding box. Our detector uses keypoint estimation to find center points and regresses to all other object properties, such as size, 3D location, orientation, and even pose. Our center point based approach, CenterNet, is end-to-end differentiable, simpler, faster, and more accurate than corresponding bounding box based detectors. CenterNet achieves the best speed-accuracy trade-off on the MS COCO dataset, with 28.1 AP at 142 FPS, 37.4 AP at 52 FPS, and 45.1 AP with multi-scale testing at 1.4 FPS. We use the same approach to estimate 3D bounding box in the KITTI benchmark and human pose on the COCO keypoint dataset. Our method performs competitively with sophisticated multi-stage methods and runs in real-time.
Our approach is closely related to anchor-based one-stage approaches @cite_6 @cite_46 @cite_56 . A center point can be seen as a single shape-agnostic anchor (see Figure ). However, there are a few important differences. First, our CenterNet assigns the ``anchor'' based solely on location, not box overlap @cite_48 . We have no manual thresholds @cite_48 for foreground and background classification. Second, we have only one positive ``anchor'' per object, and hence do not need Non-Maximum Suppression (NMS) @cite_17 . We simply extract local peaks in the keypoint heatmap @cite_1 @cite_30 . Third, CenterNet uses a larger output resolution (output stride of @math ) compared to traditional object detectors @cite_10 @cite_0 (output stride of @math ). This eliminates the need for multiple anchors @cite_22 .
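Extracting local peaks from a keypoint heatmap, as described above, can be sketched in a few lines. This is an illustrative stand-in for NMS; the 3×3 neighborhood and the 0.5 threshold are assumptions, not values from the source:

```python
def local_peaks(heatmap, thresh=0.5):
    """Return (row, col, score) for entries that are maximal in their
    3x3 neighborhood and above `thresh` -- a simple stand-in for NMS.
    Ties between equal adjacent values each count as a peak here."""
    h, w = len(heatmap), len(heatmap[0])
    peaks = []
    for y in range(h):
        for x in range(w):
            v = heatmap[y][x]
            if v < thresh:
                continue
            neighborhood = [heatmap[j][i]
                            for j in range(max(0, y - 1), min(h, y + 2))
                            for i in range(max(0, x - 1), min(w, x + 2))]
            if v == max(neighborhood):
                peaks.append((y, x, v))
    return peaks
```

A response of 0.6 next to a stronger 0.9 response is suppressed, which is exactly the effect NMS would have on overlapping detections.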
{ "cite_N": [ "@cite_30", "@cite_22", "@cite_48", "@cite_1", "@cite_6", "@cite_56", "@cite_0", "@cite_46", "@cite_10", "@cite_17" ], "mid": [ "2559085405", "2768489488", "", "2555751471", "2963037989", "2963351448", "", "2193145675", "2194775991", "" ], "abstract": [ "We present an approach to efficiently detect the 2D pose of multiple people in an image. The approach uses a nonparametric representation, which we refer to as Part Affinity Fields (PAFs), to learn to associate body parts with individuals in the image. The architecture encodes global context, allowing a greedy bottom-up parsing step that maintains high accuracy while achieving realtime performance, irrespective of the number of people in the image. The architecture is designed to jointly learn part locations and their association via two branches of the same sequential prediction process. Our method placed first in the inaugural COCO 2016 keypoints challenge, and significantly exceeds the previous state-of-the-art result on the MPII Multi-Person benchmark, both in performance and efficiency.", "An analysis of different techniques for recognizing and detecting objects under extreme scale variation is presented. Scale specific and scale invariant design of detectors are compared by training them with different configurations of input data. By evaluating the performance of different network architectures for classifying small objects on ImageNet, we show that CNNs are not robust to changes in scale. Based on this analysis, we propose to train and test detectors on the same scales of an image-pyramid. Since small and large objects are difficult to recognize at smaller and larger scales respectively, we present a novel training scheme called Scale Normalization for Image Pyramids (SNIP) which selectively back-propagates the gradients of object instances of different sizes as a function of the image scale. 
On the COCO dataset, our single model performance is 45.7% and an ensemble of 3 networks obtains an mAP of 48.3%. We use off-the-shelf ImageNet-1000 pre-trained models and only train with bounding box supervision. Our submission won the Best Student Entry in the COCO 2017 challenge. Code will be made available at http://bit.ly/2yXVg4c.", "", "We introduce associative embedding, a novel method for supervising convolutional neural networks for the task of detection and grouping. A number of computer vision problems can be framed in this manner including multi-person pose estimation, instance segmentation, and multi-object tracking. Usually the grouping of detections is achieved with multi-stage pipelines, instead we propose an approach that teaches a network to simultaneously output detections and group assignments. This technique can be easily integrated into any state-of-the-art network architecture that produces pixel-wise predictions. We show how to apply this method to multi-person pose estimation and report state-of-the-art performance on the MPII and MS-COCO datasets.", "We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors.
Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background. Finally, YOLO learns very general representations of objects. It outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.", "The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors.", "", "We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. 
At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For 300×300 input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for 512×512 input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https://github.com/weiliu89/caffe/tree/ssd.", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity.
An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "" ] }
1904.07850
2935837427
Detection identifies objects as axis-aligned boxes in an image. Most successful object detectors enumerate a nearly exhaustive list of potential object locations and classify each. This is wasteful, inefficient, and requires additional post-processing. In this paper, we take a different approach. We model an object as a single point --- the center point of its bounding box. Our detector uses keypoint estimation to find center points and regresses to all other object properties, such as size, 3D location, orientation, and even pose. Our center point based approach, CenterNet, is end-to-end differentiable, simpler, faster, and more accurate than corresponding bounding box based detectors. CenterNet achieves the best speed-accuracy trade-off on the MS COCO dataset, with 28.1 AP at 142 FPS, 37.4 AP at 52 FPS, and 45.1 AP with multi-scale testing at 1.4 FPS. We use the same approach to estimate 3D bounding box in the KITTI benchmark and human pose on the COCO keypoint dataset. Our method performs competitively with sophisticated multi-stage methods and runs in real-time.
3D bounding box estimation powers autonomous driving @cite_34 . Deep3Dbox @cite_8 uses a slow-RCNN @cite_31 style framework by first detecting 2D objects @cite_5 and then feeding each object into a 3D estimation network. 3D RCNN @cite_4 adds an additional head to Faster-RCNN @cite_5 followed by a 3D projection. Deep Manta @cite_52 uses a coarse-to-fine Faster-RCNN @cite_5 trained on many tasks. Our method is similar to a one-stage version of Deep3Dbox @cite_8 or 3D RCNN @cite_4 . As such, CenterNet is much simpler and faster than competing methods.
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_52", "@cite_5", "@cite_31", "@cite_34" ], "mid": [ "2799123546", "2560544142", "2605189827", "2613718673", "2102605133", "2150066425" ], "abstract": [ "We present a fast inverse-graphics framework for instance-level 3D scene understanding. We train a deep convolutional network that learns to map image regions to the full 3D shape and pose of all object instances in the image. Our method produces a compact 3D representation of the scene, which can be readily used for applications like autonomous driving. Many traditional 2D vision outputs, like instance segmentations and depth-maps, can be obtained by simply rendering our output 3D scene model. We exploit class-specific shape priors by learning a low dimensional shape-space from collections of CAD models. We present novel representations of shape and pose, that strive towards better 3D equivariance and generalization. In order to exploit rich supervisory signals in the form of 2D annotations like segmentation, we propose a differentiable Render-and-Compare loss that allows 3D shape and pose to be learned with 2D supervision. We evaluate our method on the challenging real-world datasets of Pascal3D+ and KITTI, where we achieve state-of-the-art results.", "We present a method for 3D object detection and pose estimation from a single image. In contrast to current techniques that only regress the 3D orientation of an object, our method first regresses relatively stable 3D object properties using a deep convolutional neural network and then combines these estimates with geometric constraints provided by a 2D object bounding box to produce a complete 3D bounding box. The first network output estimates the 3D object orientation using a novel hybrid discrete-continuous loss, which significantly outperforms the L2 loss. 
The second output regresses the 3D object dimensions, which have relatively little variance compared to alternatives and can often be predicted for many object types. These estimates, combined with the geometric constraints on translation imposed by the 2D bounding box, enable us to recover a stable and accurate 3D object pose. We evaluate our method on the challenging KITTI object detection benchmark [2] both on the official metric of 3D orientation estimation and also on the accuracy of the obtained 3D bounding boxes. Although conceptually simple, our method outperforms more complex and computationally expensive approaches that leverage semantic segmentation, instance level segmentation and flat ground priors [4] and sub-category detection [23][24]. Our discrete-continuous loss also produces state of the art results for 3D viewpoint estimation on the Pascal 3D+ dataset[26].", "In this paper, we present a novel approach, called Deep MANTA (Deep Many-Tasks), for many-task vehicle analysis from a given image. A robust convolutional network is introduced for simultaneous vehicle detection, part localization, visibility characterization and 3D dimension estimation. Its architecture is based on a new coarse-to-fine object proposal that boosts the vehicle detection. Moreover, the Deep MANTA network is able to localize vehicle parts even if these parts are not visible. In the inference, the networks outputs are used by a real time robust pose estimation algorithm for fine orientation estimation and 3D vehicle localization. We show in experiments that our method outperforms monocular state-of-the-art approaches on vehicle detection, orientation and 3D location tasks on the very challenging KITTI benchmark.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. 
Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https://github.com/ShaoqingRen/faster_rcnn.", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features.
We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.", "Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry / SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti" ] }
1904.07683
2938914922
We show that the product of an nx3 matrix and a 3x3 matrix over a commutative ring can be computed using 6n+3 multiplications. For two 3x3 matrices this gives us an algorithm using 21 multiplications. This is an improvement with respect to Makarov algorithm using 22 multiplications[10]. We generalize our result for nx3 and 3x3 matrices and present an algorithm for computing the product of an lxn matrix and an nxm matrix over a commutative ring for odd n using n(lm+l+m-1)/2 multiplications if m is odd and using (n(lm+l+m-1)+l-1)/2 multiplications if m is even. Waksman algorithm for odd n needs (n-1)(lm+l+m-1)/2+lm multiplications[16], thus in both cases less multiplications are required by our algorithm.
In this section we present some related work. We start by presenting some results about the multiplication of two square matrices. Note that a matrix multiplication algorithm can only be applied recursively if commutativity is not used. Since Strassen showed in 1969 that the product of two matrices can be computed using @math arithmetic operations @cite_10 , and since it has been shown that for @math matrices @math is the optimal number of multiplications @cite_13 @cite_6 , it is interesting to study @math matrices for @math in order to obtain an even faster algorithm for matrix multiplication. For @math matrices, @math multiplications would be needed to obtain an even faster algorithm than Strassen's, since @math . In 1976 Laderman obtained a non-commutative @math algorithm using only @math multiplications @cite_7 . It is not known whether there exists a non-commutative @math algorithm that uses @math or fewer multiplications. For @math matrices the best non-commutative result so far is @math multiplications by Sedoglavic @cite_15 , which is an improvement on Makarov's algorithm for @math matrices @cite_18 .
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_10", "@cite_6", "@cite_15", "@cite_13" ], "mid": [ "2006785052", "", "2035476608", "", "2738523341", "2539150916" ], "abstract": [ "Abstract A non-commutative algorithm is proposed for multiplying square 5 × 5 matrices that uses one hundred multiplications. Its application to 7 × 7, 10 × 10 and 15 × 15 matrices allows algorithms to be constructed that require 273, 700 and 2300 multiplications respectively.", "", "t. Below we will give an algorithm which computes the coefficients of the product of two square matrices A and B of order n from the coefficients of A and B with tess than 4 . 7 n l°g7 arithmetical operations (all logarithms in this paper are for base 2, thus tog 7 2.8; the usual method requires approximately 2n 3 arithmetical operations). The algorithm induces algorithms for invert ing a matr ix of order n, solving a system of n linear equations in n unknowns, comput ing a determinant of order n etc. all requiring less than const n l°g 7 arithmetical operations. This fact should be compared with the result of KLYUYEV and KOKOVKINSHCHERBAK [1 ] tha t Gaussian elimination for solving a system of l inearequations is optimal if one restricts oneself to operations upon rows and columns as a whole. We also note tha t WlNOGRAD [21 modifies the usual algorithms for matr ix multiplication and inversion and for solving systems of linear equations, trading roughly half of the multiplications for additions and subtractions. I t is a pleasure to thank D. BRILLINGER for inspiring discussions about the present subject and ST. COOK and B. PARLETT for encouraging me to write this paper. 2. 
We define algorithms e , which mult iply matrices of order m2 , by induction on k: , 0 is the usual algorithm, for matr ix multiplication (requiring m a multiplications and m 2 ( m t) additions), e ,k already being known, define , +t as follows: If A, B are matrices of order m 2 k to be multiplied, write", "", "We present a non-commutative algorithm for multiplying 5 × 5 matrices using 99 multiplications. This algorithm is a minor modification of Makarov [4] algorithm.", "This paper develops techniques for establishing a lower bound on the number of arithmetic operations necessary for sets of simple expressions. The techniques are applied to matrix multiplication. A modification of Strassen's algorithm is developed for multiplying n × p matrices by p × q matrices. The techniques are used to prove that this algorithm minimizes the number of multiplications for a few special cases. In so doing we establish that matrix multiplication with elements from a commutative ring requires fewer multiplications than with elements from a non-commutative ring." ] }
1904.07683
2938914922
We show that the product of an nx3 matrix and a 3x3 matrix over a commutative ring can be computed using 6n+3 multiplications. For two 3x3 matrices this gives us an algorithm using 21 multiplications. This is an improvement with respect to Makarov algorithm using 22 multiplications[10]. We generalize our result for nx3 and 3x3 matrices and present an algorithm for computing the product of an lxn matrix and an nxm matrix over a commutative ring for odd n using n(lm+l+m-1)/2 multiplications if m is odd and using (n(lm+l+m-1)+l-1)/2 multiplications if m is even. Waksman algorithm for odd n needs (n-1)(lm+l+m-1)/2+lm multiplications[16], thus in both cases less multiplications are required by our algorithm.
Hopcroft and Musinski showed in @cite_16 that the number of multiplications required to compute the product of an @math matrix and an @math matrix is the same as the number required to compute the product of an @math matrix and an @math matrix, of an @math matrix and an @math matrix, etc. This means that if one finds an algorithm for the product of an @math matrix and an @math matrix using @math multiplications, then there exists a matrix product algorithm for @math matrices using @math multiplications overall. This algorithm for square matrices then has an exponent of @math .
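The exponent obtained from this construction can be written out explicitly. In the sketch below, the notation ⟨n,m,p⟩ for an n×m by m×p product and t for its multiplication count are assumed for illustration, not taken from the source:

```latex
% An <n,m,p> algorithm with t multiplications yields, via its duals
% <m,p,n> and <p,n,m>, an algorithm for square matrices of order nmp
% using t^3 multiplications; applying it recursively gives the bound
\omega \;\le\; \log_{nmp} t^{3} \;=\; \frac{3\log t}{\log(nmp)} .
```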
{ "cite_N": [ "@cite_16" ], "mid": [ "2053839977" ], "abstract": [ "The paper considers the complexity of bilinear forms in a noncommutative ring. The dual of a computation is defined and applied to matrix multiplication and other bilinear forms. It is shown that the dual of an optimal computation gives an optimal computation for a dual problem. An @math by @math matrix product is shown to be the dual of an @math by @math or an @math by @math matrix product, implying that each of the matrix products requires the same number of multiplications to compute. Finally, an algorithm for computing a single bilinear form over a noncommutative ring with a minimum number of multiplications is derived by considering a dual problem." ] }
1904.07683
2938914922
We show that the product of an nx3 matrix and a 3x3 matrix over a commutative ring can be computed using 6n+3 multiplications. For two 3x3 matrices this gives us an algorithm using 21 multiplications. This is an improvement with respect to Makarov algorithm using 22 multiplications[10]. We generalize our result for nx3 and 3x3 matrices and present an algorithm for computing the product of an lxn matrix and an nxm matrix over a commutative ring for odd n using n(lm+l+m-1)/2 multiplications if m is odd and using (n(lm+l+m-1)+l-1)/2 multiplications if m is even. Waksman algorithm for odd n needs (n-1)(lm+l+m-1)/2+lm multiplications[16], thus in both cases less multiplications are required by our algorithm.
We present some examples of non-square matrix multiplication algorithms. In @cite_1 Hopcroft and Kerr showed that the product of a @math matrix and a @math matrix can be computed using @math multiplications without using commutativity. In the case @math this gives an algorithm using @math multiplications. Combined with the results of @cite_16 this gives an algorithm for @math matrices using @math multiplications and an exponent of @math . Smirnov obtained an algorithm for the product of a @math matrix and a @math matrix using @math multiplications @cite_0 . By @cite_16 this gives an algorithm for @math matrices using @math multiplications and an exponent of @math .
{ "cite_N": [ "@cite_0", "@cite_16", "@cite_1" ], "mid": [ "2004948173", "2053839977", "1986272312" ], "abstract": [ "A method for deriving bilinear algorithms for matrix multiplication is proposed. New estimates for the bilinear complexity of a number of problems of the exact and approximate multiplication of rectangular matrices are obtained. In particular, the estimate for the boundary rank of multiplying 3 × 3 matrices is improved and a practical algorithm for the exact multiplication of square n × n matrices is proposed. The asymptotic arithmetic complexity of this algorithm is O(n2.7743).", "The paper considers the complexity of bilinear forms in a noncommutative ring. The dual of a computation is defined and applied to matrix multiplication and other bilinear forms. It is shown that the dual of an optimal computation gives an optimal computation for a dual problem. An @math by @math matrix product is shown to be the dual of an @math by @math or an @math by @math matrix product, implying that each of the matrix products requires the same number of multiplications to compute. Finally, an algorithm for computing a single bilinear form over a noncommutative ring with a minimum number of multiplications is derived by considering a dual problem.", "This paper develops an algorithm to multiply a px2 matrix by a 2xn matrix in @math multiplications for matrix multiplication without commutativity. The algorithm minimizes the number of multiplications for matrix multiplication without commutativity for the special cases p=1 or 2, n=1,2, @math and p = 3, n = 3. It is shown that with commutativity fewer multiplications are required." ] }
1904.07683
2938914922
We show that the product of an nx3 matrix and a 3x3 matrix over a commutative ring can be computed using 6n+3 multiplications. For two 3x3 matrices this gives us an algorithm using 21 multiplications. This is an improvement with respect to Makarov algorithm using 22 multiplications[10]. We generalize our result for nx3 and 3x3 matrices and present an algorithm for computing the product of an lxn matrix and an nxm matrix over a commutative ring for odd n using n(lm+l+m-1)/2 multiplications if m is odd and using (n(lm+l+m-1)+l-1)/2 multiplications if m is even. Waksman algorithm for odd n needs (n-1)(lm+l+m-1)/2+lm multiplications[16], thus in both cases less multiplications are required by our algorithm.
In @cite_9 the authors optimized the number of multiplications required for products of small matrices, up to @math matrices. They considered both non-commutative and commutative algorithms. Combined with our results for commutative rings, we expect that some of their results could be improved.
{ "cite_N": [ "@cite_9" ], "mid": [ "1989290453" ], "abstract": [ "The complexity of matrix multiplication has attracted a lot of attention in the last forty years. In this paper, instead of considering asymptotic aspects of this problem, we are interested in reducing the cost of multiplication for matrices of small size, say up to 30. Following the previous work of Probert & Fischer, Smith, and Mezzarobba, in a similar vein, we base our approach on the previous algorithms for small matrices, due to Strassen, Winograd, Pan, Laderman, and others and show how to exploit these standard algorithms in an improved way. We illustrate the use of our results by generating multiplication codes over various rings, such as integers, polynomials, differential operators and linear recurrence operators." ] }
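The multiplication counts quoted in this record's abstract can be sanity-checked directly. A small script comparing the two formulas (the function names are ours; the formulas are exactly those stated in the abstract):

```python
def count_new(l, n, m):
    """l x n times n x m over a commutative ring, n odd (abstract above)."""
    assert n % 2 == 1
    if m % 2 == 1:
        return n * (l * m + l + m - 1) // 2
    return (n * (l * m + l + m - 1) + l - 1) // 2

def count_waksman(l, n, m):
    """Waksman's algorithm for odd n."""
    assert n % 2 == 1
    return (n - 1) * (l * m + l + m - 1) // 2 + l * m

print(count_new(3, 3, 3), count_waksman(3, 3, 3))  # 21 vs 23 for two 3x3 matrices
# The n x 3 by 3 x 3 case reduces to the claimed 6n+3:
print(all(count_new(n, 3, 3) == 6 * n + 3 for n in range(1, 50)))
```

Both divisions are exact for the stated parities, so integer division is safe here.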
1904.07463
1978277032
Program invariants are important for defect detection, program verification, and program repair. However, existing techniques have limited support for important classes of invariants such as disjunctions, which express the semantics of conditional statements. We propose a method for generating disjunctive invariants over numerical domains, which are inexpressible using classical convex polyhedra. Using dynamic analysis and reformulating the problem in non-standard ``max-plus'' and ``min-plus'' algebras, our method constructs hulls over program trace points. Critically, we introduce and infer a weak class of such invariants that balances expressive power against the computational cost of generating nonconvex shapes in high dimensions. Existing dynamic inference techniques often generate spurious invariants that fit some program traces but do not generalize. With the insight that generating dynamic invariants is easy, we propose to verify these invariants statically using k-inductive SMT theorem proving which allows us to validate invariants that are not classically inductive. Results on difficult kernels involving nonlinear arithmetic and abstract arrays suggest that this hybrid approach efficiently generates and proves correct program invariants.
Dynamic invariant analyses. Daikon @cite_6 is a popular and influential dynamic invariant analysis that infers candidate invariants using templates. Daikon comes with a large list of invariant templates and returns those that hold over a set of program traces. Daikon can use ``splitting'' conditions @cite_21 to find disjunctive invariants such as ``@math''. Our algorithm does not depend on splitting conditions and our max- and min-plus disjunctive invariants are more expressive than those currently supported by Daikon.
{ "cite_N": [ "@cite_21", "@cite_6" ], "mid": [ "89174838", "2110908283" ], "abstract": [ "Explicitly stated program invariants can help programmers by characterizing aspects of program execution and identifying program properties that must be preserved when modifying code; invariants can also be of assistance to automated tools. Unfortunately, these invariants are usually absent from code. Previous work showed how to dynamically detect invariants by looking for patterns in and relationships among variable values captured in program traces. A prototype implementation, Daikon, recovered invariants from formally specified programs, and the invariants it detected assisted programmers in a software evolution task. However, it was limited to finding invariants over scalars and arrays. This paper presents two techniques that enable discovery of invariants over richer data structures, in particular collections of data represented by recursive data structures, by indirect links through tables, etc. The first technique is to traverse these collections and record them as arrays in the program traces; then the basic Daikon invariant detector can infer invariants over these new trace elements. The second technique enables discovery of conditional invariants, which are necessary for reporting invariants over recursive data structures and are also useful in their own right. These techniques permit detection of invariants such as “p.value > limit or p.left ∈ mytree”, The techniques are validated by successful application to two sets of programs: simple textbook data structures and student solutions to a weighted digraph problem.", "Daikon is an implementation of dynamic detection of likely invariants; that is, the Daikon invariant detector reports likely program invariants. An invariant is a property that holds at a certain point or points in a program; these are often used in assert statements, documentation, and formal specifications. Examples include being constant (x=a), non-zero (x≠0), being in a range (a@?x@?b), linear relationships (y=ax+b), ordering (x@?y), functions from a library (x=fn(y)), containment (x@?y), sortedness (x is sorted), and many more. Users can extend Daikon to check for additional invariants. Dynamic invariant detection runs a program, observes the values that the program computes, and then reports properties that were true over the observed executions. Dynamic invariant detection is a machine learning technique that can be applied to arbitrary data. Daikon can detect invariants in C, C++, Java, and Perl programs, and in record-structured data sources; it is easy to extend Daikon to other applications. Invariants can be useful in program understanding and a host of other applications. Daikon's output has been used for generating test cases, predicting incompatibilities in component integration, automating theorem proving, repairing inconsistent data structures, and checking the validity of data streams, among other tasks. Daikon is freely available in source and binary form, along with extensive documentation, at http://pag.csail.mit.edu/daikon ." ] }
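Daikon's template-based inference described above can be sketched in a few lines: evaluate a fixed set of candidate templates over recorded variable values and keep only those never falsified. A toy illustration (Daikon itself ships a far larger template library plus statistical justification checks; the trace values here are invented):

```python
# Toy Daikon-style inference: keep template invariants that hold on all traces.
# Each trace maps variable names to their values at one program point.
traces = [{"x": 0, "y": 11}, {"x": 3, "y": 11}, {"x": 7, "y": 11}]

def infer(traces):
    vars_ = sorted(traces[0])
    found = []
    for v in vars_:
        vals = [t[v] for t in traces]
        if len(set(vals)) == 1:                      # template: v == c
            found.append(f"{v} == {vals[0]}")
        else:                                        # template: a <= v <= b
            found.append(f"{min(vals)} <= {v} <= {max(vals)}")
    for a in vars_:                                  # template: a <= b
        for b in vars_:
            if a < b and all(t[a] <= t[b] for t in traces):
                found.append(f"{a} <= {b}")
    return found

print(infer(traces))  # e.g. ['0 <= x <= 7', 'y == 11', 'x <= y']
```

The `y == 11` candidate mirrors the `assert(y==11)` example discussed later in this record; like any dynamically inferred invariant, it holds only over the observed traces and still needs static validation.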
1904.07463
1978277032
Program invariants are important for defect detection, program verification, and program repair. However, existing techniques have limited support for important classes of invariants such as disjunctions, which express the semantics of conditional statements. We propose a method for generating disjunctive invariants over numerical domains, which are inexpressible using classical convex polyhedra. Using dynamic analysis and reformulating the problem in non-standard ``max-plus'' and ``min-plus'' algebras, our method constructs hulls over program trace points. Critically, we introduce and infer a weak class of such invariants that balances expressive power against the computational cost of generating nonconvex shapes in high dimensions. Existing dynamic inference techniques often generate spurious invariants that fit some program traces but do not generalize. With the insight that generating dynamic invariants is easy, we propose to verify these invariants statically using k-inductive SMT theorem proving which allows us to validate invariants that are not classically inductive. Results on difficult kernels involving nonlinear arithmetic and abstract arrays suggest that this hybrid approach efficiently generates and proves correct program invariants.
Recently, Sharma et al. @cite_7 proposed a machine-learning based approach to find disjunctive invariants. Their method operates on traces representing good and bad program states: good traces are obtained by running the program on random inputs and bad traces correspond to runs on which an assertion or postcondition is violated. They use a machine learning model to find a predicate, representing a candidate program invariant, that separates the good and bad traces. For efficiency, they restrict attention to the octagon domain and search only for predicates that are arbitrary boolean combinations of octagon inequalities. Finally, they use a standard induction technique to check the candidate invariants using Z3 @cite_41 . While our method shares their focus on disjunctive invariants, a key difference is that the strength of their results depends strictly on existing annotated program assertions. For example, in ex1, if the line assert(y==11) is not provided by the programmer then their method will only produce the trivial invariant @math . By contrast, our approach does not make such assumptions about the input program, and in some sense the purpose of our approach is to generate those assertions.
{ "cite_N": [ "@cite_41", "@cite_7" ], "mid": [ "1480909796", "2280703106" ], "abstract": [ "Satisfiability Modulo Theories (SMT) problem is a decision problem for logical first order formulas with respect to combinations of background theories such as: arithmetic, bit-vectors, arrays, and uninterpreted functions. Z3 is a new and efficient SMT Solver freely available from Microsoft Research. It is used in various software verification and analysis applications.", "We formalize the problem of program verification as a learning problem, showing that invariants in program verification can be regarded as geometric concepts in machine learning. Safety properties define bad states: states a program should not reach. Program verification explains why a program’s set of reachable states is disjoint from the set of bad states. In Hoare Logic, these explanations are predicates that form inductive assertions. Using samples for reachable and bad states and by applying well known machine learning algorithms for classification, we are able to generate inductive assertions. By relaxing the search for an exact proof to classifiers, we obtain complexity theoretic improvements. Further, we extend the learning algorithm to obtain a sound procedure that can generate proofs containing invariants that are arbitrary boolean combinations of polynomial inequalities. We have evaluated our approach on a number of challenging benchmarks and the results are promising." ] }
1904.07463
1978277032
Program invariants are important for defect detection, program verification, and program repair. However, existing techniques have limited support for important classes of invariants such as disjunctions, which express the semantics of conditional statements. We propose a method for generating disjunctive invariants over numerical domains, which are inexpressible using classical convex polyhedra. Using dynamic analysis and reformulating the problem in non-standard ``max-plus'' and ``min-plus'' algebras, our method constructs hulls over program trace points. Critically, we introduce and infer a weak class of such invariants that balances expressive power against the computational cost of generating nonconvex shapes in high dimensions. Existing dynamic inference techniques often generate spurious invariants that fit some program traces but do not generalize. With the insight that generating dynamic invariants is easy, we propose to verify these invariants statically using k-inductive SMT theorem proving which allows us to validate invariants that are not classically inductive. Results on difficult kernels involving nonlinear arithmetic and abstract arrays suggest that this hybrid approach efficiently generates and proves correct program invariants.
Hybrid approaches. Nimmer and Ernst integrated the ESC/Java static checker framework @cite_35 with Daikon, allowing them to validate candidate invariants using a Hoare logic verification approach. This work is very similar in motivation and architecture to ours. Key differences include our detection of richer disjunctive invariants, our verification with respect to full program correctness (``Rather than proving complete program correctness, ESC detects only certain types of errors'' [Sec. 2] @cite_35 ), and our larger evaluation (our system proves over four times as many non-redundant invariants valid and considers over four times as many benchmark kernels).
{ "cite_N": [ "@cite_35" ], "mid": [ "144724653" ], "abstract": [ "Abstract This paper shows how to integrate two complementary techniques for manipulating program invariants: dynamic detection and static verification. Dynamic detection proposes likely invariants based on program executions, but the resulting properties are not guaranteed to be true over all possible executions. Static verification checks that properties are always true, but it can be difficult and tedious to select a goal and to annotate programs for input to a static checker. Combining these techniques overcomes the weaknesses of each: dynamically detected invariants can annotate a program or provide goals for static verification, and static verification can confirm properties proposed by a dynamic tool. We have integrated a tool for dynamically detecting likely program invariants, Daikon, with a tool for statically verifying program properties, ESC Java. Daikon examines run-time values of program variables; it looks for patterns and relationships in those values, and it reports properties that are never falsified during test runs and that satisfy certain other conditions, such as being statistically justified. ESC Java takes as input a Java program annotated with preconditions, postconditions, and other assertions, and it reports which annotations cannot be statically verified and also warns of potential runtime errors, such as null dereferences and out-of-bounds array indices. Our prototype system runs Daikon, inserts its output into code as ESC Java annotations, and then runs ESC Java, which reports unverifiable annotations. The entire process is completely automatic, though users may provide guidance in order to improve results if desired. In preliminary experiments, ESC Java verified all or most of the invariants proposed by Daikon." ] }
1904.07463
1978277032
Program invariants are important for defect detection, program verification, and program repair. However, existing techniques have limited support for important classes of invariants such as disjunctions, which express the semantics of conditional statements. We propose a method for generating disjunctive invariants over numerical domains, which are inexpressible using classical convex polyhedra. Using dynamic analysis and reformulating the problem in non-standard ``max-plus'' and ``min-plus'' algebras, our method constructs hulls over program trace points. Critically, we introduce and infer a weak class of such invariants that balances expressive power against the computational cost of generating nonconvex shapes in high dimensions. Existing dynamic inference techniques often generate spurious invariants that fit some program traces but do not generalize. With the insight that generating dynamic invariants is easy, we propose to verify these invariants statically using k-inductive SMT theorem proving which allows us to validate invariants that are not classically inductive. Results on difficult kernels involving nonlinear arithmetic and abstract arrays suggest that this hybrid approach efficiently generates and proves correct program invariants.
Static max-plus analyses. The static analysis work of Allamigeon et al. @cite_42 uses abstract interpretation to approximate program properties under the max- and min-plus domains. In contrast to our work, which computes max-plus formulas from dynamic traces, their method starts directly from a formula representing an initial approximation of the program state space and gradually improves that approximation based on the program structure until a fixed point is reached. As with other abstract interpretation approaches for inferring disjunctive invariants such as @cite_36 @cite_15 , their method uses an ad hoc widening operator to ensure termination.
{ "cite_N": [ "@cite_36", "@cite_15", "@cite_42" ], "mid": [ "2585866695", "1844529821", "1499377964" ], "abstract": [ "The convexity of numerical domains such as polyhedra, octagons, intervals and linear equalities enables tractable analysis of software for buffer overflows, null pointer dereferences and floating point errors. However, convexity also causes the analysis to fail in many common cases. Powerset extensions can remedy this shortcoming by considering disjunctions of predicates. Unfortunately, analysis using powerset domains can be exponentially more expensive as compared to analysis on the base domain. In this paper, we prove structural properties of fixed points computed in commonly used powerset extensions. We show that a fixed point computed on a powerset extension is also a fixed point in the base domain computed on an elaboration of the program's CFG structure. Using this insight, we build analysis algorithms that approach path sensitive static analysis algorithms by performing the fixed point computation on the base domain while discovering an elaboration on the fly, Using restrictions on the nature of the elaborations, we design algorithms that scale polynomially in terms of the number of disjuncts. We have implemented a light-weight static analyzer for C programs with encouraging initial results.", "Polyhedral analysis [9] is an abstract interpretation used for automatic discovery of invariant linear inequalities among numerical variables of a program. Convexity of this abstract domain allows efficient analysis but also loses precision via convex-hull and widening operators. To selectively recover the loss of precision, sets of polyhedra (disjunctive elements) may be used to capture more precise invariants. However a balance must be struck between precision and cost. We introduce the notion of affinity to characterize how closely related is a pair of polyhedra. Finding related elements in the polyhedron (base) domain allows the formulation of precise hull and widening operators lifted to the disjunctive (powerset extension of the) polyhedron domain. We have implemented a modular static analyzer based on the disjunctive polyhedral analysis where the relational domain and the proposed operators can progressively enhance precision at a reasonable cost.", "We introduce a new numerical abstract domain able to infer min and max invariants over the program variables, based on max-plus polyhedra. Our abstraction is more precise than octagons, and allows to express non-convex properties without any disjunctive representations. We have defined sound abstract operators, evaluated their complexity, and implemented them in a static analyzer. It is able to automatically compute precise properties on numerical and memory manipulating programs such as algorithms on strings and arrays." ] }
1904.07463
1978277032
Program invariants are important for defect detection, program verification, and program repair. However, existing techniques have limited support for important classes of invariants such as disjunctions, which express the semantics of conditional statements. We propose a method for generating disjunctive invariants over numerical domains, which are inexpressible using classical convex polyhedra. Using dynamic analysis and reformulating the problem in non-standard ``max-plus'' and ``min-plus'' algebras, our method constructs hulls over program trace points. Critically, we introduce and infer a weak class of such invariants that balances expressive power against the computational cost of generating nonconvex shapes in high dimensions. Existing dynamic inference techniques often generate spurious invariants that fit some program traces but do not generalize. With the insight that generating dynamic invariants is easy, we propose to verify these invariants statically using k-inductive SMT theorem proving which allows us to validate invariants that are not classically inductive. Results on difficult kernels involving nonlinear arithmetic and abstract arrays suggest that this hybrid approach efficiently generates and proves correct program invariants.
The recent static analysis work of Kapur @cite_2 uses quantifier elimination to find max invariants over pairs of variables. Their method uses table look-ups to modify max relations based on the program structure (e.g., to determine how a max relation changes after an assignment @math ). For scalability the approach restricts attention to specific program constructs. For example, they only support analysis on assignments or guards that do not involve multiplication.
{ "cite_N": [ "@cite_2" ], "mid": [ "39834257" ], "abstract": [ "Geometric heuristics for the quantifier elimination approach presented by Kapur (2004) are investigated to automatically derive loop invariants expressing weakly relational numerical properties (such as l≤x≤h or l≤±x ±y≤h) for imperative programs. Such properties have been successfully used to analyze commercial software consisting of hundreds of thousands of lines of code (using for example, the Astree tool based on abstract interpretation framework proposed by Cousot and his group). The main attraction of the proposed approach is its much lower complexity in contrast to the abstract interpretation approach (O(n2) in contrast to O(n4), where n is the number of variables) with the ability to still generate invariants of comparable strength. This approach has been generalized to consider disjunctive invariants of the similar form, expressed using maximum function (such as max (x+a,y+b,z+c,d)≤max (x+e,y+f,z+g,h)), thus enabling automatic generation of a subclass of disjunctive invariants for imperative programs as well." ] }
1904.07463
1978277032
Program invariants are important for defect detection, program verification, and program repair. However, existing techniques have limited support for important classes of invariants such as disjunctions, which express the semantics of conditional statements. We propose a method for generating disjunctive invariants over numerical domains, which are inexpressible using classical convex polyhedra. Using dynamic analysis and reformulating the problem in non-standard ``max-plus'' and ``min-plus'' algebras, our method constructs hulls over program trace points. Critically, we introduce and infer a weak class of such invariants that balances expressive power against the computational cost of generating nonconvex shapes in high dimensions. Existing dynamic inference techniques often generate spurious invariants that fit some program traces but do not generalize. With the insight that generating dynamic invariants is easy, we propose to verify these invariants statically using k-inductive SMT theorem proving which allows us to validate invariants that are not classically inductive. Results on difficult kernels involving nonlinear arithmetic and abstract arrays suggest that this hybrid approach efficiently generates and proves correct program invariants.
Uses of @math -induction. The application of @math -induction is becoming increasingly popular for formulas that may not admit classic induction. Sheeran et al. applied @math -induction to verify hardware designs using SAT solvers @cite_30 . The PKIND model checker of Kahsai and Tinelli @cite_40 @cite_22 uses @math -induction and SAT/SMT solvers to verify synchronous programs in the Lustre language. Recently, Donaldson et al. @cite_19 applied @math -induction to imperative programs with multiple loops. A key distinction between our KIP architecture and these approaches is that none of them offers all four of the other properties (SMT, lemma re-use, redundancy elimination and parallelism) that we find critical for efficiently verifying large numbers of candidate invariants over programs with complex properties such as nonlinear arithmetic. However, we note that the programs and candidate invariants learned in this paper could serve as a benchmark suite for the evaluation of such theorem provers (i.e., hundreds of valid and invalid formulas involving nonlinear arithmetic, many of which are @math -inductive).
{ "cite_N": [ "@cite_30", "@cite_19", "@cite_40", "@cite_22" ], "mid": [ "1495266209", "1889756448", "1569137956", "" ], "abstract": [ "We take a fresh look at the problem of how to check safety properties of finite state machines. We are particularly interested in checking safety properties with the help of a SAT-solver. We describe some novel induction-based methods, and show how they are related to more standard fixpoint algorithms for invariance checking. We also present preliminary experimental results in the verification of FPGA cores. This demonstrates the practicality of combining a SAT-solver with induction for safety property checking of hardware in a real design flow.", "We present combined-case k-induction, a novel technique for verifying software programs. This technique draws on the strengths of the classical inductive-invariant method and a recent application of k-induction to program verification. In previous work, correctness of programs was established by separately proving a base case and inductive step. We present a new k-induction rule that takes an unstructured, reducible control flow graph (CFG), a natural loop occurring in the CFG, and a positive integer k, and constructs a single CFG in which the given loop is eliminated via an unwinding proportional to k. Recursively applying the proof rule eventually yields a loop-free CFG, which can be checked using SAT- SMT-based techniques. We state soundness of the rule, and investigate its theoretical properties. We then present two implementations of our technique: K-INDUCTOR, a verifier for C programs built on top of the CBMC model checker, and K-BOOGIE, an extension of the Boogie tool. Our experiments, using a large set of benchmarks, demonstrate that our k-induction technique frequently allows program verification to succeed using significantly weaker loop invariants than are required with the standard inductive invariant approach.", "We present a general scheme for automated instantiation-based invariant discovery. Given a transition system, the scheme produces k-inductive invariants from templates representing decidable predicates over the system's data types. The proposed scheme relies on efficient reasoning engines such as SAT and SMT solvers, and capitalizes on their ability to quickly generate counter-models of non-invariant conjectures. We discuss in detail two practical specializations of the general scheme in which templates represent partial orders. Our experimental results show that both specializations are able to quickly produce invariants from a variety of synchronous systems which prove quite useful in proving safety properties for these systems.", "" ] }
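The k-induction rule used by the tools above is easy to state: the property holds on the first k states from the initial ones, and any k consecutive property-satisfying states (reachable or not) force the property in their successor. A toy checker that replaces the SAT/SMT back end with exhaustive enumeration over a small state space (the toggle example is ours, chosen because its property is 2-inductive but not 1-inductive):

```python
def k_inductive(init, trans, prop, states, k):
    """Check k-induction over an explicitly enumerated state space.
    Base: prop holds on the first k+1 states reachable from init.
    Step: k consecutive prop-states force prop on the next state."""
    frontier = set(init)
    for _ in range(k):
        if not all(prop(s) for s in frontier):
            return False
        frontier = {trans(s) for s in frontier}
    if not all(prop(s) for s in frontier):
        return False
    # Inductive step: consider every k-step segment, not just reachable ones.
    for s in states:
        path = [s]
        for _ in range(k - 1):
            path.append(trans(path[-1]))
        if all(prop(p) for p in path) and not prop(trans(path[-1])):
            return False
    return True

# x starts at 0 and toggles via x' = 1 - x; the property "x <= 1" holds.
init, trans, prop = [0], lambda x: 1 - x, lambda x: x <= 1
states = range(-5, 6)
print(k_inductive(init, trans, prop, states, 1))  # False: not 1-inductive
print(k_inductive(init, trans, prop, states, 2))  # True: 2-inductive
```

The 1-inductive check fails because the unreachable state x = -1 satisfies the property but its successor x = 2 does not; unrolling two steps rules such states out.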
1904.07396
2940805846
Deep convolutional neural networks perform better on images containing spatially invariant noise (synthetic noise); however, their performance is limited on real-noisy photographs and requires multiple stage network modeling. To advance the practicability of denoising algorithms, this paper proposes a novel single-stage blind real image denoising network (RIDNet) by employing a modular architecture. We use a residual on the residual structure to ease the flow of low-frequency information and apply feature attention to exploit the channel dependencies. Furthermore, the evaluation in terms of quantitative metrics and visual quality on three synthetic and four real noisy datasets against 19 state-of-the-art algorithms demonstrate the superiority of our RIDNet.
In this section, we present and discuss recent trends in image denoising. Two notable denoising algorithms, NLM @cite_56 and BM3D @cite_62 , use self-similar patches. Due to their success, many variants were proposed, including SADCT @cite_36 , SAPCA @cite_52 , NLB @cite_7 , and INLM @cite_60 , which seek self-similar patches in different transform domains. Dictionary-based methods @cite_31 @cite_8 @cite_22 enforce sparsity by employing self-similar patches and learning over-complete dictionaries from clean images. Many algorithms @cite_54 @cite_15 @cite_32 investigated maximum likelihood estimation to learn a statistical prior, namely a Gaussian Mixture Model of natural patches or patch groups, for patch restoration. Furthermore, Levin et al. @cite_51 and Chatterjee et al. @cite_43 motivated external denoising @cite_55 @cite_26 @cite_61 @cite_27 by showing that an image can be recovered with negligible error by selecting reference patches from a clean external database. However, all of the external algorithms are class-specific.
{ "cite_N": [ "@cite_61", "@cite_62", "@cite_26", "@cite_22", "@cite_7", "@cite_60", "@cite_36", "@cite_8", "@cite_54", "@cite_15", "@cite_55", "@cite_52", "@cite_32", "@cite_56", "@cite_43", "@cite_27", "@cite_31", "@cite_51" ], "mid": [ "2122911643", "2056370875", "", "2045079989", "2058005980", "2317084172", "2135065661", "2536599074", "2172275395", "", "", "1485262465", "", "2097073572", "", "2203654268", "2117787067", "2037133587" ], "abstract": [ "We propose a data-dependent denoising procedure to restore noisy images. Different from existing denoising algorithms which search for patches from either the noisy image or a generic database, the new algorithm finds patches from a database that contains relevant patches. We formulate the denoising problem as an optimal filter design problem and make two contributions. First, we determine the basis function of the denoising filter by solving a group sparsity minimization problem. The optimization formulation generalizes existing denoising algorithms and offers systematic analysis of the performance. Improvement methods are proposed to enhance the patch search process. Second, we determine the spectral coefficients of the denoising filter by considering a localized Bayesian prior. The localized prior leverages the similarity of the targeted database, alleviates the intensive Bayesian computation, and links the new method to the classical linear minimum mean squared error estimation. We demonstrate applications of the proposed method in a variety of scenarios, including text images, multiview images, and face images. Experimental results show the superiority of the new algorithm over existing methods.", "We propose a novel image denoising strategy based on an enhanced sparse representation in transform domain. 
The enhancement of the sparsity is achieved by grouping similar 2D image fragments (e.g., blocks) into 3D data arrays which we call \"groups.\" Collaborative filtering is a special procedure developed to deal with these 3D groups. We realize it using the three successive steps: 3D transformation of a group, shrinkage of the transform spectrum, and inverse 3D transformation. The result is a 3D estimate that consists of the jointly filtered grouped image blocks. By attenuating the noise, the collaborative filtering reveals even the finest details shared by grouped blocks and, at the same time, it preserves the essential unique features of each individual block. The filtered blocks are then returned to their original positions. Because these blocks are overlapping, for each pixel, we obtain many different estimates which need to be combined. Aggregation is a particular averaging procedure which is exploited to take advantage of this redundancy. A significant improvement is obtained by a specially developed collaborative Wiener filtering. An algorithm based on this novel denoising strategy and its efficient implementation are presented in full detail; an extension to color-image denoising is also developed. The experimental results demonstrate that this computationally scalable algorithm achieves state-of-the-art denoising performance in terms of both peak signal-to-noise ratio and subjective visual quality.", "Where does the sparsity in image signals come from? Local and nonlocal image models have supplied complementary views toward the regularity in natural images — the former attempts to construct or learn a dictionary of basis functions that promotes the sparsity; while the latter connects the sparsity with the self-similarity of the image source by clustering. In this paper, we present a variational framework for unifying the above two views and propose a new denoising algorithm built upon clustering-based sparse representation (CSR). 
Inspired by the success of l 1 -optimization, we have formulated a double-header l 1 -optimization problem where the regularization involves both dictionary learning and structural clustering. A surrogate-function based iterative shrinkage solution has been developed to solve the double-header l 1 -optimization problem and a probabilistic interpretation of CSR model is also included. Our experimental results have shown convincing improvements over state-of-the-art denoising technique BM3D on the class of regular texture images. The PSNR performance of CSR denoising is at least comparable and often superior to other competing schemes including BM3D on a collection of 12 generic natural images.", "Recent state-of-the-art image denoising methods use nonparametric estimation processes for @math patches and obtain surprisingly good denoising results. The mathematical and experimental evidence of two recent articles suggests that we might even be close to the best attainable performance in image denoising ever. This suspicion is supported by a remarkable convergence of all analyzed methods. Still more interestingly, most patch-based image denoising methods can be summarized in one paradigm, which unites the transform thresholding method and a Markovian Bayesian estimation. As the present paper shows, this unification is complete when the patch space is assumed to be a Gaussian mixture. Each Gaussian distribution is associated with its orthonormal basis of patch eigenvectors. Thus, transform thresholding (or a Wiener filter) is made on these local orthogonal bases. In this paper a simple patch-based Bayesian method is proposed, which on the one hand keeps most interesting features of former metho...", "Recently, the NLMeans filter has been proposed for the suppression of white Gaussian noise. This filter exploits the repetitive character of structures in an image, unlike conventional denoising algorithms, which typically operate in a local neighbourhood. 
Even though the method is quite intuitive and potentially very powerful, the PSNR and visual results are somewhat inferior to other recent state-of-the-art non-local algorithms, like KSVD and BM-3D. In this paper, we show that the NLMeans algorithm is basically the first iteration of the Jacobi optimization algorithm for robustly estimating the noise-free image. Based on this insight, we present additional improvements to the NLMeans algorithm and also an extension to noise reduction of coloured (correlated) noise. For white noise, PSNR results show that the proposed method is very competitive with the BM-3D method, while the visual quality of our method is better due to the lower presence of artifacts. For correlated noise on the other hand, we obtain a significant improvement in denoising performance compared to recent wavelet-based techniques.", "The shape-adaptive discrete cosine transform (SA-DCT) transform can be computed on a support of arbitrary shape, but retains a computational complexity comparable to that of the usual separable block-DCT (B-DCT). Despite the near-optimal decorrelation and energy compaction properties, application of the SA-DCT has been rather limited, targeted nearly exclusively to video compression. In this paper, we present a novel approach to image filtering based on the SA-DCT. We use the SA-DCT in conjunction with the Anisotropic Local Polynomial Approximation-Intersection of Confidence Intervals technique, which defines the shape of the transform's support in a pointwise adaptive manner. The thresholded or attenuated SA-DCT coefficients are used to reconstruct a local estimate of the signal within the adaptive-shape support. Since supports corresponding to different points are in general overlapping, the local estimates are averaged together using adaptive weights that depend on the region's statistics. This approach can be used for various image-processing tasks. 
In this paper, we consider, in particular, image denoising and image deblocking and deringing from block-DCT compression. A special structural constraint in luminance-chrominance space is also proposed to enable an accurate filtering of color images. Simulation experiments show a state-of-the-art quality of the final estimate, both in terms of objective criteria and visual appearance. Thanks to the adaptive support, reconstructed edges are clean, and no unpleasant ringing artifacts are introduced by the fitted transform", "We propose in this paper to unify two different approaches to image restoration: On the one hand, learning a basis set (dictionary) adapted to sparse signal descriptions has proven to be very effective in image reconstruction and classification tasks. On the other hand, explicitly exploiting the self-similarities of natural images has led to the successful non-local means approach to image restoration. We propose simultaneous sparse coding as a framework for combining these two approaches in a natural manner. This is achieved by jointly decomposing groups of similar signals on subsets of the learned dictionary. Experimental results in image denoising and demosaicking tasks with synthetic and real noise show that the proposed method outperforms the state of the art, making it possible to effectively restore raw images from digital cameras at a reasonable speed and memory cost.", "Learning good image priors is of utmost importance for the study of vision, computer vision and image processing applications. Learning priors and optimizing over whole images can lead to tremendous computational challenges. In contrast, when we work with small image patches, it is possible to learn priors and perform patch restoration very efficiently. This raises three questions - do priors that give high likelihood to the data also lead to good performance in restoration? Can we use such patch based priors to restore a full image? Can we learn better patch priors? 
In this work we answer these questions. We compare the likelihood of several patch models and show that priors that give high likelihood to data perform better in patch restoration. Motivated by this result, we propose a generic framework which allows for whole image restoration using any patch based prior for which a MAP (or approximate MAP) estimate can be calculated. We show how to derive an appropriate cost function, how to optimize it and how to use it to restore whole images. Finally, we present a generic, surprisingly simple Gaussian Mixture prior, learned from a set of natural images. When used with the proposed framework, this Gaussian Mixture Model outperforms all other generic prior methods for image denoising, deblurring and inpainting.", "", "", "We propose an image denoising method that exploits nonlocal image modeling, principal component analysis (PCA), and local shape-adaptive anisotropic estimation. The nonlocal modeling is exploited by grouping similar image patches in 3-D groups. The denoising is performed by shrinkage of the spectrum of a 3-D transform applied on such groups. The effectiveness of the shrinkage depends on the ability of the transform to sparsely represent the true-image data, thus separating it from the noise. We propose to improve the sparsity in two aspects. First, we employ image patches (neighborhoods) which can have data-adaptive shape. Second, we propose PCA on these adaptive-shape neighborhoods as part of the employed 3-D transform. The PCA bases are obtained by eigenvalue decomposition of empirical second-moment matrices that are estimated from groups of similar adaptive-shape neighborhoods. We show that the proposed method is competitive and outperforms some of the current best denoising methods, especially in preserving image details and introducing very few artifacts.", "", "We propose a new measure, the method noise, to evaluate and compare the performance of digital image denoising methods. 
We first compute and analyze this method noise for a wide class of denoising algorithms, namely the local smoothing filters. Second, we propose a new algorithm, the nonlocal means (NL-means), based on a nonlocal averaging of all pixels in the image. Finally, we present some experiments comparing the NL-means algorithm and the local smoothing filters.", "", "Natural image modeling plays a key role in many vision problems such as image denoising. Image priors are widely used to regularize the denoising process, which is an illposed inverse problem. One category of denoising methods exploit the priors (e.g., TV, sparsity) learned from external clean images to reconstruct the given noisy image, while another category of methods exploit the internal prior (e.g., self-similarity) to reconstruct the latent image. Though the internal prior based methods have achieved impressive denoising results, the improvement of visual quality will become very difficult with the increase of noise level. In this paper, we propose to exploit image external patch prior and internal self-similarity prior jointly, and develop an external patch prior guided internal clustering algorithm for image denoising. It is known that natural image patches form multiple subspaces. By utilizing Gaussian mixture models (GMMs) learning, image similar patches can be clustered and the subspaces can be learned. The learned GMMs from clean images are then used to guide the clustering of noisypatches of the input noisy images, followed by a low-rank approximation process to estimate the latent subspace for image recovery. Numerical experiments show that the proposed method outperforms many state-of-the-art denoising algorithms such as BM3D and WNNM.", "In super-resolution (SR) reconstruction of images, regularization becomes crucial when insufficient number of measured low-resolution images is supplied. 
Beyond making the problem algebraically well posed, a properly chosen regularization can direct the solution toward a better quality outcome. Even the extreme case—a SR reconstruction from a single measured image—can be made successful with a well-chosen regularization. Much of the progress made in the past two decades on inverse problems in image processing can be attributed to the advances in forming or choosing the way to practice the regularization. A Bayesian point of view interpret this as a way of including the prior distribution of images, which sheds some light on the complications involved. This paper reviews an emerging powerful family of regularization techniques that is drawing attention in recent years—the example-based approach. We describe how examples can and have been used effectively for regularization of inverse problems, reviewing the main contributions along these lines in the literature, and organizing this information into major trends and directions. A description of the state-of-the-art in this field, along with supporting simulation results on the image scale-up problem are given. This paper concludes with an outline of the outstanding challenges this field faces today.", "The goal of natural image denoising is to estimate a clean version of a given noisy image, utilizing prior knowledge on the statistics of natural images. The problem has been studied intensively with considerable progress made in recent years. However, it seems that image denoising algorithms are starting to converge and recent algorithms improve over previous ones by only fractional dB values. It is thus important to understand how much more can we still improve natural image denoising algorithms and what are the inherent limits imposed by the actual statistics of the data. The challenge in evaluating such limits is that constructing proper models of natural image statistics is a long standing and yet unsolved problem. 
To overcome the absence of accurate image priors, this paper takes a non parametric approach and represents the distribution of natural images using a huge set of 1010 patches. We then derive a simple statistical measure which provides a lower bound on the optimal Bayesian minimum mean square error (MMSE). This imposes a limit on the best possible results of denoising algorithms which utilize a fixed support around a denoised pixel and a generic natural image prior. Our findings suggest that for small windows, state of the art denoising algorithms are approaching optimality and cannot be further improved beyond ∼ 0.1dB values." ] }
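The non-local means scheme cited above (NLM @cite_56) replaces each pixel by a weighted average of the pixels in a search window, weighted by the similarity of the patches around them. A minimal, unoptimized sketch of that idea; the patch/search sizes and filtering parameter h are illustrative choices, not values from the cited paper:

```python
import numpy as np

def nlm_denoise(img, patch=3, search=5, h=0.15):
    # Non-local means: every pixel becomes a weighted average of the pixels
    # in its search window, weighted by the similarity of surrounding patches.
    p, s = patch // 2, search // 2
    padded = np.pad(img, p + s, mode="reflect")
    H, W = img.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            ci, cj = i + p + s, j + p + s          # centre in padded coords
            ref = padded[ci - p:ci + p + 1, cj - p:cj + p + 1]
            wsum, acc = 0.0, 0.0
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - p:ni + p + 1, nj - p:nj + p + 1]
                    # patch-similarity weight: similar patches contribute more
                    w = np.exp(-np.mean((ref - cand) ** 2) / h ** 2)
                    wsum += w
                    acc += w * padded[ni, nj]
            out[i, j] = acc / wsum
    return out
```

Because the weights come from whole-patch distances rather than single pixel values, repetitive structure is preserved while independent noise is averaged away, which is the self-similarity property the variants above (SADCT, SAPCA, NLB, INLM) exploit in other transform domains.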
1904.07396
2940805846
Deep convolutional neural networks perform better on images containing spatially invariant noise (synthetic noise); however, their performance is limited on real-noisy photographs and requires multiple stage network modeling. To advance the practicability of denoising algorithms, this paper proposes a novel single-stage blind real image denoising network (RIDNet) by employing a modular architecture. We use a residual on the residual structure to ease the flow of low-frequency information and apply feature attention to exploit the channel dependencies. Furthermore, the evaluation in terms of quantitative metrics and visual quality on three synthetic and four real noisy datasets against 19 state-of-the-art algorithms demonstrate the superiority of our RIDNet.
With the rise of convolutional neural networks (CNNs), image denoising algorithms @cite_10 @cite_63 @cite_33 @cite_28 @cite_53 @cite_44 have achieved a notable performance boost. Prominent denoising networks such as DnCNN @cite_10 and IrCNN @cite_63 predict the noise residual present in the image rather than the denoised image itself; that is, the loss is computed against the ground-truth noise instead of the original clean image. Both networks achieved strong results despite simple architectures composed of repeated blocks of convolution, batch normalization, and ReLU activation. However, IrCNN @cite_63 and DnCNN @cite_10 rely on blindly predicted noise, without taking into account the underlying structures and textures of the noisy image.
{ "cite_N": [ "@cite_33", "@cite_28", "@cite_53", "@cite_44", "@cite_63", "@cite_10" ], "mid": [ "2550616896", "", "", "2771208044", "2613155248", "2508457857" ], "abstract": [ "We propose a novel deep network architecture for grayscale and color image denoising that is based on a non-local image model. Our motivation for the overall design of the proposed network stems from variational methods that exploit the inherent non-local self-similarity property of natural images. We build on this concept and introduce deep networks that perform non-local processing and at the same time they significantly benefit from discriminative learning. Experiments on the Berkeley segmentation dataset, comparing several state-of-the-art methods, show that the proposed non-local models achieve the best reported denoising performance both for grayscale and color images for all the tested noise levels. It is also worth noting that this increase in performance comes at no extra cost on the capacity of the network compared to existing alternative deep network architectures. In addition, we highlight a direct link of the proposed non-local models to convolutional neural networks. This connection is of significant importance since it allows our models to take full advantage of the latest advances on GPU computing in deep learning and makes them amenable to efficient implementations through their inherent parallelism.", "", "", "We propose to learn a fully-convolutional network model that consists of a Chain of Identity Mapping Modules (CIMM) for image denoising. The CIMM structure possesses two distinctive features that are important for the noise removal task. Firstly, each residual unit employs identity mappings as the skip connections and receives pre-activated input in order to preserve the gradient magnitude propagated in both the forward and backward directions. 
Secondly, by utilizing dilated kernels for the convolution layers in the residual branch, in other words within an identity mapping module, each neuron in the last convolution layer can observe the full receptive field of the first layer. After being trained on the BSD400 dataset, the proposed network produces remarkably higher numerical accuracy and better visual image quality than the state-of-the-art when being evaluated on conventional benchmark images and the BSD68 dataset.", "Model-based optimization methods and discriminative learning methods have been the two dominant strategies for solving various inverse problems in low-level vision. Typically, those two kinds of methods have their respective merits and drawbacks, e.g., model-based optimization methods are flexible for handling different inverse problems but are usually time-consuming with sophisticated priors for the purpose of good performance, in the meanwhile, discriminative learning methods have fast testing speed but their application range is greatly restricted by the specialized task. Recent works have revealed that, with the aid of variable splitting techniques, denoiser prior can be plugged in as a modular part of model-based optimization methods to solve other inverse problems (e.g., deblurring). Such an integration induces considerable advantage when the denoiser is obtained via discriminative learning. However, the study of integration with fast discriminative denoiser prior is still lacking. To this end, this paper aims to train a set of fast and effective CNN (convolutional neural network) denoisers and integrate them into model-based optimization method to solve other inverse problems. 
Experimental results demonstrate that the learned set of denoisers can not only achieve promising Gaussian denoising results but also can be used as prior to deliver good performance for various low-level vision applications.", "The discriminative model learning for image denoising has been recently attracting considerable attentions due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architecture, learning algorithm, and regularization method into image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from the existing discriminative denoising models which usually train a specific model for additive white Gaussian noise at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle with several general image denoising tasks, such as Gaussian denoising, single image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks, but also be efficiently implemented by benefiting from GPU computing." ] }
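The residual-learning strategy described in the DnCNN abstract above, regress the noise and subtract it from the input, can be sketched with a linear least-squares model standing in for the deep network. The synthetic data and the linear stand-in are illustrative assumptions, not the cited architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic training set: each row is a small signal plus Gaussian noise
clean = rng.random((500, 16))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

# residual learning: the regression target is the noise itself
# (noisy - clean), not the clean signal
target = noisy - clean

# a linear map W stands in for the CNN body (DnCNN stacks conv/BN/ReLU blocks)
W, *_ = np.linalg.lstsq(noisy, target, rcond=None)

def denoise(x):
    # clean estimate = input minus the predicted residual
    return x - x @ W
```

The point of the formulation is that the network (here, W) only has to model the noise, which is statistically simpler than modeling the clean image content itself.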
1904.07396
2940805846
Deep convolutional neural networks perform better on images containing spatially invariant noise (synthetic noise); however, their performance is limited on real-noisy photographs and requires multiple stage network modeling. To advance the practicability of denoising algorithms, this paper proposes a novel single-stage blind real image denoising network (RIDNet) by employing a modular architecture. We use a residual on the residual structure to ease the flow of low-frequency information and apply feature attention to exploit the channel dependencies. Furthermore, the evaluation in terms of quantitative metrics and visual quality on three synthetic and four real noisy datasets against 19 state-of-the-art algorithms demonstrate the superiority of our RIDNet.
Another influential image restoration framework is Trainable Nonlinear Reaction Diffusion (TNRD) @cite_1 , which incorporates a field-of-experts prior @cite_37 into a deep neural network unrolled for a fixed number of inference steps, extending the nonlinear diffusion paradigm with trainable parametrized linear filters and influence functions. Although the results of TNRD are favorable, the model requires a significant amount of data to learn the parameters and influence functions, as well as careful fine-tuning, hyper-parameter selection, and stage-wise training. Similarly, the non-local color net (NLNet) @cite_33 was motivated by non-local self-similarity (NSS) priors, coupling non-local self-similarity with discriminative learning. NLNet improved upon the traditional methods, but it lags behind most CNNs @cite_63 @cite_10 due to its adoption of NSS priors, since it is unable to find analogs for all the patches in an image.
{ "cite_N": [ "@cite_37", "@cite_33", "@cite_1", "@cite_63", "@cite_10" ], "mid": [ "", "2550616896", "1906770428", "2613155248", "2508457857" ], "abstract": [ "", "We propose a novel deep network architecture for grayscale and color image denoising that is based on a non-local image model. Our motivation for the overall design of the proposed network stems from variational methods that exploit the inherent non-local self-similarity property of natural images. We build on this concept and introduce deep networks that perform non-local processing and at the same time they significantly benefit from discriminative learning. Experiments on the Berkeley segmentation dataset, comparing several state-of-the-art methods, show that the proposed non-local models achieve the best reported denoising performance both for grayscale and color images for all the tested noise levels. It is also worth noting that this increase in performance comes at no extra cost on the capacity of the network compared to existing alternative deep network architectures. In addition, we highlight a direct link of the proposed non-local models to convolutional neural networks. This connection is of significant importance since it allows our models to take full advantage of the latest advances on GPU computing in deep learning and makes them amenable to efficient implementations through their inherent parallelism.", "Image restoration is a long-standing problem in low-level computer vision with many interesting applications. We describe a flexible learning framework based on the concept of nonlinear reaction diffusion models for various image restoration problems. By embodying recent improvements in nonlinear diffusion models, we propose a dynamic nonlinear reaction diffusion model with time-dependent parameters ( i.e. , linear filters and influence functions). 
In contrast to previous nonlinear diffusion models, all the parameters, including the filters and the influence functions, are simultaneously learned from training data through a loss based approach. We call this approach TNRD— Trainable Nonlinear Reaction Diffusion . The TNRD approach is applicable for a variety of image restoration tasks by incorporating appropriate reaction force. We demonstrate its capabilities with three representative applications, Gaussian image denoising, single image super resolution and JPEG deblocking. Experiments show that our trained nonlinear diffusion models largely benefit from the training of the parameters and finally lead to the best reported performance on common test datasets for the tested applications. Our trained models preserve the structural simplicity of diffusion models and take only a small number of diffusion steps, thus are highly efficient. Moreover, they are also well-suited for parallel computation on GPUs, which makes the inference procedure extremely fast.", "Model-based optimization methods and discriminative learning methods have been the two dominant strategies for solving various inverse problems in low-level vision. Typically, those two kinds of methods have their respective merits and drawbacks, e.g., model-based optimization methods are flexible for handling different inverse problems but are usually time-consuming with sophisticated priors for the purpose of good performance, in the meanwhile, discriminative learning methods have fast testing speed but their application range is greatly restricted by the specialized task. Recent works have revealed that, with the aid of variable splitting techniques, denoiser prior can be plugged in as a modular part of model-based optimization methods to solve other inverse problems (e.g., deblurring). Such an integration induces considerable advantage when the denoiser is obtained via discriminative learning. 
However, the study of integration with fast discriminative denoiser prior is still lacking. To this end, this paper aims to train a set of fast and effective CNN (convolutional neural network) denoisers and integrate them into model-based optimization method to solve other inverse problems. Experimental results demonstrate that the learned set of denoisers can not only achieve promising Gaussian denoising results but also can be used as prior to deliver good performance for various low-level vision applications.", "The discriminative model learning for image denoising has been recently attracting considerable attentions due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architecture, learning algorithm, and regularization method into image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from the existing discriminative denoising models which usually train a specific model for additive white Gaussian noise at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle with several general image denoising tasks, such as Gaussian denoising, single image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks, but also be efficiently implemented by benefiting from GPU computing." ] }
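The nonlinear diffusion paradigm that TNRD generalizes can be illustrated with a single hand-set, Perona-Malik-style diffusion step. TNRD instead learns the filters and influence functions from data, so the step below (with illustrative lam and kappa values) is only the classical starting point, not the trained model:

```python
import numpy as np

def diffusion_step(u, lam=0.1, kappa=0.1):
    # one explicit nonlinear diffusion step; the edge-stopping influence
    # function g damps diffusion across large gradients (edges), while
    # TNRD replaces both the fixed neighbour filters and g with learned ones
    n = np.roll(u, -1, axis=0) - u   # differences to the four neighbours
    s = np.roll(u, 1, axis=0) - u
    e = np.roll(u, -1, axis=1) - u
    w = np.roll(u, 1, axis=1) - u
    g = lambda d: np.exp(-((d / kappa) ** 2))  # small weight across edges
    return u + lam * (g(n) * n + g(s) * s + g(e) * e + g(w) * w)
```

Iterating the step smooths small noise-driven differences while the influence function suppresses smoothing across strong edges; TNRD's contribution is making the whole iteration, filters and influence functions included, trainable end to end.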
1904.07396
2940805846
Deep convolutional neural networks perform better on images containing spatially invariant noise (synthetic noise); however, their performance is limited on real-noisy photographs and requires multiple stage network modeling. To advance the practicability of denoising algorithms, this paper proposes a novel single-stage blind real image denoising network (RIDNet) by employing a modular architecture. We use a residual on the residual structure to ease the flow of low-frequency information and apply feature attention to exploit the channel dependencies. Furthermore, the evaluation in terms of quantitative metrics and visual quality on three synthetic and four real noisy datasets against 19 state-of-the-art algorithms demonstrate the superiority of our RIDNet.
Building upon the success of DnCNN @cite_10 , Jiao et al. proposed a network consisting of two stacked subnets, named "FormattingNet" and "DiffResNet" respectively. The architectures of the two subnets are similar; the difference lies in the loss layers used. The first subnet employs total variation and perceptual losses, while the second one uses @math loss. The overall model, named FormResNet, improves upon @cite_63 @cite_10 by a small margin. Lately, Bae et al. @cite_29 employed persistent homology analysis @cite_16 in the wavelet transform domain to learn features for CNN denoising. The performance of the model is marginally better than @cite_10 @cite_23 , which can be attributed to the large number of feature maps employed rather than to the model itself. Recently, Anwar et al. introduced CIMM, a deep denoising CNN architecture composed of identity mapping modules @cite_44 . The network learns features in cascaded identity modules using dilated kernels and uses self-ensemble to boost performance. CIMM improved upon all the previous CNN models @cite_10 @cite_23 .
{ "cite_N": [ "@cite_29", "@cite_44", "@cite_23", "@cite_63", "@cite_16", "@cite_10" ], "mid": [ "2551161082", "2771208044", "2740494144", "2613155248", "", "2508457857" ], "abstract": [ "The latest deep learning approaches perform better than the state-of-the-art signal processing approaches in various image restoration tasks. However, if an image contains many patterns and structures, the performance of these CNNs is still inferior. To address this issue, here we propose a novel feature space deep residual learning algorithm that outperforms the existing residual learning. The main idea is originated from the observation that the performance of a learning algorithm can be improved if the input and or label manifolds can be made topologically simpler by an analytic mapping to a feature space. Our extensive numerical studies using denoising experiments and NTIRE single-image super-resolution (SISR) competition demonstrate that the proposed feature space residual learning outperforms the existing state-of-the-art approaches. Moreover, our algorithm was ranked high in the NTIRE competition with 5-10 times faster computational time compared to the top ranked teams.", "We propose to learn a fully-convolutional network model that consists of a Chain of Identity Mapping Modules (CIMM) for image denoising. The CIMM structure possesses two distinctive features that are important for the noise removal task. Firstly, each residual unit employs identity mappings as the skip connections and receives pre-activated input in order to preserve the gradient magnitude propagated in both the forward and backward directions. Secondly, by utilizing dilated kernels for the convolution layers in the residual branch, in other words within an identity mapping module, each neuron in the last convolution layer can observe the full receptive field of the first layer. 
After being trained on the BSD400 dataset, the proposed network produces remarkably higher numerical accuracy and better visual image quality than the state-of-the-art when being evaluated on conventional benchmark images and the BSD68 dataset.", "In this paper, we propose a deep CNN to tackle the image restoration problem by learning the structured residual. Previous deep learning based methods directly learn the mapping from corrupted images to clean images, and may suffer from the gradient exploding vanishing problems of deep neural networks. We propose to address the image restoration problem by learning the structured details and recovering the latent clean image together, from the shared information between the corrupted image and the latent image. In addition, instead of learning the pure difference (corruption), we propose to add a \"residual formatting layer\" to format the residual to structured information, which allows the network to converge faster and boosts the performance. Furthermore, we propose a cross-level loss net to ensure both pixel-level accuracy and semantic-level visual quality. Evaluations on public datasets show that the proposed method outperforms existing approaches quantitatively and qualitatively.", "Model-based optimization methods and discriminative learning methods have been the two dominant strategies for solving various inverse problems in low-level vision. Typically, those two kinds of methods have their respective merits and drawbacks, e.g., model-based optimization methods are flexible for handling different inverse problems but are usually time-consuming with sophisticated priors for the purpose of good performance, in the meanwhile, discriminative learning methods have fast testing speed but their application range is greatly restricted by the specialized task. 
Recent works have revealed that, with the aid of variable splitting techniques, denoiser prior can be plugged in as a modular part of model-based optimization methods to solve other inverse problems (e.g., deblurring). Such an integration induces considerable advantage when the denoiser is obtained via discriminative learning. However, the study of integration with fast discriminative denoiser prior is still lacking. To this end, this paper aims to train a set of fast and effective CNN (convolutional neural network) denoisers and integrate them into model-based optimization method to solve other inverse problems. Experimental results demonstrate that the learned set of denoisers can not only achieve promising Gaussian denoising results but also can be used as prior to deliver good performance for various low-level vision applications.", "", "The discriminative model learning for image denoising has been recently attracting considerable attentions due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architecture, learning algorithm, and regularization method into image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from the existing discriminative denoising models which usually train a specific model for additive white Gaussian noise at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle with several general image denoising tasks, such as Gaussian denoising, single image super-resolution, and JPEG image deblocking. 
Our extensive experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks, but also be efficiently implemented by benefiting from GPU computing." ] }
1904.07396
2940805846
Deep convolutional neural networks perform better on images containing spatially invariant noise (synthetic noise); however, their performance is limited on real-noisy photographs and requires multiple stage network modeling. To advance the practicability of denoising algorithms, this paper proposes a novel single-stage blind real image denoising network (RIDNet) by employing a modular architecture. We use a residual on the residual structure to ease the flow of low-frequency information and apply feature attention to exploit the channel dependencies. Furthermore, the evaluation in terms of quantitative metrics and visual quality on three synthetic and four real noisy datasets against 19 state-of-the-art algorithms demonstrate the superiority of our RIDNet.
Recently, many algorithms have focused on blind denoising of real-noisy images. The algorithms of @cite_63 @cite_10 @cite_23 benefit from the modeling capacity of CNNs and have shown the ability to learn a single blind denoising model; however, the denoising performance is limited, and the results are not satisfactory on real photographs. Generally speaking, real-noisy image denoising is a two-step process: the first step involves noise estimation, while the second addresses non-blind denoising. Noise Clinic (NC) @cite_47 estimates a signal- and frequency-dependent noise model and then denoises the image using non-local Bayes (NLB). In comparison, Zhang et al. @cite_45 proposed a non-blind Gaussian denoising network, termed FFDNet, that can produce satisfying results on some real noisy images; however, it requires manual intervention to select a high noise level.
{ "cite_N": [ "@cite_45", "@cite_23", "@cite_63", "@cite_47", "@cite_10" ], "mid": [ "2764207251", "2740494144", "2613155248", "1993205988", "2508457857" ], "abstract": [ "Due to the fast inference and good performance, discriminative learning methods have been widely studied in image denoising. However, these methods mostly learn a specific model for each noise level, and require multiple models for denoising images with different noise levels. They also lack flexibility to deal with spatially variant noise, limiting their applications in practical denoising. To address these issues, we present a fast and flexible denoising convolutional neural network, namely FFDNet, with a tunable noise level map as the input. The proposed FFDNet works on downsampled sub-images, achieving a good trade-off between inference speed and denoising performance. In contrast to the existing discriminative denoisers, FFDNet enjoys several desirable properties, including: 1) the ability to handle a wide range of noise levels (i.e., [0, 75]) effectively with a single network; 2) the ability to remove spatially variant noise by specifying a non-uniform noise level map; and 3) faster speed than benchmark BM3D even on CPU without sacrificing denoising performance. Extensive experiments on synthetic and real noisy images are conducted to evaluate FFDNet in comparison with state-of-the-art denoisers. The results show that FFDNet is effective and efficient, making it highly attractive for practical denoising applications.", "In this paper, we propose a deep CNN to tackle the image restoration problem by learning the structured residual. Previous deep learning based methods directly learn the mapping from corrupted images to clean images, and may suffer from the gradient exploding vanishing problems of deep neural networks. 
We propose to address the image restoration problem by learning the structured details and recovering the latent clean image together, from the shared information between the corrupted image and the latent image. In addition, instead of learning the pure difference (corruption), we propose to add a \"residual formatting layer\" to format the residual to structured information, which allows the network to converge faster and boosts the performance. Furthermore, we propose a cross-level loss net to ensure both pixel-level accuracy and semantic-level visual quality. Evaluations on public datasets show that the proposed method outperforms existing approaches quantitatively and qualitatively.", "Model-based optimization methods and discriminative learning methods have been the two dominant strategies for solving various inverse problems in low-level vision. Typically, those two kinds of methods have their respective merits and drawbacks, e.g., model-based optimization methods are flexible for handling different inverse problems but are usually time-consuming with sophisticated priors for the purpose of good performance, in the meanwhile, discriminative learning methods have fast testing speed but their application range is greatly restricted by the specialized task. Recent works have revealed that, with the aid of variable splitting techniques, denoiser prior can be plugged in as a modular part of model-based optimization methods to solve other inverse problems (e.g., deblurring). Such an integration induces considerable advantage when the denoiser is obtained via discriminative learning. However, the study of integration with fast discriminative denoiser prior is still lacking. To this end, this paper aims to train a set of fast and effective CNN (convolutional neural network) denoisers and integrate them into model-based optimization method to solve other inverse problems. 
Experimental results demonstrate that the learned set of denoisers can not only achieve promising Gaussian denoising results but also can be used as prior to deliver good performance for various low-level vision applications.", "", "The discriminative model learning for image denoising has been recently attracting considerable attentions due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architecture, learning algorithm, and regularization method into image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from the existing discriminative denoising models which usually train a specific model for additive white Gaussian noise at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle with several general image denoising tasks, such as Gaussian denoising, single image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks, but also be efficiently implemented by benefiting from GPU computing." ] }
1904.07396
2940805846
Deep convolutional neural networks perform better on images containing spatially invariant noise (synthetic noise); however, their performance is limited on real-noisy photographs and requires multiple stage network modeling. To advance the practicability of denoising algorithms, this paper proposes a novel single-stage blind real image denoising network (RIDNet) by employing a modular architecture. We use a residual on the residual structure to ease the flow of low-frequency information and apply feature attention to exploit the channel dependencies. Furthermore, the evaluation in terms of quantitative metrics and visual quality on three synthetic and four real noisy datasets against 19 state-of-the-art algorithms demonstrate the superiority of our RIDNet.
Very recently, CBDNet @cite_2 trained a blind denoising model for real photographs. CBDNet @cite_2 is composed of two subnetworks: noise estimation and non-blind denoising. CBDNet @cite_2 also incorporates multiple losses, is engineered to train on realistic synthetic noise and real-image noise, and enforces a higher noise standard deviation for low-noise images. Furthermore, @cite_2 @cite_45 may require manual intervention to improve results. On the other hand, we present an end-to-end architecture that learns the noise and produces results on real noisy images without requiring separate subnets or manual intervention.
{ "cite_N": [ "@cite_45", "@cite_2" ], "mid": [ "2764207251", "2832157980" ], "abstract": [ "Due to the fast inference and good performance, discriminative learning methods have been widely studied in image denoising. However, these methods mostly learn a specific model for each noise level, and require multiple models for denoising images with different noise levels. They also lack flexibility to deal with spatially variant noise, limiting their applications in practical denoising. To address these issues, we present a fast and flexible denoising convolutional neural network, namely FFDNet, with a tunable noise level map as the input. The proposed FFDNet works on downsampled sub-images, achieving a good trade-off between inference speed and denoising performance. In contrast to the existing discriminative denoisers, FFDNet enjoys several desirable properties, including: 1) the ability to handle a wide range of noise levels (i.e., [0, 75]) effectively with a single network; 2) the ability to remove spatially variant noise by specifying a non-uniform noise level map; and 3) faster speed than benchmark BM3D even on CPU without sacrificing denoising performance. Extensive experiments on synthetic and real noisy images are conducted to evaluate FFDNet in comparison with state-of-the-art denoisers. The results show that FFDNet is effective and efficient, making it highly attractive for practical denoising applications.", "While deep convolutional neural networks (CNNs) have achieved impressive success in image denoising with additive white Gaussian noise (AWGN), their performance remains limited on real-world noisy photographs. The main reason is that their learned models are easy to overfit on the simplified AWGN model which deviates severely from the complicated real-world noise model. 
In order to improve the generalization ability of deep CNN denoisers, we suggest training a convolutional blind denoising network (CBDNet) with more realistic noise model and real-world noisy-clean image pairs. On the one hand, both signal-dependent noise and in-camera signal processing pipeline is considered to synthesize realistic noisy images. On the other hand, real-world noisy photographs and their nearly noise-free counterparts are also included to train our CBDNet. To further provide an interactive strategy to rectify denoising result conveniently, a noise estimation subnetwork with asymmetric learning to suppress under-estimation of noise level is embedded into CBDNet. Extensive experimental results on three datasets of real-world noisy photographs clearly demonstrate the superior performance of CBDNet over state-of-the-arts in terms of quantitative metrics and visual quality. The code has been made available at this https URL." ] }
1904.07664
2936912582
The LOCAL model is among the main models for studying locality in the framework of distributed network computing. This model is however subject to pertinent criticisms, including the facts that all nodes wake up simultaneously, perform in lock steps, and are failure-free. We show that relaxing these hypotheses to some extent does not hurt local computing. In particular, we show that, for any construction task @math associated to a locally checkable labeling (LCL), if @math is solvable in @math rounds in the LOCAL model, then @math remains solvable in @math rounds in the asynchronous LOCAL model. This improves the result by [SSS 2016], which was restricted to 3-coloring the rings. More generally, the main contribution of this paper is to show that, perhaps surprisingly, asynchrony and failures in the computations do not restrict the power of the LOCAL model, as long as the communications remain synchronous and failure-free.
In addition to the aforementioned related work, it is worth mentioning several variants of the LOCAL model previously investigated in the literature @cite_18 @cite_0 @cite_15 @cite_4 , among which the SLOCAL and the centralized-local models play an important role.
{ "cite_N": [ "@cite_0", "@cite_18", "@cite_4", "@cite_15" ], "mid": [ "2552279664", "2886624408", "2610932619", "2487364517" ], "abstract": [ "This paper is centered on the complexity of graph problems in the well-studied LOCAL model of distributed computing, introduced by Linial [FOCS '87]. It is widely known that for many of the classic distributed graph problems (including maximal independent set (MIS) and (Δ+1)-vertex coloring), the randomized complexity is at most polylogarithmic in the size n of the network, while the best deterministic complexity is typically 2O(√logn). Understanding and potentially narrowing down this exponential gap is considered to be one of the central long-standing open questions in the area of distributed graph algorithms. We investigate the problem by introducing a complexity-theoretic framework that allows us to shed some light on the role of randomness in the LOCAL model. We define the SLOCAL model as a sequential version of the LOCAL model. Our framework allows us to prove completeness results with respect to the class of problems which can be solved efficiently in the SLOCAL model, implying that if any of the complete problems can be solved deterministically in logn rounds in the LOCAL model, we can deterministically solve all efficient SLOCAL-problems (including MIS and (Δ+1)-coloring) in logn rounds in the LOCAL model. Perhaps most surprisingly, we show that a rather rudimentary looking graph coloring problem is complete in the above sense: Color the nodes of a graph with colors red and blue such that each node of sufficiently large polylogarithmic degree has at least one neighbor of each color. The problem admits a trivial zero-round randomized solution. The result can be viewed as showing that the only obstacle to getting efficient determinstic algorithms in the LOCAL model is an efficient algorithm to approximately round fractional values into integer values. 
In addition, our formal framework also allows us to develop polylogarithmic-time randomized distributed algorithms in a simpler way. As a result, we provide a polylog-time distributed approximation scheme for arbitrary distributed covering and packing integer linear programs.", "Abstract We consider two models of computation: centralized local algorithms and local distributed algorithms. Algorithms in one model are adapted to the other model to obtain improved algorithms. Distributed vertex coloring is employed to design improved centralized local algorithms for: maximal independent set, maximal matching, and an approximation scheme for maximum (weighted) matching over bounded degree graphs. The improvement is threefold: the algorithms are deterministic, stateless, and the number of probes grows polynomially in log ⁎ ⁡ n , where n is the number of vertices of the input graph. The recursive centralized local improvement technique by Nguyen and Onak (FOCS 2008) is employed to obtain a distributed approximation scheme for maximum (weighted) matching.", "This work presents a classification of weak models of distributed computing. We focus on deterministic distributed algorithms, and study models of computing that are weaker versions of the widely-studied port-numbering model. In the port-numbering model, a node of degree @math d receives messages through @math d input ports and sends messages through @math d output ports, both numbered with @math 1 , 2 , ? , d . In this work, @math VV c is the class of all graph problems that can be solved in the standard port-numbering model. We study the following subclasses of @math VV c : Now we have many trivial containment relations, such as @math SB ⊆ MB ⊆ VB ⊆ VV ⊆ VV c , but it is not obvious if, for example, either of @math VB ⊆ SV or @math SV ⊆ VB should hold. Nevertheless, it turns out that we can identify a linear order on these classes. We prove that @math SB ? MB = VB ? SV = MV = VV ? VV c . 
The same holds for the constant-time versions of these classes. We also show that the constant-time variants of these classes can be characterised by a corresponding modal logic. Hence the linear order identified in this work has direct implications in the study of the expressibility of modal logic. Conversely, one can use tools from modal logic to study these classes.", "We show an ( ( ^ 1 3 - 3 ) ) lower bound on the runtime of any deterministic distributed ( O ( ^ 1+ ) )-graph coloring algorithm in a weak variant of the ( LOCAL ) model." ] }
1904.07664
2936912582
The LOCAL model is among the main models for studying locality in the framework of distributed network computing. This model is however subject to pertinent criticisms, including the facts that all nodes wake up simultaneously, perform in lock steps, and are failure-free. We show that relaxing these hypotheses to some extent does not hurt local computing. In particular, we show that, for any construction task @math associated to a locally checkable labeling (LCL), if @math is solvable in @math rounds in the LOCAL model, then @math remains solvable in @math rounds in the asynchronous LOCAL model. This improves the result by [SSS 2016], which was restricted to 3-coloring the rings. More generally, the main contribution of this paper is to show that, perhaps surprisingly, asynchrony and failures in the computations do not restrict the power of the LOCAL model, as long as the communications remain synchronous and failure-free.
In the model of @cite_18 , the nodes are queried in arbitrary order, and a centralized (randomized) algorithm must answer each query by returning the status of the queried node in some (unknown) global solution. For instance, in the case of maximal independent set, the algorithm must answer whether the queried node belongs to the independent set or not. When a node is queried, the algorithm can probe nodes in the vicinity of the queried node in order to forge its answer. It is required that the number of probed nodes be sublinear in the size @math of the network. More importantly, the algorithm is itself subject to a severe restriction: it must run with a memory of sublinear size. Yet, it must answer the queries correctly, w.h.p., that is, all answers must be consistent w.r.t. the task to be solved. This model obviously differs from the model considered here in many aspects, including the fact that, when a node is probed, it reveals its identity, whilst, in the latter model, only awakened nodes reveal their identities.
{ "cite_N": [ "@cite_18" ], "mid": [ "2886624408" ], "abstract": [ "Abstract We consider two models of computation: centralized local algorithms and local distributed algorithms. Algorithms in one model are adapted to the other model to obtain improved algorithms. Distributed vertex coloring is employed to design improved centralized local algorithms for: maximal independent set, maximal matching, and an approximation scheme for maximum (weighted) matching over bounded degree graphs. The improvement is threefold: the algorithms are deterministic, stateless, and the number of probes grows polynomially in log ⁎ ⁡ n , where n is the number of vertices of the input graph. The recursive centralized local improvement technique by Nguyen and Onak (FOCS 2008) is employed to obtain a distributed approximation scheme for maximum (weighted) matching." ] }
1904.07426
2940183409
Object instance segmentation is one of the most fundamental but challenging tasks in computer vision, and it requires the pixel-level image understanding. Most existing approaches address this problem by adding a mask prediction branch to a two-stage object detector with the Region Proposal Network (RPN). Although producing good segmentation results, the efficiency of these two-stage approaches is far from satisfactory, restricting their applicability in practice. In this paper, we propose a one-stage framework, SPRNet, which performs efficient instance segmentation by introducing a single pixel reconstruction (SPR) branch to off-the-shelf one-stage detectors. The added SPR branch reconstructs the pixel-level mask from every single pixel in the convolution feature map directly. Using the same ResNet-50 backbone, SPRNet achieves comparable mask AP to Mask R-CNN at a higher inference speed, and gains all-round improvements on box AP at every scale comparing with RetinaNet.
Prevalent object detection approaches can be categorized into two classes: the two-stage frameworks based on region proposals @cite_20 @cite_1 , and the one-stage frameworks based on convolutional feature maps @cite_2 @cite_19 @cite_23 . As pioneered in the R-CNN work @cite_1 , the first stage generates a set of candidate region proposals to recall as many objects as possible, and the second stage uses a deep network to classify the proposals. R-CNN has successfully underpinned many follow-up improvements such as Faster R-CNN @cite_1 and R-FCN @cite_24 , which are among the current leading frameworks for object detection. An inevitable procedure of the two-stage methods is the per-proposal prediction; when a large number of proposals is utilized, this becomes a speed bottleneck. In contrast, the one-stage methods do not rely on region proposals and thus obtain a significant improvement in speed. OverFeat @cite_12 was one of the first modern one-stage object detectors based on deep networks. SSD @cite_19 and YOLO @cite_2 are carefully designed for high speed at the expense of lower accuracy. Recently, DSSD @cite_10 and RetinaNet @cite_23 have renewed interest in one-stage methods, achieving impressive accuracy that rivals that of two-stage detectors while running at much higher speeds.
{ "cite_N": [ "@cite_1", "@cite_24", "@cite_19", "@cite_23", "@cite_2", "@cite_10", "@cite_12", "@cite_20" ], "mid": [ "2613718673", "2950800384", "2193145675", "2884561390", "2963037989", "2579985080", "1487583988", "" ], "abstract": [ "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.", "We present region-based, fully convolutional networks for accurate and efficient object detection. In contrast to previous region-based detectors such as Fast Faster R-CNN that apply a costly per-region subnetwork hundreds of times, our region-based detector is fully convolutional with almost all computation shared on the entire image. To achieve this goal, we propose position-sensitive score maps to address a dilemma between translation-invariance in image classification and translation-variance in object detection. 
Our method can thus naturally adopt fully convolutional image classifier backbones, such as the latest Residual Networks (ResNets), for object detection. We show competitive results on the PASCAL VOC datasets (e.g., 83.6 mAP on the 2007 set) with the 101-layer ResNet. Meanwhile, our result is achieved at a test-time speed of 170ms per image, 2.5-20x faster than the Faster R-CNN counterpart. Code is made publicly available at: this https URL", "We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. 
Code is available at https: github.com weiliu89 caffe tree ssd.", "The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: https: github.com facebookresearch Detectron.", "We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. 
A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background. Finally, YOLO learns very general representations of objects. It outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.", "The main contribution of this paper is an approach for introducing additional context into state-of-the-art general object detection. To achieve this we first combine a state-of-the-art classifier (Residual-101[14]) with a fast detection framework (SSD[18]). We then augment SSD+Residual-101 with deconvolution layers to introduce additional large-scale context in object detection and improve accuracy, especially for small objects, calling our resulting system DSSD for deconvolutional single shot detector. While these two contributions are easily described at a high-level, a naive implementation does not succeed. Instead we show that carefully adding additional stages of learned transformations, specifically a module for feed-forward connections in deconvolution and a new output module, enables this new approach and forms a potential way forward for further detection research. Results are shown on both PASCAL VOC and COCO detection. Our DSSD with @math input achieves 81.5 mAP on VOC2007 test, 80.0 mAP on VOC2012 test, and 33.2 mAP on COCO, outperforming a state-of-the-art method R-FCN[3] on each dataset.", "We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. 
Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat.", "" ] }
1904.07426
2940183409
Object instance segmentation is one of the most fundamental but challenging tasks in computer vision, and it requires the pixel-level image understanding. Most existing approaches address this problem by adding a mask prediction branch to a two-stage object detector with the Region Proposal Network (RPN). Although producing good segmentation results, the efficiency of these two-stage approaches is far from satisfactory, restricting their applicability in practice. In this paper, we propose a one-stage framework, SPRNet, which performs efficient instance segmentation by introducing a single pixel reconstruction (SPR) branch to off-the-shelf one-stage detectors. The added SPR branch reconstructs the pixel-level mask from every single pixel in the convolution feature map directly. Using the same ResNet-50 backbone, SPRNet achieves comparable mask AP to Mask R-CNN at a higher inference speed, and gains all-round improvements on box AP at every scale comparing with RetinaNet.
In both the detection and segmentation tasks, detecting small objects remains an open problem. On the bottom layers of a deep network, objects have rich spatial information but lack clear semantics, while on the top layers the situation is reversed. To tackle this problem, the Feature Pyramid Network (FPN) was proposed to aggregate multi-level features in a pyramidal hierarchy. Beyond the traditional bottom-up pathway of deep networks, FPN additionally establishes a top-down path to deliver the rich semantic information of the top layers to the bottom layers @cite_14 . In their implementation, each top-layer feature is 2 @math up-sampled and then merged with the bottom-level feature using element-wise summation. By doing so, the semantic information of the top layers can be passed to the bottom layers successfully. However, their inaccurate spatial information is also passed to the bottom layers at the same time, which limits the effectiveness to some extent.
{ "cite_N": [ "@cite_14" ], "mid": [ "2949533892" ], "abstract": [ "Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But recent deep learning object detectors have avoided pyramid representations, in part because they are compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using FPN in a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available." ] }
1904.07723
2936237842
We present a principled method for motion prediction via dynamic simulation for rigid bodies in intermittent contact with each other where the contact is assumed to be a planar non-convex contact patch. The planar non-convex contact patch can either be a topologically connected set or disconnected set. Such algorithms are useful in planning and control for robotic manipulation. Most works in rigid body dynamic simulation assume that the contact between objects is a point contact, which may not be valid in many applications. In this paper, by using the convex hull of the contact patch, we build on our recent work on simulating rigid bodies with convex contact patches, for simulating the motion of objects with planar non-convex contact patches. We formulate a discrete-time mixed complementarity problem where we solve the contact detection and integration of the equations of motion simultaneously. Thus, our method is a geometrically-implicit method and we prove that in our formulation, there is no artificial penetration between the contacting rigid bodies. We solve for the equivalent contact point (ECP) and contact impulse of each contact patch simultaneously along with the state, i.e., configuration and velocity of the objects. We provide empirical evidence to show that our method can seamlessly capture the transition between different contact modes like patch contact to multiple or single point contact during the simulation.
In this section, we present the related work in rigid body dynamic simulation with a focus on methods for dealing with intermittent contact. There is also a substantial body of work on the development of discretization schemes for integrating and simulating rigid body motion that we do not discuss here (please see the literature on variational integrators @cite_32 @cite_24 @cite_18 and references therein). We model the continuous-time dynamics of rigid bodies that are in intermittent contact with each other as a Differential Complementarity Problem (DCP). Let @math , @math and let @math : @math , @math : @math be two vector functions, and let the notation @math imply that @math is orthogonal to @math and that each component of the vectors is non-negative.
{ "cite_N": [ "@cite_24", "@cite_18", "@cite_32" ], "mid": [ "2095782953", "1999726309", "2170039048" ], "abstract": [ "We present a technique to implement scalable variational integrators for generic mechanical systems in generalized coordinates. Systems are represented by a tree-based structure that provides efficient means to algorithmically calculate values (position, velocities, and derivatives) needed for variational integration without the need to resort to explicit equations of motion. The variational integrator handles closed kinematic chains, holonomic constraints, dissipation, and external forcing without modification. To avoid the full equations of motion, this method uses recursive equations, and caches calculated values, to scale to large systems by the use of generalized coordinates. An example of a closed-kinematic-chain system is included along with a comparison with the open-dynamics engine (ODE) to illustrate the scalability and desirable energetic properties of the technique. A second example demonstrates an application to an actuated mechanical system.", "This article is concerned with the animation and control of vehicles with complex dynamics such as helicopters, boats, and cars. Motivated by recent developments in discrete geometric mechanics, we develop a general framework for integrating the dynamics of holonomic and nonholonomic vehicles by preserving their state-space geometry and motion invariants. We demonstrate that the resulting integration schemes are superior to standard methods in numerical robustness and efficiency, and can be applied to many types of vehicles. In addition, we show how to use this framework in an optimal control setting to automatically compute accurate and realistic motions for arbitrary user-specified constraints.", "This paper gives a review of integration algorithms for finite dimensional mechanical systems that are based on discrete variational principles. 
The variational technique gives a unified treatment of many symplectic schemes, including those of higher order, as well as a natural treatment of the discrete Noether theorem. The approach also allows us to include forces, dissipation and constraints in a natural way. Amongst the many specific schemes treated as examples, the Verlet, SHAKE, RATTLE, Newmark, and the symplectic partitioned Runge–Kutta schemes are presented." ] }
1904.07612
2941715400
We present a method for audio denoising that combines processing done in both the time domain and the time-frequency domain. Given a noisy audio clip, the method trains a deep neural network to fit this signal. Since the fitting is only partly successful and is able to better capture the underlying clean signal than the noise, the output of the network helps to disentangle the clean audio from the rest of the signal. The method is completely unsupervised and only trains on the specific audio clip that is being denoised. Our experiments demonstrate favorable performance in comparison to the literature methods, and our code and audio samples are available at https://github.com/mosheman5/DNP. Index Terms: Audio denoising; Unsupervised learning
The current state-of-the-art method, called Deep Feature Loss @cite_9 , uses a context aggregation network and, instead of using the MSE loss between the output and the target, employs a perceptual loss function. The perceptual loss is derived from the deep-layer activations of a network that is pre-trained for audio classification tasks.
{ "cite_N": [ "@cite_9" ], "mid": [ "2810843531" ], "abstract": [ "We present an end-to-end deep learning approach to denoising speech signals by processing the raw waveform directly. Given input audio containing speech corrupted by an additive background signal, the system aims to produce a processed signal that contains only the speech content. Recent approaches have shown promising results using various deep network architectures. In this paper, we propose to train a fully-convolutional context aggregation network using a deep feature loss. That loss is based on comparing the internal feature activations in a different network, trained for acoustic environment detection and domestic audio tagging. Our approach outperforms the state-of-the-art in objective speech quality metrics and in large-scale perceptual experiments with human listeners. It also outperforms an identical network trained using traditional regression losses. The advantage of the new approach is particularly pronounced for the hardest data with the most intrusive background noise, for which denoising is most needed and most challenging." ] }
1904.07612
2941715400
We present a method for audio denoising that combines processing done in both the time domain and the time-frequency domain. Given a noisy audio clip, the method trains a deep neural network to fit this signal. Since the fitting is only partly successful and is able to better capture the underlying clean signal than the noise, the output of the network helps to disentangle the clean audio from the rest of the signal. The method is completely unsupervised and only trains on the specific audio clip that is being denoised. Our experiments demonstrate favorable performance in comparison to the literature methods, and our code and audio samples are available at https://github.com/mosheman5/DNP. Index Terms: Audio denoising; Unsupervised learning
Deep Image Priors. The DIP method @cite_0 can be viewed as a regularized inverse-problem method, in which the regularization is given implicitly, by training a deep CNN to fit the data. Specifically, a CNN of a given architecture is trained to produce the input image as its output, given a random tensor as the network's input. Assuming that the learning algorithm can fit a clean image much faster than it fits a noisy signal, the algorithm is stopped after it starts to fit the given image, but before it fits all of its details. In other words, there is a "noise impedance" effect that arises from the challenge of fitting the noisy signal, which is unpredictable in nature.
{ "cite_N": [ "@cite_0" ], "mid": [ "2771305881" ], "abstract": [ "Deep convolutional networks have become a popular tool for image generation and restoration. Generally, their excellent performance is imputed to their ability to learn realistic image priors from a large number of example images. In this paper, we show that, on the contrary, the structure of a generator network is sufficient to capture a great deal of low-level image statistics prior to any learning. In order to do so, we show that a randomly-initialized neural network can be used as a handcrafted prior with excellent results in standard inverse problems such as denoising, super-resolution, and inpainting. Furthermore, the same prior can be used to invert deep neural representations to diagnose them, and to restore images based on flash-no flash input pairs. Apart from its diverse applications, our approach highlights the inductive bias captured by standard generator network architectures. It also bridges the gap between two very popular families of image restoration methods: learning-based methods using deep convolutional networks and learning-free methods based on handcrafted image priors such as self-similarity. Code and supplementary material are available at this https URL ." ] }
1904.07331
2951019448
Online tools provide unique access to research students' study habits and problem-solving behavior. In MOOCs, this online data can be used to inform instructors and to provide automatic guidance to students. However, these techniques may not apply in blended courses with face to face and online components. We report on a study of integrated user-system interaction logs from 3 computer science courses using four online systems: LMS, forum, version control, and homework system. Our results show that students rarely work across platforms in a single session, and that final class performance can be predicted from students' system use.
We focus on evaluating students' online tool use in terms of sessions, consecutive sequences of study actions that occur between breaks for food or sleep. Sessions have been previously used to analyze student behavior in MOOCs and in other cohesive online tools such as an LMS (e.g. @cite_6 ). Our work here extends that research by applying session analysis to heterogeneous data from blended courses, where students can work across platforms and over longer periods of time, with the goal of developing early predictors that can be used to identify high- or low-performing students in time for an intervention.
{ "cite_N": [ "@cite_6" ], "mid": [ "2090861833" ], "abstract": [ "All forms of learning take time. There is a large body of research suggesting that the amount of time spent on learning can improve the quality of learning, as represented by academic performance. The wide-spread adoption of learning technologies such as learning management systems (LMSs), has resulted in large amounts of data about student learning being readily accessible to educational researchers. One common use of this data is to measure time that students have spent on different learning tasks (i.e., time-on-task). Given that LMS systems typically only capture times when students executed various actions, time-on-task measures are estimated based on the recorded trace data. LMS trace data has been extensively used in many studies in the field of learning analytics, yet the problem of time-on-task estimation is rarely described in detail and the consequences that it entails are not fully examined. This paper presents the results of a study that examined the effects of different time-on-task estimation methods on the results of commonly adopted analytical models. The primary goal of this paper is to raise awareness of the issue of accuracy and appropriateness surrounding time-estimation within the broader learning analytics community, and to initiate a debate about the challenges of this process. Furthermore, the paper provides an overview of time-on-task estimation methods in educational and related research fields." ] }
1904.07303
2936470548
Emerging neural networks based machine learning techniques such as deep learning and its variants have shown tremendous potential in many application domains. However, they raise serious privacy concerns due to the risk of leakage of highly privacy-sensitive data when data collected from users is used to train neural network models to support predictive tasks. To tackle such serious privacy concerns, several privacy-preserving approaches have been proposed in the literature that use either secure multi-party computation (SMC) or homomorphic encryption (HE) as the underlying mechanisms. However, neither of these cryptographic approaches provides an efficient solution towards constructing a privacy-preserving machine learning model, as well as supporting both the training and inference phases. To tackle the above issue, we propose a CryptoNN framework that supports training a neural network model over encrypted data by using the emerging functional encryption scheme instead of SMC or HE. We also construct a functional encryption scheme for basic arithmetic computation to support the requirement of the proposed CryptoNN framework. We present performance evaluation and security analysis of the underlying crypto scheme and show through our experiments that CryptoNN achieves accuracy that is similar to those of the baseline neural network models on the MNIST dataset.
Functional encryption. The concept of functional encryption was proposed by Amit Sahai and Brent Waters in @cite_42 , and the formal definition was presented in @cite_48 . As a new technique for public-key cryptography @cite_45 , functional encryption can be viewed as an abstraction of most existing encryption schemes, such as attribute-based encryption @cite_31 , inner-product encryption @cite_55 , and order-revealing encryption @cite_36 . Generally, unlike traditional encryption schemes, where decryption reveals all or nothing, in a functional encryption scheme the decryption keys may reveal only partial information about the plaintext, for instance, the result of computing a specific function.
{ "cite_N": [ "@cite_36", "@cite_48", "@cite_55", "@cite_42", "@cite_45", "@cite_31" ], "mid": [ "2154496743", "1724472458", "2082190228", "1498316612", "1988452287", "2076046175" ], "abstract": [ "Encryption is a well established technology for protecting sensitive data. However, once encrypted, data can no longer be easily queried aside from exact matches. We present an order-preserving encryption scheme for numeric data that allows any comparison operation to be directly applied on encrypted data. Query results produced are sound (no false hits) and complete (no false drops). Our scheme handles updates gracefully and new values can be added without requiring changes in the encryption of other values. It allows standard databse indexes to be built over encrypted tables and can easily be integrated with existing database systems. The proposed scheme has been designed to be deployed in application environments in which the intruder can get access to the encrypted database, but does not have prior domain information such as the distribution of values and annot encrypt or decrypt arbitrary values of his choice. The encryption is robust against estimation of the true value in such environments.", "We initiate the formal study of functional encryption by giving precise definitions of the concept and its security. Roughly speaking, functional encryption supports restricted secret keys that enable a key holder to learn a specific function of encrypted data, but learn nothing else about the data. For example, given an encrypted program the secret key may enable the key holder to learn the output of the program on a specific input without learning anything else about the program. We show that defining security for functional encryption is non-trivial. First, we show that a natural game-based definition is inadequate for some functionalities. 
We then present a natural simulation-based definition and show that it (provably) cannot be satisfied in the standard model, but can be satisfied in the random oracle model. We show how to map many existing concepts to our formalization of functional encryption and conclude with several interesting open problems in this young area.", "Predicate encryption is a generalized notion for public key encryption that enables one to encrypt attributes as well as a message. In this paper, we present a new inner-product encryption (IPE) scheme, as a specialized predicate encryption scheme, whose security relies on the well-known Decision Bilinear Diffie-Hellman (BDH) and Decision Linear assumptions. Our IPE scheme uses prime order groups equipped with a bilinear map and works in both symmetric and asymmetric bilinear maps. Our result is the first construction of IPE under the standard assumptions. Prior to our work, all IPE schemes known to date require non-standard assumptions to prove security, and moreover some of them use composite-order groups. To achieve our goal, we introduce a novel technique for attribute-hiding, which may be of independent interest.", "We introduce a new type of Identity-Based Encryption (IBE) scheme that we call Fuzzy Identity-Based Encryption. In Fuzzy IBE we view an identity as set of descriptive attributes. A Fuzzy IBE scheme allows for a private key for an identity, ω, to decrypt a ciphertext encrypted with an identity, ω ′, if and only if the identities ω and ω ′ are close to each other as measured by the “set overlap” distance metric. A Fuzzy IBE scheme can be applied to enable encryption using biometric inputs as identities; the error-tolerance property of a Fuzzy IBE scheme is precisely what allows for the use of biometric identities, which inherently will have some noise each time they are sampled. Additionally, we show that Fuzzy-IBE can be used for a type of application that we term “attribute-based encryption”. 
In this paper we present two constructions of Fuzzy IBE schemes. Our constructions can be viewed as an Identity-Based Encryption of a message under several attributes that compose a (fuzzy) identity. Our IBE schemes are both error-tolerant and secure against collusion attacks. Additionally, our basic construction does not use random oracles. We prove the security of our schemes under the Selective-ID security model.", "Decryption keys allow users to learn a specific function of the encrypted data and nothing else.", "We construct an Attribute-Based Encryption (ABE) scheme that allows a user's private key to be expressed in terms of any access formula over attributes. Previous ABE schemes were limited to expressing only monotonic access structures. We provide a proof of security for our scheme based on the Decisional Bilinear Diffie-Hellman (BDH) assumption. Furthermore, the performance of our new scheme compares favorably with existing, less-expressive schemes." ] }
1904.07303
2936470548
Emerging neural networks based machine learning techniques such as deep learning and its variants have shown tremendous potential in many application domains. However, they raise serious privacy concerns due to the risk of leakage of highly privacy-sensitive data when data collected from users is used to train neural network models to support predictive tasks. To tackle such serious privacy concerns, several privacy-preserving approaches have been proposed in the literature that use either secure multi-party computation (SMC) or homomorphic encryption (HE) as the underlying mechanisms. However, neither of these cryptographic approaches provides an efficient solution towards constructing a privacy-preserving machine learning model, as well as supporting both the training and inference phases. To tackle the above issue, we propose a CryptoNN framework that supports training a neural network model over encrypted data by using the emerging functional encryption scheme instead of SMC or HE. We also construct a functional encryption scheme for basic arithmetic computation to support the requirement of the proposed CryptoNN framework. We present performance evaluation and security analysis of the underlying crypto scheme and show through our experiments that CryptoNN achieves accuracy that is similar to those of the baseline neural network models on the MNIST dataset.
Several constructions of functional encryption have been proposed in the literature to deal with different functions, such as the inner product @cite_3 , order comparison @cite_48 , and access control @cite_26 . On the other hand, researchers have also focused on constructing functional encryption using different existing techniques such as multi-party computation @cite_18 and multilinear maps @cite_6 @cite_19 , or on using functional encryption to construct other cryptographic schemes such as indistinguishability obfuscation @cite_7 and program obfuscation @cite_46 . However, most of these schemes are still at a theoretical stage and not practical enough for applications. Recently, more practical approaches such as multi-input functional encryption @cite_9 , functional encryption using Intel SGX @cite_8 , and practical implementations @cite_20 @cite_17 have been proposed. To achieve secure function computation, the scheme proposed in @cite_20 serves as one of the underlying cryptosystems in our proposed CryptoNN framework.
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_7", "@cite_8", "@cite_48", "@cite_9", "@cite_6", "@cite_3", "@cite_19", "@cite_46", "@cite_20", "@cite_17" ], "mid": [ "65414435", "177444027", "2474861573", "2765706244", "1724472458", "", "2532553488", "2889153834", "", "2767154751", "209244779", "" ], "abstract": [ "We construct functional encryption schemes for polynomial-time computable functions secure against an a-priori bounded polynomial number of collusions. Our constructions require only semantically secure public-key encryption schemes and pseudorandom generators computable by small-depth circuits known to be implied by most concrete intractability assumptions. For certain special cases such as predicate encryption schemes with public index, the construction requires only semantically secure encryption schemes. Along the way, we show a \"bootstrapping theorem\" that builds a q-query functional encryption scheme for arbitrary functions starting from a q-query functional encryption scheme for bounded-degree functions. All our constructions rely heavily on techniques from secure multi-party computation and randomized encodings. Our constructions are secure under a strong simulation-based definition of functional encryption.", "We present two fully secure functional encryption schemes: a fully secure attribute-based encryption (ABE) scheme and a fully secure (attribute-hiding) predicate encryption (PE) scheme for inner-product predicates. In both cases, previous constructions were only proven to be selectively secure. Both results use novel strategies to adapt the dual system encryption methodology introduced by Waters. We construct our ABE scheme in composite order bilinear groups, and prove its security from three static assumptions. Our ABE scheme supports arbitrary monotone access formulas. 
Our predicate encryption scheme is constructed via a new approach on bilinear pairings using the notion of dual pairing vector spaces proposed by Okamoto and Takashima.", "In this work, we study indistinguishability obfuscation and functional encryption for general circuits: Indistinguishability obfuscation requires that given any two equivalent circuits @math and @math of similar size, the obfuscations of @math and @math should be computationally indistinguishable. In functional encryption, ciphertexts encrypt inputs @math and keys are issued for circuits @math . Using the key @math to decrypt a ciphertext @math yields the value @math but does not reveal anything else about @math . Furthermore, no collusion of secret key holders should be able to learn anything more than the union of what they can each learn individually. We give constructions for indistinguishability obfuscation and functional encryption that supports all polynomial-size circuits. We accomplish this goal in three steps: (1) We describe a candidate construction for indistinguishability obfuscation for @math circuits. The security of this construction is based on a new al...", "Functional encryption (FE) is an extremely powerful cryptographic mechanism that lets an authorized entity compute on encrypted data, and learn the results in the clear. However, all current cryptographic instantiations for general FE are too impractical to be implemented. We construct IRON, a provably secure, and practical FE system using Intel's recent Software Guard Extensions (SGX). We show that IRON can be applied to complex functionalities, and even for simple functions, outperforms the best known cryptographic schemes. We argue security by modeling FE in the context of hardware elements, and prove that IRON satisfies the security model.", "We initiate the formal study of functional encryption by giving precise definitions of the concept and its security. 
Roughly speaking, functional encryption supports restricted secret keys that enable a key holder to learn a specific function of encrypted data, but learn nothing else about the data. For example, given an encrypted program the secret key may enable the key holder to learn the output of the program on a specific input without learning anything else about the program. We show that defining security for functional encryption is non-trivial. First, we show that a natural game-based definition is inadequate for some functionalities. We then present a natural simulation-based definition and show that it (provably) cannot be satisfied in the standard model, but can be satisfied in the random oracle model. We show how to map many existing concepts to our formalization of functional encryption and conclude with several interesting open problems in this young area.", "", "Secure multilinear maps (mmaps) have been shown to have remarkable applications in cryptography, such as multi-input functional encryption (MIFE) and program obfuscation. To date, there has been little evaluation of the performance of these applications. In this paper we initiate a systematic study of mmap-based constructions. We build a general framework, called 5Gen, to experiment with these applications. At the top layer we develop a compiler that takes in a high-level program and produces an optimized matrix branching program needed for the applications we consider. Next, we optimize and experiment with several MIFE and obfuscation constructions and evaluate their performance. The 5Gen framework is modular and can easily accommodate new mmap constructions as well as new MIFE and obfuscation constructions, as well as being an open-source tool that can be used by other research groups to experiment with a variety of mmap-based constructions.", "In a functional encryption scheme, secret keys are associated with functions and ciphertexts are associated with messages. 
Given a secret key for a function f, and a ciphertext for a message x, a decryptor learns f(x) and nothing else about x. Inner product encryption is a special case of functional encryption where both secret keys and ciphertexts are associated with vectors. The combination of a secret key for a vector x and a ciphertext for a vector y reveals the inner product ⟨x, y⟩ and nothing more about y. An inner product encryption scheme is function-hiding if the keys and ciphertexts reveal no additional information about either x or y beyond their inner product.", "", "Program obfuscation is a powerful security primitive with many applications. White-box cryptography studies a particular subset of program obfuscation targeting keyed pseudorandom functions (PRFs), a core component of systems such as mobile payment and digital rights management. Although the white-box obfuscators currently used in practice do not come with security proofs and are thus routinely broken, recent years have seen an explosion of cryptographic techniques for obfuscation, with the goal of avoiding this build-and-break cycle. In this work, we explore in detail cryptographic program obfuscation and the related primitive of multi-input functional encryption (MIFE). In particular, we extend the 5Gen framework (CCS 2016) to support circuit-based MIFE and program obfuscation, implementing both existing and new constructions. We then evaluate and compare the efficiency of these constructions in the context of PRF obfuscation. As part of this work we (1) introduce a novel instantiation of MIFE that works directly on functions represented as arithmetic circuits, (2) use a known transformation from MIFE to obfuscation to give us an obfuscator that performs better than all prior constructions, and (3) develop a compiler for generating circuits optimized for our schemes. 
Finally, we provide detailed experiments, demonstrating, among other things, the ability to obfuscate a PRF with a 64-bit key and 12 bits of input (containing 62k gates) in under 4 hours, with evaluation taking around 1 hour. This is by far the most complex function obfuscated to date.", "Functional encryption is a new paradigm in public-key encryption that allows users to finely control the amount of information that is revealed by a ciphertext to a given receiver. Recent papers have focused their attention on constructing schemes for general functionalities at the expense of efficiency. Our goal, in this paper, is to construct functional encryption schemes for less general functionalities which are still expressive enough for practical scenarios. We propose a functional encryption scheme for the inner-product functionality, meaning that decrypting an encrypted vector x with a key for a vector y will reveal only the inner product ⟨x, y⟩ and nothing else, whose security is based on the DDH assumption. Despite the simplicity of this functionality, it is still useful in many contexts like descriptive statistics. In addition, we generalize our approach and present a generic scheme that can also be instantiated under the LWE assumption and offers various trade-offs in terms of expressiveness and efficiency.", "" ] }
1904.07310
2938958646
This paper presents an empirical analysis of Steemit, a key representative of the emerging incentivized social media platforms over blockchains, to understand and evaluate the actual level of decentralization and the practical effects of the cryptocurrency-driven reward system in these modern social media platforms. Similar to Bitcoin, Steemit is operated by a decentralized community, where 21 members are periodically elected to cooperatively operate the platform through the Delegated Proof-of-Stake (DPoS) consensus protocol. Our study of 539 million operations performed by 1.12 million Steemit users during the period 2016/03 to 2018/08 reveals that the actual level of decentralization in Steemit is far lower than the ideal level, indicating that the DPoS consensus protocol may not be a desirable approach for establishing a highly decentralized social media platform. In Steemit, users create content as posts, which get curated based on votes from other users. The platform periodically issues cryptocurrency as rewards to creators and curators of popular posts. Although such a reward system is originally driven by the desire to incentivize users to contribute high-quality content, our analysis of the underlying cryptocurrency transfer network on the blockchain reveals that more than 16% of cryptocurrency transfers in Steemit are sent to curators suspected to be bots, and also finds the existence of an underlying supply network for the bots, both suggesting a significant misuse of the current reward system in Steemit. Our study is designed to provide insights on the current state of this emerging blockchain-based social media platform, including the effectiveness of its design and the operation of the consensus protocols and the reward system.
In recent years, due to their rapid growth and consistent popularity, social media platforms have received significant attention from researchers. A large number of research papers have analyzed several popular social media platforms from various perspectives @cite_30 @cite_5 @cite_27 @cite_11 @cite_6 @cite_14 @cite_19 @cite_25 @cite_28 . Tan @cite_19 investigated users' behavior in Reddit and found that users continually post in new communities. Singer @cite_6 observed a general quality drop of comments made by Reddit users during activity sessions. Hessel @cite_25 investigated the interactions between highly related Reddit communities and found that users engaged in a newer community tend to be more active in their original community. In @cite_14 , the authors studied the browsing and voting behavior of Reddit users and found that most users do not read the article that they vote on. An extensive survey of recent research on Reddit is provided in @cite_3 . Besides Reddit, several other social media platforms have also been analyzed by researchers. Wang @cite_5 analyzed the Quora platform and found that the quality of Quora’s knowledge base is mainly contributed by its user heterogeneity and question graphs. Anderson @cite_28 investigated the Stack Overflow platform and observed significant assortativity in the reputations of co-answerers, as well as relationships between reputation and answer speed.
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_28", "@cite_6", "@cite_3", "@cite_19", "@cite_27", "@cite_5", "@cite_25", "@cite_11" ], "mid": [ "1914571458", "2597730742", "2134406267", "2364057241", "2950322928", "1490782231", "", "1730818938", "2949603202", "2811485357" ], "abstract": [ "", "As crowd-sourced curation of news and information become the norm, it is important to understand not only how individuals consume information through social news Web sites, but also how they contribute to their ranking systems. In this paper, we introduce and make available a new data set containing the activity logs that recorded all activity for 309 Reddit users for one year. Using this newly collected data, we present findings that highlight the browsing and voting behavior of the study’s participants. We find that most users do not read the article that they vote on, and that, in total, 73 of posts were rated (i.e., upvoted or downvoted) without first viewing the content. We also show the evidence of cognitive fatigue in the browsing sessions of users that are most likely to vote.", "Question answering (Q&A) websites are now large repositories of valuable knowledge. While most Q&A sites were initially aimed at providing useful answers to the question asker, there has been a marked shift towards question answering as a community-driven knowledge creation process whose end product can be of enduring value to a broad audience. As part of this shift, specific expertise and deep knowledge of the subject at hand have become increasingly important, and many Q&A sites employ voting and reputation mechanisms as centerpieces of their design to help users identify the trustworthiness and accuracy of the content. 
To better understand this shift in focus from one-off answers to a group knowledge-creation process, we consider a question together with its entire set of corresponding answers as our fundamental unit of analysis, in contrast with the focus on individual question-answer pairs that characterized previous work. Our investigation considers the dynamics of the community activity that shapes the set of answers, both how answers and voters arrive over time and how this influences the eventual outcome. For example, we observe significant assortativity in the reputations of co-answerers, relationships between reputation and answer speed, and that the probability of an answer being chosen as the best one strongly depends on temporal characteristics of answer arrivals. We then show that our understanding of such properties is naturally applicable to predicting several important quantities, including the long-term value of the question and its answers, as well as whether a question requires a better answer. Finally, we discuss the implications of these results for the design of Q&A sites.", "This article presents evidence of performance deterioration in online user sessions quantified by studying a massive dataset containing over 55 million comments posted on Reddit in April 2015. After segmenting the sessions (i.e., periods of activity without a prolonged break) depending on their intensity (i.e., how many posts users produced during sessions), we observe a general decrease in the quality of comments produced by users over the course of sessions. We propose mixed-effects models that capture the impact of session intensity on comments, including their length, quality, and the responses they generate from the community. 
Our findings suggest performance deterioration: Sessions of increasing intensity are associated with the production of shorter, progressively less complex comments, which receive declining quality scores (as rated by other users), and are less and less engaging (i.e., they attract fewer responses). Our contribution evokes a connection between cognitive and attention dynamics and the usage of online social peer production platforms, specifically the effects of deterioration of user performance.", "Online forums provide rich environments where users may post questions and comments about different topics. Understanding how people behave in online forums may shed light on the fundamental mechanisms by which collective thinking emerges in a group of individuals, but it has also important practical applications, for instance to improve user experience, increase engagement or automatically identify bullying. Importantly, the datasets generated by the activity of the users are often openly available for researchers, in contrast to other sources of data in computational social science. In this survey, we map the main research directions that arose in recent years and focus primarily on the most popular platform, Reddit. We distinguish and categorise research depending on their focus on the posts or on the users, and point to different types of methodologies to extract information from the structure and dynamics of the system. We emphasize the diversity and richness of the research in terms of questions and methods, and suggest future avenues of research.", "Although analyzing user behavior within individual communities is an active and rich research domain, people usually interact with multiple communities both on- and off-line. How do users act in such multi-community environments? Although there are a host of intriguing aspects to this question, it has received much less attention in the research community in comparison to the intra-community case. 
In this paper, we examine three aspects of multi-community engagement: the sequence of communities that users post to, the language that users employ in those communities, and the feedback that users receive, using longitudinal posting behavior on Reddit as our main data source, and DBLP for auxiliary experiments. We also demonstrate the effectiveness of features drawn from these aspects in predicting users' future level of activity. One might expect that a user's trajectory mimics the \"settling-down\" process in real life: an initial exploration of sub-communities before settling down into a few niches. However, we find that the users in our data continually post in new communities; moreover, as time goes on, they post increasingly evenly among a more diverse set of smaller communities. Interestingly, it seems that users that eventually leave the community are \"destined\" to do so from the very beginning, in the sense of showing significantly different \"wandering\" patterns very early on in their trajectories; this finding has potentially important design implications for community maintainers. Our multi-community perspective also allows us to investigate the \"situation vs. personality\" debate from language usage across different communities.", "", "Efforts such as Wikipedia have shown the ability of user communities to collect, organize and curate information on the Internet. Recently, a number of question and answer (Q&A) sites have successfully built large growing knowledge repositories, each driven by a wide range of questions and answers from its users community. While sites like Yahoo Answers have stalled and begun to shrink, one site still going strong is Quora, a rapidly growing service that augments a regular Q&A system with social links between users. Despite its success, however, little is known about what drives Quora's growth, and how it continues to connect visitors and experts to the right questions as it grows. 
In this paper, we present results of a detailed analysis of Quora using measurements. We shed light on the impact of three different connection networks (or graphs) inside Quora, a graph connecting topics to users, a social graph connecting users, and a graph connecting related questions. Our results show that heterogeneity in the user and question graphs are significant contributors to the quality of Quora's knowledge base. One drives the attention and activity of users, and the other directs them to a small set of popular and interesting questions.", "When large social-media platforms allow users to easily form and self-organize into interest groups, highly related communities can arise. For example, the Reddit site hosts not just a group called food, but also HealthyFood, foodhacks, foodporn, and cooking, among others. Are these highly related communities created for similar classes of reasons (e.g., to focus on a subtopic, to create a place for allegedly more \"high-minded\" discourse, etc.)? How do users allocate attention between such close alternatives when they are available or emerge over time? Are there different types of relations between close alternatives such as sharing many users vs. a new community drawing away members of an older one vs. a splinter group failing to cohere into a viable separate community? We investigate the interactions between highly related communities using data from reddit.com consisting of 975M posts and comments spanning an 8-year period. We identify a set of typical affixes that users adopt to create highly related communities and build a taxonomy of affixes. One interesting finding regarding users' behavior is: after a newer community is created, for several types of highly-related community pairs, users that engage in a newer community tend to be more active in their original community than users that do not explore, even when controlling for previous level of engagement.", "" ] }
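The Steemit record above hinges on quantifying how far actual block production deviates from ideal decentralization under DPoS. A minimal sketch of one standard measure of this; the witness block counts below are hypothetical, not figures from the paper:

```python
def nakamoto_coefficient(shares, threshold=0.5):
    """Smallest number of participants whose combined share exceeds
    `threshold` of the total (lower value = more centralized)."""
    total = sum(shares)
    running = 0.0
    for i, s in enumerate(sorted(shares, reverse=True), start=1):
        running += s
        if running > threshold * total:
            return i
    return len(shares)

# Hypothetical block-production counts for 25 witnesses: under DPoS,
# the 21 elected witnesses dominate and the rest produce almost nothing.
blocks = [120] * 21 + [3, 2, 1, 1]
print(nakamoto_coefficient(blocks))  # prints 11
```

An ideally decentralized network pushes this coefficient toward half the participant count; a value near 1 signals the centralization the paper warns about.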
1904.07319
2937692503
Many previous works approach vision-based robotic grasping by training a value network that evaluates grasp proposals. These approaches require an optimization process at run-time to infer the best action from the value network. As a result, the inference time grows exponentially as the dimension of the action space increases. We propose an alternative method by directly training a neural density model to approximate the conditional distribution of successful grasp poses given the input images. We construct a neural network that combines a Gaussian mixture and normalizing flows, which is able to represent multi-modal, complex probability distributions. We demonstrate in both simulation and on a real robot that the proposed actor model achieves performance similar to that of a value network using the Cross-Entropy Method (CEM) for inference, on top-down grasping with a 4-dimensional action space. Our actor model reduces the inference time by a factor of 3 compared to the state-of-the-art CEM method. We believe that actor models will play an important role when scaling these approaches up to higher-dimensional action spaces.
Generative Adversarial Networks (GAN) have received a lot of attention in modeling probability distributions for images @cite_9 @cite_5 @cite_2 and have also been used successfully in imitation learning @cite_6 settings. However, because these generators cannot compute the probability density of generated samples, a discriminator is required to provide a training signal, and balancing the interplay between these two components is difficult. Similar works in energy-based models @cite_1 @cite_17 also learn generators for which the probability density of generated samples cannot be computed.
{ "cite_N": [ "@cite_9", "@cite_1", "@cite_6", "@cite_2", "@cite_5", "@cite_17" ], "mid": [ "2099471712", "2949561945", "2434014514", "", "", "2952838738" ], "abstract": [ "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.", "We propose a method for learning expressive energy-based policies for continuous states and actions, which has been feasible only in tabular domains before. We apply our method to learning maximum entropy policies, resulting into a new algorithm, called soft Q-learning, that expresses the optimal policy via a Boltzmann distribution. We use the recently proposed amortized Stein variational gradient descent to learn a stochastic sampling network that approximates samples from this distribution. The benefits of the proposed algorithm include improved exploration and compositionality that allows transferring skills between tasks, which we confirm in simulated experiments with swimming and walking robots. 
We also draw a connection to actor-critic methods, which can be viewed performing approximate inference on the corresponding energy-based model.", "Consider learning a policy from example expert behavior, without interaction with the expert or access to reinforcement signal. One approach is to recover the expert's cost function with inverse reinforcement learning, then extract a policy from that cost function with reinforcement learning. This approach is indirect and can be slow. We propose a new general framework for directly extracting a policy from data, as if it were obtained by reinforcement learning following inverse reinforcement learning. We show that a certain instantiation of our framework draws an analogy between imitation learning and generative adversarial networks, from which we derive a model-free imitation learning algorithm that obtains significant performance gains over existing model-free methods in imitating complex behaviors in large, high-dimensional environments.", "", "", "There has been a lot of recent interest in designing neural network models to estimate a distribution from a set of examples. We introduce a simple modification for autoencoder neural networks that yields powerful generative models. Our method masks the autoencoder's parameters to respect autoregressive constraints: each input is reconstructed only from previous inputs in a given ordering. Constrained this way, the autoencoder outputs can be interpreted as a set of conditional probabilities, and their product, the full joint probability. We can also train a single network that can decompose the joint probability in multiple different orderings. Our simple framework can be applied to multiple architectures, including deep ones. Vectorized implementations, such as on GPUs, are simple and fast. Experiments demonstrate that this approach is competitive with state-of-the-art tractable distribution estimators. 
At test time, the method is significantly faster and scales better than other autoregressive estimators." ] }
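The actor model in the record above is a neural density model: unlike a GAN generator, it can evaluate the probability density of its own samples. A minimal non-neural sketch of that property using a 1-D Gaussian mixture; all parameter values are illustrative:

```python
import numpy as np

def gmm_logpdf(x, weights, means, stds):
    """Log-density of a 1-D Gaussian mixture at x (log-sum-exp for stability)."""
    comp = (np.log(weights)
            - 0.5 * np.log(2 * np.pi * stds ** 2)
            - 0.5 * ((x - means) / stds) ** 2)
    m = comp.max()
    return m + np.log(np.exp(comp - m).sum())

def gmm_sample(rng, weights, means, stds):
    """Draw one sample: pick a mixture component, then sample its Gaussian."""
    k = rng.choice(len(weights), p=weights)
    return rng.normal(means[k], stds[k])

w = np.array([0.3, 0.7])
mu = np.array([-2.0, 1.0])
sd = np.array([0.5, 0.5])
rng = np.random.default_rng(0)
x = gmm_sample(rng, w, mu, sd)
# Unlike a GAN sample, the density of this sample is directly computable:
print(x, gmm_logpdf(x, w, mu, sd))
```

The same sample-plus-exact-log-density interface is what lets the actor model train by maximum likelihood without a discriminator.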
1904.07319
2937692503
Many previous works approach vision-based robotic grasping by training a value network that evaluates grasp proposals. These approaches require an optimization process at run-time to infer the best action from the value network. As a result, the inference time grows exponentially as the dimension of the action space increases. We propose an alternative method by directly training a neural density model to approximate the conditional distribution of successful grasp poses given the input images. We construct a neural network that combines a Gaussian mixture and normalizing flows, which is able to represent multi-modal, complex probability distributions. We demonstrate in both simulation and on a real robot that the proposed actor model achieves performance similar to that of a value network using the Cross-Entropy Method (CEM) for inference, on top-down grasping with a 4-dimensional action space. Our actor model reduces the inference time by a factor of 3 compared to the state-of-the-art CEM method. We believe that actor models will play an important role when scaling these approaches up to higher-dimensional action spaces.
Our method is also related to imitation learning. In @cite_6 , a GAN is used to match the generator model to the distribution of demonstrated actions. Instead of learning from expert demonstrations, our model learns from positive examples accumulated from random trials, and continues to improve itself by executing samples from the learned distribution and receiving binary rewards in a self-supervised manner.
{ "cite_N": [ "@cite_6" ], "mid": [ "2434014514" ], "abstract": [ "Consider learning a policy from example expert behavior, without interaction with the expert or access to reinforcement signal. One approach is to recover the expert's cost function with inverse reinforcement learning, then extract a policy from that cost function with reinforcement learning. This approach is indirect and can be slow. We propose a new general framework for directly extracting a policy from data, as if it were obtained by reinforcement learning following inverse reinforcement learning. We show that a certain instantiation of our framework draws an analogy between imitation learning and generative adversarial networks, from which we derive a model-free imitation learning algorithm that obtains significant performance gains over existing model-free methods in imitating complex behaviors in large, high-dimensional environments." ] }
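The CEM inference loop that the actor model above replaces can be sketched in a few lines. The toy quadratic "value function" and the hyperparameters below are illustrative stand-ins for a learned value network, not the paper's configuration:

```python
import numpy as np

def cem_argmax(value_fn, dim, iters=20, pop=64, elite=8, seed=0):
    """Cross-Entropy Method: repeatedly sample a Gaussian over actions,
    then refit its mean and std to the highest-scoring (elite) samples."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim) * 2.0
    for _ in range(iters):
        samples = rng.normal(mu, sigma, size=(pop, dim))
        scores = np.array([value_fn(s) for s in samples])
        top = samples[np.argsort(scores)[-elite:]]
        mu, sigma = top.mean(axis=0), top.std(axis=0) + 1e-6
    return mu

# Toy "value network": grasp value peaks at action (1.5, -0.5).
value = lambda a: -np.sum((a - np.array([1.5, -0.5])) ** 2)
print(cem_argmax(value, dim=2))  # converges near [1.5, -0.5]
```

Each CEM query costs `iters * pop` value-network evaluations at inference time, which is exactly the run-time optimization cost that sampling directly from an actor model avoids.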
1904.07312
2936027599
Drowsiness can put the lives of many drivers and workers in danger. It is important to design practical and easy-to-deploy real-world systems to detect the onset of drowsiness. In this paper, we address early drowsiness detection, which can provide early alerts and offer subjects ample time to react. We present a large and public real-life dataset of 60 subjects, with video segments labeled as alert, low vigilant, or drowsy. This dataset consists of around 30 hours of video, with contents ranging from subtle signs of drowsiness to more obvious ones. We also benchmark a temporal model for our dataset, which has low computational and storage demands. The core of our proposed method is a Hierarchical Multiscale Long Short-Term Memory (HM-LSTM) network that is fed by detected blink features in sequence. Our experiments demonstrate the relationship between the sequential blink features and drowsiness. In the experimental results, our baseline method produces higher accuracy than human judgment.
As pointed out above, there are numerous works in drowsiness detection, but none of them uses a dataset that is both public and realistic. As a result, it is difficult to compare prior methods to each other and to decide what the state of the art is in this area. Several existing methods @cite_6 @cite_19 @cite_24 @cite_2 @cite_16 were evaluated on a small number of subjects without sharing the videos. In some cases @cite_3 @cite_15 the subjects were instructed to act drowsy, as opposed to obtaining data from subjects who were really drowsy.
{ "cite_N": [ "@cite_6", "@cite_3", "@cite_24", "@cite_19", "@cite_2", "@cite_15", "@cite_16" ], "mid": [ "", "", "2091984081", "2067135396", "2045746997", "2738749209", "2396724964" ], "abstract": [ "", "", "Nowadays, driving support systems, such as car navigation systems, are getting common, and they support drivers in several aspects. It is important for driving support systems to detect status of driver's consciousness. Particularly, detecting driver's drowsiness could prevent drivers from collisions caused by drowsy driving. In this paper, we discuss a system and a method for detecting driver's drowsiness, which use a property of blinks waveforms that driver's eye-closing time is strongly connected to their drowsiness. Moreover, our system detects the drowsiness robustly for individual differences by using three factors extracted from the individual blink waveform.", "Research has shown that sustained attention or vigilance declines over time on task. Sustained attention is necessary in many environments such as air traffic controllers, cyber operators, and imagery analysts. A lapse of attention in any one of these environments can have harmful consequences. The purpose of this study was to determine if eye blink metrics from an eye-tracker are related to changes in vigilance performance and cerebral blood flow velocities. Nineteen participants performed a vigilance task while wearing an eye-tracker on four separate days. Blink frequency and duration changed significantly over time during the task. Both blink frequency and duration increased as performance declined and right cerebral blood flow velocity declined. These results suggest that eye blink information may be an indicator of arousal levels. Using an eye-tracker to detect changes in eye blinks in an operational environment would allow preventative measures to be implemented, perhaps by providing perceptual warning signals or augmenting human cognition through non-invasive brain stimulation techniques. 
", "Drowsiness is one of the main causes of severe traffic accidents occurring in our daily life. In order to reduce the number of drowsiness-induced accidents, various studies have been conducted with the aim of finding practical and non-invasive drowsiness detection systems based on behavioral measuring techniques. Many of the previous works on behavioral measuring techniques have mainly focused on the analysis of eye closure and blinking of the driver. Only recently has attention started to shift to the inclusion of other facial expressions, and few of those studies have analyzed the temporal dynamics of facial expressions for drowsiness detection. In this paper we propose a new method of analyzing the facial expression of the driver through Hidden Markov Model (HMM) based dynamic modeling to detect drowsiness. We have implemented the algorithm using a simulated driving setup. Experimental results verified the effectiveness of the proposed method.", "A driver’s status is crucial because one of the main reasons for motor vehicular accidents is driver inattention or drowsiness. A drowsiness detector in a car can prevent numerous accidents. Accidents occur because of a single moment of negligence, so a driver monitoring system that works in real time is necessary. This detector should be deployable to an embedded device and perform at high accuracy. In this paper, a novel approach towards real-time drowsiness detection based on deep learning, which can be implemented on a low-cost embedded board and performs with high accuracy, is proposed. The main contribution of our paper is the compression of a heavy baseline model to a lightweight model deployable to an embedded board. Moreover, a minimized network structure was designed based on facial landmark input to recognize whether the driver is drowsy or not. 
The proposed model achieved an accuracy of 89.5% on 3-class classification and a speed of 14.9 frames per second (FPS) on a Jetson TK1.", "Fatigue during long-time driving threatens the safety of drivers and transportation. In this paper, we provide an effective method based on multi-sensor signals collected from a Kinect 2.0 camera and a PPG pulse sensor to build a driver fatigue detection system. Unlike most traditional works, we define the transitional process of fatigue and elaborate its effect on training classifiers. The simulation experiments are then designed and 15 groups of data are collected. Our method works in the following steps: 1) feature extraction and fusion, 2) sample labelling, and 3) SVM classifier design. The 10-fold cross-validation accuracy of the classifier is 90.10% and the test accuracy is 83.82%. Experimental results verify that our method of dealing with samples in the transitional process is universal and more accurate than traditional methods. Moreover, our method based on multi-sensor signals works better than those dealing with a single sensor." ] }
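The blink features referenced in the drowsiness record above can be derived from an eye-openness signal. A minimal sketch that extracts blink durations from a thresholded eye-aspect-ratio (EAR) series; the trace, threshold, and feature choice are illustrative, not the paper's exact pipeline:

```python
def blink_durations(ear, threshold=0.2):
    """Lengths (in frames) of runs where the eye-aspect-ratio falls below
    `threshold`, i.e. candidate blinks; longer runs suggest drowsier blinks."""
    durations, run = [], 0
    for v in ear:
        if v < threshold:
            run += 1
        elif run:
            durations.append(run)
            run = 0
    if run:
        durations.append(run)
    return durations

# Hypothetical EAR trace: two blinks, the second one longer (a drowsiness cue).
ear = [0.3, 0.3, 0.1, 0.1, 0.3, 0.3, 0.1, 0.1, 0.1, 0.1, 0.3]
print(blink_durations(ear))  # prints [2, 4]
```

A sequence of such per-blink features (duration, amplitude, frequency) is the kind of input a temporal model like an HM-LSTM would consume.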
1904.07312
2936027599
Drowsiness can put the lives of many drivers and workers in danger. It is important to design practical and easy-to-deploy real-world systems to detect the onset of drowsiness. In this paper, we address early drowsiness detection, which can provide early alerts and offer subjects ample time to react. We present a large and public real-life dataset of 60 subjects, with video segments labeled as alert, low vigilant, or drowsy. This dataset consists of around 30 hours of video, with contents ranging from subtle signs of drowsiness to more obvious ones. We also benchmark a temporal model for our dataset, which has low computational and storage demands. The core of our proposed method is a Hierarchical Multiscale Long Short-Term Memory (HM-LSTM) network that is fed by detected blink features in sequence. Our experiments demonstrate the relationship between the sequential blink features and drowsiness. In the experimental results, our baseline method produces higher accuracy than human judgment.
The DROZY dataset @cite_8 contains multiple types of drowsiness-related data, including signals such as EEG, EOG, and near-infrared (NIR) images. An advantage of the DROZY dataset is that drowsiness data were obtained from subjects who were really drowsy, as opposed to subjects pretending to be drowsy. Compared to the DROZY dataset, our dataset has three advantages: First, we have a substantially larger number of subjects (60 as opposed to 14). Second, for each subject, we have data showing that subject in each of the three predefined alertness classes, whereas in the DROZY dataset some subjects are not recorded in all three states. Third, in DROZY all videos were captured using the same camera position and background, under controlled lab conditions, whereas in our dataset each subject used their own cell phone and a different background. Compared to DROZY, our dataset also has the important difference that it provides color video, whereas DROZY offers several other modalities, but only NIR video.
{ "cite_N": [ "@cite_8" ], "mid": [ "2395437686" ], "abstract": [ "Drowsiness is a major cause of accidents, in particular in road transportation. It is thus crucial to develop robust drowsiness monitoring systems. There is a widespread agreement that the best way to monitor drowsiness is by closely monitoring symptoms of drowsiness that are directly linked to the physiology of an operator such as a driver. The best systems are completely transparent to the operator until the moment he she must react. In transportation, cameras placed in the passenger compartment and looking at least at the face of the driver are most likely the best way to sense physiology related symptoms such as facial expressions and the fine behavior of the eyeballs and eyelids. We present here the new database1 called DROZY that provides multiple modalities of data to tackle the design of drowsiness monitoring systems and related experiments. We also present two novel systems developed using this database that can make predictions about the speed of reaction of an operator by using near-infrared intensity and range images of his her face." ] }