Columns (with observed string-length ranges):
- aid: string, 9–15 chars
- mid: string, 7–10 chars
- abstract: string, 78–2.56k chars
- related_work: string, 92–1.77k chars
- ref_abstract: dict
1609.07769
2949349317
In this paper, we address the problem of rain removal from a single image, even in the presence of heavy rain and rain streak accumulation. Our core ideas lie in new rain image models and a novel deep learning architecture. We first modify an existing model comprising a rain streak layer and a background layer, by adding a binary map that locates rain streak regions. Second, we create a new model consisting of a component representing rain streak accumulation (where individual streaks cannot be seen, and thus visually similar to mist or fog), and another component representing various shapes and directions of overlapping rain streaks, which usually occur in heavy rain. Based on the first model, we develop a multi-task deep learning architecture that learns the binary rain streak map, the appearance of rain streaks, and the clean background, which is our ultimate output. The additional binary map is critically beneficial, since its loss function can provide additional strong information to the network. To handle rain streak accumulation (again, a phenomenon visually similar to mist or fog) and various shapes and directions of overlapping rain streaks, we propose a recurrent rain detection and removal network that removes rain streaks and clears up the rain accumulation iteratively and progressively. In each recurrence of our method, a new contextualized dilated network is developed to exploit regional contextual information and output a better representation for rain detection. The evaluation on real images, particularly in heavy rain, shows the effectiveness of our novel models and architecture, outperforming the state-of-the-art methods significantly. Our code and datasets will be made publicly available.
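The layer decomposition in the abstract above (background layer, streak layer, and a binary streak-location map) can be sketched numerically; the array sizes, value ranges, and variable names below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch (not the paper's code) of the modified rain model:
# an observed rain image O is a clean background B plus a streak layer S
# that contributes only where a binary map R marks rain streak regions.
H, W = 4, 4
B = rng.uniform(0.2, 0.8, size=(H, W))               # clean background layer
S = rng.uniform(0.5, 1.0, size=(H, W))               # rain streak appearance
R = (rng.uniform(size=(H, W)) > 0.7).astype(float)   # binary streak-location map

O = np.clip(B + R * S, 0.0, 1.0)   # synthesized rain image

# A multi-task network would regress R, S, and B jointly from O; here we
# only verify the decomposition: rain-free pixels equal the background.
assert np.allclose(O[R == 0.0], B[R == 0.0])
```

The binary map R is exactly the extra supervision signal the abstract calls "critically beneficial": its loss can be evaluated independently of the streak appearance.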
In recent years, deep learning-based image processing applications have emerged with promising performance. These applications include denoising @cite_36 @cite_31 @cite_21 @cite_5 @cite_18 , completion @cite_34 , super-resolution @cite_23 @cite_25 @cite_19 @cite_26 , deblurring @cite_22 , deconvolution @cite_16 and style transfer @cite_32 @cite_29 . There are also some recent works on bad weather restoration or image enhancement, such as dehazing @cite_3 @cite_7 , rain drop and dirt removal @cite_9 and light enhancement @cite_37 . Moreover, with a modeling capacity superior to that of shallow models, deep-learning-based methods have begun to tackle harder problems, such as blind image denoising @cite_15 . In this paper, we use deep learning to jointly detect and remove rain.
{ "cite_N": [ "@cite_22", "@cite_36", "@cite_29", "@cite_3", "@cite_5", "@cite_15", "@cite_18", "@cite_21", "@cite_23", "@cite_37", "@cite_26", "@cite_7", "@cite_32", "@cite_19", "@cite_16", "@cite_34", "@cite_25", "@cite_9", "@cite_31" ], "mid": [ "2952548986", "2145094598", "1920280450", "2256362396", "2098477387", "", "2594483002", "2109337973", "2949064199", "2952558436", "", "2519481857", "1924619199", "135113724", "2124964692", "2146337213", "", "2154815154", "2183227662" ], "abstract": [ "We describe a learning-based approach to blind image deconvolution. It uses a deep layered architecture, parts of which are borrowed from recent work on neural network learning, and parts of which incorporate computations that are specific to image deconvolution. The system is trained end-to-end on a set of artificially generated training examples, enabling competitive performance in blind deconvolution, both with respect to quality and runtime.", "We explore an original strategy for building deep networks, based on stacking layers of denoising autoencoders which are trained locally to denoise corrupted versions of their inputs. The resulting algorithm is a straightforward variation on the stacking of ordinary autoencoders. It is however shown on a benchmark of classification problems to yield significantly lower classification error, thus bridging the performance gap with deep belief networks (DBN), and in several cases surpassing it. Higher level representations learnt in this purely unsupervised fashion also help boost the performance of subsequent SVM classifiers. Qualitative experiments show that, contrary to ordinary autoencoders, denoising autoencoders are able to learn Gabor-like edge detectors from natural image patches and larger stroke detectors from digit images. 
This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher level representations.", "Photo retouching enables photographers to invoke dramatic visual impressions by artistically enhancing their photos through stylistic color and tone adjustments. However, it is also a time-consuming and challenging task that requires advanced skills beyond the abilities of casual photographers. Using an automated algorithm is an appealing alternative to manual work, but such an algorithm faces many hurdles. Many photographic styles rely on subtle adjustments that depend on the image content and even its semantics. Further, these adjustments are often spatially varying. Existing automatic algorithms are still limited and cover only a subset of these challenges. Recently, deep learning has shown unique abilities to address hard problems. This motivated us to explore the use of deep neural networks (DNNs) in the context of photo editing. In this article, we formulate automatic photo adjustment in a manner suitable for this approach. We also introduce an image descriptor accounting for the local semantics of an image. Our experiments demonstrate that training DNNs using these descriptors successfully capture sophisticated photographic styles. In particular and unlike previous techniques, it can model local adjustments that depend on image semantics. We show that this yields results that are qualitatively and quantitatively better than previous work.", "Single image haze removal is a challenging ill-posed problem. Existing methods use various constraints/priors to get plausible dehazing solutions. The key to achieve haze removal is to estimate a medium transmission map for an input hazy image. In this paper, we propose a trainable end-to-end system called DehazeNet, for medium transmission estimation.
DehazeNet takes a hazy image as input, and outputs its medium transmission map that is subsequently used to recover a haze-free image via atmospheric scattering model. DehazeNet adopts convolutional neural network-based deep architecture, whose layers are specially designed to embody the established assumptions/priors in image dehazing. Specifically, the layers of Maxout units are used for feature extraction, which can generate almost all haze-relevant features. We also propose a novel nonlinear activation function in DehazeNet, called bilateral rectified linear unit, which is able to improve the quality of recovered haze-free image. We establish connections between the components of the proposed DehazeNet and those used in existing methods. Experiments on benchmark images show that DehazeNet achieves superior performance over existing methods, yet keeps efficient and easy to use.", "We present an approach to low-level vision that combines two main ideas: the use of convolutional networks as an image processing architecture and an unsupervised learning procedure that synthesizes training samples from specific noise models. We demonstrate this approach on the challenging problem of natural image denoising. Using a test set with a hundred natural images, we find that convolutional networks provide comparable and in some cases superior performance to state of the art wavelet and Markov random field (MRF) methods. Moreover, we find that a convolutional network offers similar performance in the blind de-noising setting as compared to other techniques in the non-blind setting. We also show how convolutional networks are mathematically related to MRF approaches by presenting a mean field theory for an MRF specially designed for image denoising. Although these approaches are related, convolutional networks avoid computational difficulties in MRF approaches that arise from probabilistic learning and inference.
This makes it possible to learn image processing architectures that have a high degree of representational power (we train models with over 15,000 parameters), but whose computational expense is significantly less than that associated with inference in MRF approaches with even hundreds of parameters.", "", "", "Image denoising can be described as the problem of mapping from a noisy image to a noise-free image. In (2012), we show that multi-layer perceptrons can achieve outstanding image denoising performance for various types of noise (additive white Gaussian noise, mixed Poisson-Gaussian noise, JPEG artifacts, salt-and-pepper noise and noise resembling stripes). In this work we discuss in detail which trade-offs have to be considered during the training procedure. We will show how to achieve good results and which pitfalls to avoid. By analysing the activation patterns of the hidden units we are able to make observations regarding the functioning principle of multi-layer perceptrons trained for image denoising.", "We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed.
Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality.", "In surveillance, monitoring and tactical reconnaissance, gathering the right visual information from a dynamic environment and accurately processing such data are essential ingredients to making informed decisions which determines the success of an operation. Camera sensors are often cost-limited in ability to clearly capture objects without defects from images or videos taken in a poorly-lit environment. The goal in many applications is to enhance the brightness, contrast and reduce noise content of such images in an on-board real-time manner. We propose a deep autoencoder-based approach to identify signal features from low-light images without hand-crafting, and to adaptively brighten images without over-amplifying the lighter parts in images (i.e., without saturation of image pixels) in high dynamic range. We show that a variant of the recently proposed stacked-sparse denoising autoencoder can learn to adaptively enhance and denoise from synthetically darkened and noisy training examples. The network can then be successfully applied to naturally low-light environments and/or hardware-degraded images. Results show significant credibility of deep learning based approaches both visually and by quantitative comparison with various popular enhancing, state-of-the-art denoising and hybrid enhancing-denoising techniques.", "", "The performance of existing image dehazing methods is limited by hand-designed features, such as the dark channel, color disparity and maximum contrast, with complex fusion schemes. In this paper, we propose a multi-scale deep neural network for single-image dehazing by learning the mapping between hazy images and their corresponding transmission maps. The proposed algorithm consists of a coarse-scale net which predicts a holistic transmission map based on the entire image, and a fine-scale net which refines results locally.
To train the multi-scale deep network, we synthesize a dataset comprised of hazy images and corresponding transmission maps based on the NYU Depth dataset. Extensive experiments demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods on both synthetic and real-world images in terms of quality and speed.", "In fine art, especially painting, humans have mastered the skill to create unique visual experiences through composing a complex interplay between the content and style of an image. Thus far the algorithmic basis of this process is unknown and there exists no artificial system with similar capabilities. However, in other key areas of visual perception such as object and face recognition near-human performance was recently demonstrated by a class of biologically inspired vision models called Deep Neural Networks. Here we introduce an artificial system based on a Deep Neural Network that creates artistic images of high perceptual quality. The system uses neural representations to separate and recombine content and style of arbitrary images, providing a neural algorithm for the creation of artistic images. Moreover, in light of the striking similarities between performance-optimised artificial neural networks and biological vision, our work offers a path forward to an algorithmic understanding of how humans create and perceive artistic imagery.", "In this paper, we propose a new model called deep network cascade (DNC) to gradually upscale low-resolution images layer by layer, each layer with a small scale factor. DNC is a cascade of multiple stacked collaborative local auto-encoders. In each layer of the cascade, non-local self-similarity search is first performed to enhance high-frequency texture details of the partitioned patches in the input image. 
The enhanced image patches are then input into a collaborative local auto-encoder (CLA) to suppress the noises as well as collaborate the compatibility of the overlapping patches. By closing the loop on non-local self-similarity search and CLA in a cascade layer, we can refine the super-resolution result, which is further fed into next layer until the required image scale. Experiments on image super-resolution demonstrate that the proposed DNC can gradually upscale a low-resolution image with the increase of network layers and achieve more promising results in visual quality as well as quantitative performance.", "Many fundamental image-related problems involve deconvolution operators. Real blur degradation seldom complies with an ideal linear convolution model due to camera noise, saturation, image compression, to name a few. Instead of perfectly modeling outliers, which is rather challenging from a generative model perspective, we develop a deep convolutional neural network to capture the characteristics of degradation. We note directly applying existing deep neural networks does not produce reasonable results. Our solution is to establish the connection between traditional optimization-based schemes and a neural network architecture where a novel, separable structure is introduced as a reliable support for robust deconvolution against artifacts. Our network contains two submodules, both trained in a supervised manner with proper initialization. They yield decent performance on non-blind image deconvolution compared to previous generative-model based methods.", "We present a novel approach to low-level vision problems that combines sparse coding and deep networks pre-trained with denoising auto-encoder (DA). We propose an alternative training scheme that successfully adapts DA, originally designed for unsupervised feature learning, to the tasks of image denoising and blind inpainting. 
Our method's performance in the image denoising task is comparable to that of KSVD which is a widely used sparse coding technique. More importantly, in blind image inpainting task, the proposed method provides solutions to some complex problems that have not been tackled before. Specifically, we can automatically remove complex patterns like superimposed text from an image, rather than simple patterns like pixels missing at random. Moreover, the proposed method does not need the information regarding the region that requires inpainting to be given a priori. Experimental results demonstrate the effectiveness of the proposed method in the tasks of image denoising and blind inpainting. We also show that our new training scheme for DA is more effective and can improve the performance of unsupervised feature learning.", "", "Photographs taken through a window are often compromised by dirt or rain present on the window surface. Common cases of this include pictures taken from inside a vehicle, or outdoor security cameras mounted inside a protective enclosure. At capture time, defocus can be used to remove the artifacts, but this relies on achieving a shallow depth-of-field and placement of the camera close to the window. Instead, we present a post-capture image processing solution that can remove localized rain and dirt artifacts from a single image. We collect a dataset of clean/corrupted image pairs which are then used to train a specialized form of convolutional neural network. This learns how to map corrupted image patches to clean ones, implicitly capturing the characteristic appearance of dirt and water droplets in natural images. Our models demonstrate effective removal of dirt and rain in outdoor test conditions.", "Image denoising can be described as the problem of mapping from a noisy image to a noise-free image. The best currently available denoising methods approximate this mapping with cleverly engineered algorithms.
In this work we attempt to learn this mapping directly with plain multi layer perceptrons (MLP) applied to image patches. We will show that by training on large image databases we are able to outperform the current state-of-the-art image denoising methods. In addition, our method achieves results that are superior to one type of theoretical bound and goes a large way toward closing the gap with a second type of theoretical bound. Our approach is easily adapted to less extensively studied types of noise, such as mixed Poisson-Gaussian noise, JPEG artifacts, salt-and-pepper noise and noise resembling stripes, for which we achieve excellent results as well. We will show that combining a block-matching procedure with MLPs can further improve the results on certain images. In a second paper, we detail the training trade-offs and the inner mechanisms of our MLPs." ] }
1609.07711
2526925007
A multi-user cognitive (secondary) radio system is considered, where the spatial multiplexing mode of operation is implemented amongst the nodes, under the presence of multiple primary transmissions. The secondary receiver carries out minimum mean-squared error (MMSE) detection to effectively decode the secondary data streams, while it performs spectrum sensing on the remaining signal to detect the presence or absence of primary activity. New analytical closed-form expressions regarding some important system measures are obtained, namely, the outage and detection probabilities; the transmission power of the secondary nodes; the probability of unexpected interference at the primary nodes; and the detection efficiency with the aid of the area under the receiver operating characteristic curve. The realistic scenarios of time-varying channel fading and channel estimation errors are accounted for in the derived results. Finally, the enclosed numerical results verify the accuracy of the proposed framework, while some useful engineering insights are also revealed, such as the key role of the detection accuracy in the overall performance and the impact of the transmission power of the secondary nodes on the primary system.
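The linear MMSE detection step mentioned in the abstract above can be illustrated with a small linear model; the antenna counts, noise level, and BPSK signaling are assumptions made for this sketch, not the paper's system parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch of linear MMSE detection for Ns secondary streams received over
# an Nr-antenna channel H with noise variance sigma2 (all values here are
# illustrative assumptions, not the paper's setup).
Nr, Ns = 4, 2
H = (rng.standard_normal((Nr, Ns)) + 1j * rng.standard_normal((Nr, Ns))) / np.sqrt(2)
sigma2 = 0.01

s = rng.choice([-1.0, 1.0], size=Ns).astype(complex)   # BPSK stream symbols
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(Nr) + 1j * rng.standard_normal(Nr))
y = H @ s + noise                                      # received signal vector

# MMSE filter: W = (H^H H + sigma2 I)^{-1} H^H, applied to the receive vector.
W = np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(Ns), H.conj().T)
s_hat = W @ y

# At this (high) SNR the estimates should land close to the true symbols.
assert np.max(np.abs(s_hat - s)) < 0.5
```

In the paper's setting, the residual y - H @ s_hat is what remains for spectrum sensing after the secondary streams are decoded.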
The performance of spectrum sensing, i.e., the accuracy of the detection method used by the cognitive system, plays a key role in the performance of both the primary and the secondary network. It acts as an important tool for finding idle spectrum instances (the so-called spectrum holes @cite_1 ) to efficiently deliver cognitive data, while protecting the communication quality of the primary service at the same time. Several spectrum sensing approaches have been proposed so far to preserve the transparency of CR networks, which can be categorized into two main types: quiet @cite_30 and active @cite_38 .
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_1" ], "mid": [ "2084436032", "2096380192", "2071707134" ], "abstract": [ "In a cognitive radio network, the secondary users are allowed to utilize the frequency bands of primary users when these bands are not currently being used. To support this spectrum reuse functionality, the secondary users are required to sense the radio frequency environment, and once the primary users are found to be active, the secondary users are required to vacate the channel within a certain amount of time. Therefore, spectrum sensing is of significant importance in cognitive radio networks. There are two parameters associated with spectrum sensing: probability of detection and probability of false alarm. The higher the probability of detection, the better the primary users are protected. However, from the secondary users' perspective, the lower the probability of false alarm, the more chances the channel can be reused when it is available, thus the higher the achievable throughput for the secondary network. In this paper, we study the problem of designing the sensing duration to maximize the achievable throughput for the secondary network under the constraint that the primary users are sufficiently protected. We formulate the sensing-throughput tradeoff problem mathematically, and use energy detection sensing scheme to prove that the formulated problem indeed has one optimal sensing time which yields the highest throughput for the secondary network. Cooperative sensing using multiple mini-slots or multiple secondary users are also studied using the methodology proposed in this paper. Computer simulations have shown that for a 6 MHz channel, when the frame duration is 100 ms, and the signal-to-noise ratio of primary user at the secondary receiver is -20 dB, the optimal sensing time achieving the highest throughput while maintaining 90 detection probability is 14.2 ms. 
This optimal sensing time decreases when distributed spectrum sensing is applied.", "Spectrum sensing is critical for cognitive systems to locate spectrum holes. In the IEEE 802.22 proposal, short quiet periods are arranged inside frames to perform a coarse intra-frame sensing as a pre-alarm for fine inter-frame sensing. However, the limited sample size of the quiet periods may not guarantee a satisfying performance and an additional burden of quiet-period synchronization is required. To improve the sensing performance, we first propose a quiet-active sensing scheme in which inactive customer-provided equipments (CPEs) will sense the channels in both the quiet and active periods. To avoid quiet-period synchronization, we further propose to utilize (optimized) active sensing, in which the quiet periods are replaced by 'quiet samples' in other domains, such as quiet sub-carriers in OFDMA systems. By doing so, we not only save the need for synchronization, but also achieve selection diversity by choosing quiet sub-carriers based on channel conditions. The proposed active sensing scheme is also promising for spectrum sharing applications where both the cognitive and primary systems can be active simultaneously.", "Cognitive radio is viewed as a novel approach for improving the utilization of a precious natural resource: the radio electromagnetic spectrum. The cognitive radio, built on a software-defined radio, is defined as an intelligent wireless communication system that is aware of its environment and uses the methodology of understanding-by-building to learn from the environment and adapt to statistical variations in the input stimuli, with two primary objectives in mind: (i) highly reliable communication whenever and wherever needed; and (ii) efficient utilization of the radio spectrum.
Following the discussion of interference temperature as a new metric for the quantification and management of interference, the paper addresses three fundamental cognitive tasks. 1) Radio-scene analysis. 2) Channel-state estimation and predictive modeling. 3) Transmit-power control and dynamic spectrum management. This work also discusses the emergent behavior of cognitive radio." ] }
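The quiet-period sensing discussed in the works above boils down to an energy detector. A minimal sketch follows; the sample count, noise variance, and 5% false-alarm target are illustrative assumptions, not values from the cited papers:

```python
import numpy as np

rng = np.random.default_rng(2)

N = 500          # samples collected during the quiet period (assumed)
sigma2 = 1.0     # noise variance, assumed known at the detector
snr = 0.5        # primary-signal power relative to noise power (assumed)

def energy_statistic(y):
    """Average energy of the sensed samples."""
    return float(np.mean(np.abs(y) ** 2))

# Under noise only, the statistic is approximately Gaussian with mean
# sigma2 and variance 2*sigma2^2/N; z = 1.645 targets P_fa of about 5%.
threshold = sigma2 * (1.0 + 1.645 * np.sqrt(2.0 / N))

noise_only = np.sqrt(sigma2) * rng.standard_normal(N)
primary_on = noise_only + np.sqrt(snr * sigma2) * rng.standard_normal(N)

busy_decision = energy_statistic(primary_on) >= threshold   # typically True here
idle_decision = energy_statistic(noise_only) < threshold    # typically True (P_fa ~ 5%)
```

The sensing-throughput tradeoff in @cite_30 arises directly from N: a longer quiet period sharpens the statistic (variance 2*sigma2^2/N shrinks) but leaves less frame time for secondary transmission.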
1609.07711
2526925007
A multi-user cognitive (secondary) radio system is considered, where the spatial multiplexing mode of operation is implemented amongst the nodes, under the presence of multiple primary transmissions. The secondary receiver carries out minimum mean-squared error (MMSE) detection to effectively decode the secondary data streams, while it performs spectrum sensing on the remaining signal to detect the presence or absence of primary activity. New analytical closed-form expressions regarding some important system measures are obtained, namely, the outage and detection probabilities; the transmission power of the secondary nodes; the probability of unexpected interference at the primary nodes; and the detection efficiency with the aid of the area under the receiver operating characteristic curve. The realistic scenarios of time-varying channel fading and channel estimation errors are accounted for in the derived results. Finally, the enclosed numerical results verify the accuracy of the proposed framework, while some useful engineering insights are also revealed, such as the key role of the detection accuracy in the overall performance and the impact of the transmission power of the secondary nodes on the primary system.
More recently, the authors of @cite_8 and @cite_16 proposed a spatial isolation technique for the antennas of each cognitive node, in the sense that some antennas are devoted to spectrum sensing while the others are used for data transmission. The main drawback of this approach is the large amount of self-interference produced during spectrum sensing, which cannot always be sufficiently canceled. Hardware constraints and/or impairments represent an immediate obstacle, whereas an appropriate physical distance between the sensing and transmitting antennas should be maintained (i.e., in the order of @math cm @cite_37 @cite_12 ), which is not always feasible or preferable for simple, small-sized equipment.
{ "cite_N": [ "@cite_37", "@cite_16", "@cite_12", "@cite_8" ], "mid": [ "2106543408", "2007911799", "", "2011082600" ], "abstract": [ "In this paper, we present an experiment- and simulation-based study to evaluate the use of full duplex (FD) as a potential mode in practical IEEE 802.11 networks. To enable the study, we designed a 20-MHz multiantenna orthogonal frequency-division-multiplexing (OFDM) FD physical layer and an FD media access control (MAC) protocol, which is backward compatible with current 802.11. Our extensive over-the-air experiments, simulations, and analysis demonstrate the following two results. First, the use of multiple antennas at the physical layer leads to a higher ergodic throughput than its hardware-equivalent multiantenna half-duplex (HD) counterparts for SNRs above the median SNR encountered in practical WiFi deployments. Second, the proposed MAC translates the physical layer rate gain into near doubling of throughput for multinode single-AP networks. The two results allow us to conclude that there are potentially significant benefits gained from including an FD mode in future WiFi standards.", "In cognitive radio, spectrum sensing is used to find the white spectrum or protect the primary user from interference caused by the secondary user (SU). There are two conventional spectrum sensing approaches: quiet and active. However, these conventional approaches have several problems. In quiet sensing, the quiet period degrades the SU capacity. With active sensing, the SU capacity is also degraded by the need for additional resource consumption and the mismatch in feedback information. In order to mitigate these problems, the structure of simultaneous PU sensing and data transmission is introduced. This structure is equipped with antenna isolation and self-interference cancellation in which the communication and the sensing radios are already assumed to be significantly isolated. 
This approach is designed so that the SU transmitter can sense PU signals and transmit data signals at the same time by dividing its spatial resources. Expanding on this work, we propose a concept of \"TranSensing\" which adaptively uses spatial resource according to the surrounding environments. To effectively use TranSensing, we propose a two-stage algorithm (TSA). Finally, the impact of residual interference on TranSensing is investigated. Simulation results show that TranSensing with TSA enhances the SU capacity over the conventional quiet or active sensing.", "", "Cognitive radios (CRs) need to continuously monitor the availability of unoccupied spectrum. Prior work on spectrum sensing mainly focused on time-slotted schemes where sensing and communication take place on different time periods in the same frequency. This however leads to a) limited CR throughput as data transmissions need to be interrupted for the sensing task, and b) unreliable detection performance since sensing happens in specific confined time durations. The paper describes the basic design challenges and hardware requirements that restrain CRs from simultaneously and continuously sensing the spectrum while transmitting in the same frequency band. The paper then describes a novel approach based on spatial filtering that promises to empower CRs with concurrent transmission and sensing capabilities. The idea is to equip the CR with redundant transmit antennas for forming an adaptive spatial filter that selectively nulls the transmit signal in the sensing direction. By doing so, a wideband isolation level of 60 dB is obtained by the antenna system. Finally, by following the spatial filtering stage with active power cancellation in the radio-frequency stage and in the baseband stage, a total isolation in excess of a 100 dB required for enabling concurrent communication and sensing can be obtained." ] }
1609.07711
2526925007
A multi-user cognitive (secondary) radio system is considered, where the spatial multiplexing mode of operation is implemented amongst the nodes, under the presence of multiple primary transmissions. The secondary receiver carries out minimum mean-squared error (MMSE) detection to effectively decode the secondary data streams, while it performs spectrum sensing on the remaining signal to detect the presence or absence of primary activity. New analytical closed-form expressions regarding some important system measures are obtained, namely, the outage and detection probabilities; the transmission power of the secondary nodes; the probability of unexpected interference at the primary nodes; and the detection efficiency with the aid of the area under the receiver operating characteristic curve. The realistic scenarios of time-varying channel fading and channel estimation errors are accounted for in the derived results. Finally, the enclosed numerical results verify the accuracy of the proposed framework, while some useful engineering insights are also revealed, such as the key role of the detection accuracy in the overall performance and the impact of the transmission power of the secondary nodes on the primary system.
In addition, the concept of simultaneous data reception and spectrum sensing was studied for single-antenna nodes in @cite_25 @cite_14 , and for multiple-antenna nodes in @cite_3 . However, these works used the central limit theorem to approximate the total received signal as a Gaussian input (invoking the constraint of a sufficiently large number of received samples), and they provided only semi-analytical and/or simulation results with respect to the system performance.
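The central-limit-theorem approximation criticized above can be checked numerically: the normalized energy of N noise samples has mean 1 and variance 2/N, and only for large N does it behave like a Gaussian. The sample counts below are arbitrary choices for this sketch:

```python
import numpy as np

rng = np.random.default_rng(3)

# Empirical check of the Gaussian (CLT) approximation of an aggregate
# received-energy statistic; N and the trial count are illustrative.
N = 4000          # received samples per sensing window
trials = 500      # independent sensing windows

samples = rng.standard_normal((trials, N))
stats = np.mean(samples ** 2, axis=1)   # normalized energy per window

emp_mean = float(stats.mean())
emp_var = float(stats.var())

# CLT prediction for the statistic: mean 1, variance 2/N.
assert abs(emp_mean - 1.0) < 0.01
assert abs(emp_var - 2.0 / N) < 1e-4
```

For small N the chi-square shape of the statistic is visibly skewed, which is exactly why the cited works' Gaussian-input assumption requires "a sufficiently large number of received samples".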
{ "cite_N": [ "@cite_14", "@cite_25", "@cite_3" ], "mid": [ "2073686051", "2001603994", "2148213954" ], "abstract": [ "", "The cognitive radio (CR) systems usually arrange for the quiet period to detect the primary user (PU) effectively. Since all CR users do not transmit any data during quiet period, the interference caused by other CR users can be prevented in the channel sensing for PU detection. Even though the quiet period improves the PU detection performance, it degrades the channel utilization of CR system. To cope with this problem, we propose a channel sensing scheme without quiet period, which is based on the pilot cancellation, and analyze its performance. The numerical results show that the proposed scheme highly outperforms the existing PU detection schemes.", "We consider the problem of sensing in the presence of a desired signal in the context of future 3GPP LTE-A based cognitive cellular systems employing multiple-input multiple-output (MIMO) transmission. Energy detection (ED) based on equal gain combining and beamforming are investigated. Receive beamformers for energy detection (ED) are designed according to the Neyman-Pearson criterion to maximize the probability of detection for a given probability of false alarm. Suitable suboptimum solutions to the maximization problem with a good tradeoff between performance and complexity are identified. Furthermore, we also formulate the likelihood ratio test (LRT) for this scenario. Performance simulations indicate that a significant performance gain is achieved in ED if the receive beamformer is chosen properly." ] }
1609.07766
2524038481
Given @math intervals on a line @math , we consider the problem of moving these intervals on @math such that no two intervals overlap and the maximum moving distance of the intervals is minimized. The difficulty for solving the problem lies in determining the order of the intervals in an optimal solution. By interesting observations, we show that it is sufficient to consider at most @math "candidate" lists of ordered intervals. Further, although explicitly maintaining these lists takes @math time and space, by more observations and a pruning technique, we present an algorithm that can compute an optimal solution in @math time and @math space. We also prove an @math time lower bound for solving the problem, which implies the optimality of our algorithm.
Many interval problems have been used to model scheduling problems. We give a few examples. Given @math jobs, each job requests a time interval to use a machine. Suppose there is only one machine and the goal is to find a maximum number of jobs whose requested time intervals do not have any overlap (so that they can use the machine). The problem can be solved in @math time by an easy greedy algorithm @cite_0 . Another related problem is to find a minimum number of machines such that all jobs can be completed @cite_0 . @cite_19 studied a scheduling problem, which is essentially the following problem. Given @math intervals on a line, determine whether it is possible to find a unit-length sub-interval in each input interval, such that no two sub-intervals overlap. An @math time algorithm was given in @cite_19 for it. An optimization version of the problem was also studied @cite_20 @cite_11 , where the goal is to find a maximum number of intervals that contain non-overlapping unit-length sub-intervals. Other scheduling problems on intervals have also been considered, e.g., see @cite_4 @cite_19 @cite_0 @cite_3 @cite_12 @cite_5 @cite_7 .
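The first greedy algorithm mentioned above (selecting a maximum number of pairwise non-overlapping time intervals on one machine) can be sketched in a few lines; this is the standard textbook earliest-finish-time greedy, not code from any cited paper:

```python
def max_disjoint_intervals(jobs):
    """Select a maximum-size subset of pairwise non-overlapping
    intervals by repeatedly taking the job that finishes earliest.

    jobs: list of (start, end) pairs with start < end.
    Runs in O(n log n) time, dominated by the sort.
    """
    chosen = []
    last_end = float("-inf")
    for start, end in sorted(jobs, key=lambda j: j[1]):
        # A job is compatible if it starts no earlier than the
        # finishing time of the last chosen job.
        if start >= last_end:
            chosen.append((start, end))
            last_end = end
    return chosen

# Example: of these three jobs, at most two are mutually compatible.
print(max_disjoint_intervals([(1, 4), (3, 5), (4, 7)]))  # [(1, 4), (4, 7)]
```

The exchange argument behind its correctness (the earliest-finishing job is always part of some optimal solution) is what gives the O(n log n) bound quoted above.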
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_3", "@cite_0", "@cite_19", "@cite_5", "@cite_12", "@cite_20", "@cite_11" ], "mid": [ "1923579438", "2070961414", "2025231649", "", "2055849951", "2161635161", "1501934796", "2047204419", "2079322303" ], "abstract": [ "The main aim of this note is to show that a polynomial-time algorithm for the scheduling problem 1|r_j; p_j = p| ∑ U_j given by Carlier in (1981) is incorrect. In this problem we are given n jobs with release times and deadlines. All jobs have the same processing time p. The objective is to find a non-preemptive schedule that maximizes the number of jobs completed by their deadlines. The feasibility version of this problem, where we ask whether all jobs can meet their deadlines, has been studied thoroughly. Polynomial-time algorithms for this version were first found, independently, by Simons (1978) and Carlier (1981). A faster algorithm, with running time O(n log n), was subsequently given by (1981). The elegant feasibility algorithm of Carlier (1981) is based on dynamic programming and it processes jobs from left to right on the time axis. For each time t, it constructs a partial schedule with jobs that complete at or before time t. Carlier also attempted to apply the same technique to design a polynomial-time algorithm for the maximization version, 1|r_j; p_j = p| ∑ U_j , and claimed an O(n^3 log n)-time algorithm. His result is now widely cited in the literature. We show, however, that this algorithm is not correct, by giving an instance on which it produces a sub-optimal schedule. Our counter-example can, in fact, be extended to support a broader claim, namely that even the general approach from Carlier (1981) cannot yield a polynomial-time algorithm. By this general approach we mean a class of algorithms that processes the input from left to right and makes decisions based on the deadline ordering, and not their exact values. 
The question remains as to how efficiently we can solve the scheduling problem 1|r_j; p_j = p| ∑ U_j . Baptiste (1999) gave an O(n^7)-time algorithm for the more general version of this problem where jobs have weights. We show how to modify his algorithm to obtain a faster, O(n^5)-time algorithm for the non-weighted case. These last two results are discussed only briefly in this note. The complete proofs can be found in the full version of this paper, see (2004).", "We consider the problem of scheduling jobs with given release times and due dates on a single machine to minimize the maximum job lateness. It is NP-hard and remains such if the maximal job processing time is unrestricted and there is no constant bound on the difference between any job release times. We give a polynomial-time solution of the version in which the maximal job processing time and the differences between the job release times are bounded by a constant, which are restrictions that naturally arise in practice. Our algorithm reveals the inherent structure of the problem and also gives conditions when it is able to find an optimal solution unconditionally.", "", "", "The basic problem considered is that of scheduling n unit-time tasks, with arbitrary release times and deadlines, so as to minimize the maximum task completion time. Previous work has shown that this problem can be solved rather easily when all release times are integers. We are concerned with the general case in which noninteger release times are allowed, a generalization that considerably increases the difficulty of the problem even for only a single processor. 
Our results are for the one-processor case, where we provide an @math algorithm based on the concept of “forbidden regions”.", "", "The flameproof styrene polymer foam in accordance with this invention is obtained by melting a granular mixture of styrene polymer and aluminum hydroxide having a grain size of 20-100 mu m and a specific surface of below 1 m2 g in an extruder, incorporating a physical expanding agent, and homogenizing and extruding the mixture.", "We consider the following scheduling problem. The input is a set of jobs with equal processing times, where each job is specified by its release time and deadline. The goal is to determine a single-processor nonpreemptive schedule that maximizes the number of completed jobs. In the online version, each job arrives at its release time. We give two online algorithms with competitive ratios below @math and show several lower bounds on the competitive ratios. First, we give a barely random @math -competitive algorithm that uses only one random bit. We also show a lower bound of @math on the competitive ratio of barely random algorithms that randomly choose one of two deterministic algorithms. If the two algorithms are selected with equal probability, we can further improve the bound to @math . Second, we give a deterministic @math -competitive algorithm in the model that allows restarts, and we show that in this model the ratio @math is optimal. For randomized algorithms with restarts we show a lower bound of @math .", "We study inherent structural properties of a strongly NP-hard problem of scheduling @math jobs with release times and due dates on a single machine to minimize the number of late jobs. Our study leads to two polynomial-time algorithms. The first algorithm with the time complexity @math solves the problem if during its execution no job with some special property occurs. The second algorithm solves the version of the problem when all jobs have the same length. 
The time complexity of the latter algorithm is @math , which is an improvement over the earlier known algorithm with the time complexity @math ." ] }
1609.07766
2524038481
Given @math intervals on a line @math , we consider the problem of moving these intervals on @math such that no two intervals overlap and the maximum moving distance of the intervals is minimized. The difficulty for solving the problem lies in determining the order of the intervals in an optimal solution. By interesting observations, we show that it is sufficient to consider at most @math "candidate" lists of ordered intervals. Further, although explicitly maintaining these lists takes @math time and space, by more observations and a pruning technique, we present an algorithm that can compute an optimal solution in @math time and @math space. We also prove an @math time lower bound for solving the problem, which implies the optimality of our algorithm.
Many problems on wireless sensor networks are also modeled as interval problems. For example, a mobile sensor barrier coverage problem can be modeled as the following interval problem. Given @math intervals on a line (each interval is the region covered by a sensor located at its center) and another segment @math on the line (called a ``barrier''), the goal is to move the intervals such that their union fully covers @math and the maximum moving distance of all intervals is minimized. If all intervals have the same length, @cite_8 solved the problem in @math time and later @cite_18 improved it to @math time. If intervals have different lengths, @cite_18 solved the problem in @math time. The min-sum version of the problem has also been considered. If intervals have the same length, @cite_21 gave an @math time algorithm, and Andrews and Wang @cite_15 solved the problem in @math time. If intervals have different lengths, then the problem becomes NP-hard @cite_18 . Refer to @cite_9 @cite_13 @cite_17 @cite_16 @cite_14 @cite_1 for other interval problems on mobile sensor barrier coverage.
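For intuition, the min-max barrier coverage problem with equal sensing ranges admits a simple decision procedure (can every sensor move at most d and still cover the barrier?) combined with a numeric binary search. This is an illustrative sketch relying on the order-preserving property of equal-range sensors, not the exact O(n log n) algorithm of the cited work, and it assumes the total sensing length suffices to cover the barrier:

```python
def coverable(xs, r, barrier, d):
    """Decide whether sensors with range r at sorted centers xs can
    each move at most d so their intervals cover barrier = (a, b)."""
    a, b = barrier
    cur = a  # leftmost point not yet covered
    for x in xs:
        if cur >= b:
            break
        if x + d + r < cur:   # too far left to help, even moving right
            continue
        if x - d > cur + r:   # cannot reach cur without leaving a gap
            return False
        # Push the center as far right as possible without a gap at cur.
        c = min(x + d, cur + r)
        cur = c + r
    return cur >= b

def min_max_move(xs, r, barrier, iters=60):
    """Binary-search the smallest max move distance (float precision).
    Assumes the instance is feasible for a large enough d."""
    xs = sorted(xs)
    lo = 0.0
    hi = max(abs(x) for x in xs) + abs(barrier[0]) + abs(barrier[1]) + 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if coverable(xs, r, barrier, mid):
            hi = mid
        else:
            lo = mid
    return hi
```

For example, sensors at 2 and 4 with range 1.5 covering the barrier [0, 6] must move to centers 1.5 and 4.5, so the optimal max move is 0.5.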
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_8", "@cite_9", "@cite_21", "@cite_1", "@cite_15", "@cite_16", "@cite_13", "@cite_17" ], "mid": [ "1988777116", "", "", "1755547434", "1530465016", "2125242971", "2949632354", "139009818", "2001156795", "1673927286" ], "abstract": [ "In this paper, we study the problem of moving n sensors on a line to form a barrier coverage of a specified segment of the line such that the maximum moving distance of the sensors is minimized. Previously, it was an open question whether this problem on sensors with arbitrary sensing ranges is solvable in polynomial time. We settle this open question positively by giving an O(n^2 log n) time algorithm. For the special case when all sensors have the same-size sensing range, the previously best solution takes O(n^2) time. We present an O(n log n) time algorithm for this case; further, if all sensors are initially located on the coverage segment, our algorithm takes O(n) time. Also, we extend our techniques to the cycle version of the problem where the barrier coverage is for a simple cycle and the sensors are allowed to move only along the cycle. For sensors with the same-size sensing range, we solve the cycle version in O(n) time, improving the previously best O(n^2) time solution.", "", "", "Sensor networks are ubiquitously used for detection and tracking and as a result covering is one of the main tasks of such networks. We study the problem of maximizing the coverage lifetime of a barrier by mobile sensors with limited battery powers, where the coverage lifetime is the time until there is a breakdown in coverage due to the death of a sensor. Sensors are first deployed and then coverage commences. Energy is consumed in proportion to the distance traveled for mobility, while for coverage, energy is consumed in direct proportion to the radius of the sensor raised to a constant exponent. 
We study two variants which are distinguished by whether the sensing radii are given as part of the input or can be optimized, the fixed radii problem and the variable radii problem. We design parametric search algorithms for both problems for the case where the final order of the sensors is predetermined and for the case where sensors are initially located at barrier endpoints. In contrast, we show that the variable radii problem is strongly NP-hard and provide hardness of approximation results for fixed radii for the case where all the sensors are initially co-located at an internal point of the barrier.", "A set of sensors establishes barrier coverage of a given line segment if every point of the segment is within the sensing range of a sensor. Given a line segment I, n mobile sensors in arbitrary initial positions on the line (not necessarily inside I) and the sensing ranges of the sensors, we are interested in finding final positions of sensors which establish a barrier coverage of I so that the sum of the distances traveled by all sensors from initial to final positions is minimized. It is shown that the problem is NP complete even to approximate up to constant factor when the sensors may have different sensing ranges. When the sensors have an identical sensing range we give several efficient algorithms to calculate the final destinations so that the sensors either establish a barrier coverage or maximize the coverage of the segment if complete coverage is not feasible while at the same time the sum of the distances traveled by all sensors is minimized. Some open problems are also mentioned.", "We study the problem of achieving maximum barrier coverage by sensors on a barrier modeled by a line segment, by moving the minimum possible number of sensors, initially placed at arbitrary positions on the line containing the barrier. 
We consider several cases based on whether or not complete coverage is possible, and whether non-contiguous coverage is allowed in the case when complete coverage is impossible. When the sensors have unequal transmission ranges, we show that the problem of finding a minimum-sized subset of sensors to move in order to achieve maximum contiguous or non-contiguous coverage on a finite line segment barrier is NP-complete. In contrast, if the sensors all have the same range, we give efficient algorithms to achieve maximum contiguous as well as non-contiguous coverage. For some cases, we reduce the problem to finding a maximum-hop path of a certain minimum (maximum) weight on a related graph, and solve it using dynamic programming.", "We consider an interval coverage problem. Given @math intervals of the same length on a line @math and a line segment @math on @math , we want to move the intervals along @math such that every point of @math is covered by at least one interval and the sum of the moving distances of all intervals is minimized. As a basic geometry problem, it has applications in mobile sensor barrier coverage in wireless sensor networks. The previous work solved the problem in @math time. In this paper, by discovering many interesting observations and developing new algorithmic techniques, we present an @math time algorithm. We also show an @math time lower bound for this problem, which implies the optimality of our algorithm.", "One of the most fundamental tasks of wireless sensor networks is to provide coverage of the deployment region. In this paper, we study the coverage of a line segment with a set of wireless sensors with adjustable coverage ranges. Each coverage range of a sensor is an interval centered at that sensor whose length is decided by the power the sensor chooses. The objective is to find a range assignment with the minimum cost. There are two variants of the optimization problem. 
In the discrete variant, each sensor can only choose from a finite set of powers, while in the continuous variant, each sensor can choose power from a given interval. For the discrete variant of the problem, we present a polynomial-time exact algorithm. For the continuous variant of the problem, we develop constant-approximation algorithms when the cost for all sensors is proportional to r^k for some constant k ≥ 1, where r is the covering radius corresponding to the chosen power. Specifically, if k = 1, we give a simple 1.25-approximation algorithm and a fully polynomial-time approximation scheme (FPTAS); if k > 1, we give a simple 2-approximation algorithm.", "Intrusion detection, area coverage and border surveillance are important applications of wireless sensor networks today. They can be (and are being) used to monitor large unprotected areas so as to detect intruders as they cross a border or as they penetrate a protected area. We consider the problem of how to optimally move mobile sensors to the fence (perimeter) of a region delimited by a simple polygon in order to detect intruders from either entering its interior or exiting from it. We discuss several related issues and problems, propose two models, provide algorithms and analyze their optimal mobility behavior.", "Given n points in a circular region C in the plane, we study the problems of moving the n points to the boundary of C to form a regular n-gon such that the maximum (min-max) or the sum (min-sum) of the Euclidean distances traveled by the points is minimized. These problems have applications, e.g., in mobile sensor barrier coverage of wireless sensor networks. The min-max problem further has two versions: the decision version and the optimization version. For the min-max problem, we present an O(n log^2 n) time algorithm for the decision version and an O(n log^3 n) time algorithm for the optimization version. 
The previously best algorithms for the two problem versions take O(n^3.5) time and O(n^3.5 log n) time, respectively. For the min-sum problem we show that a special case with all points initially lying on the boundary of the circular region can be solved in O(n^2) time, improving a previous O(n^4) time solution. For the general min-sum problem, we present a 3-approximation O(n^2) time algorithm. In addition, a by-product of our techniques is an algorithm for dynamically maintaining the maximum matching of a circular convex bipartite graph; our algorithm can handle each vertex insertion or deletion on the graph in O(log^2 n) time. This result may be interesting in its own right." ] }
1609.07472
2524982171
We propose a neural network approach to price EU call options that significantly outperforms some existing pricing models and comes with guarantees that its predictions are economically reasonable. To achieve this, we introduce a class of gated neural networks that automatically learn to divide-and-conquer the problem space for robust and accurate pricing. We then derive instantiations of these networks that are 'rational by design' in terms of naturally encoding a valid call option surface that enforces no arbitrage principles. This integration of human insight within data-driven learning provides significantly better generalisation in pricing performance due to the encoded inductive bias in the learning, guarantees sanity in the model's predictions, and provides econometrically useful byproduct such as risk neutral density.
Asset pricing is a very active research area in finance and mathematical finance. The oldest and most famous model for option pricing is Black--Scholes @cite_3 . The biggest criticism of this model is its incompatibility with the behaviour of real markets due to its constant volatility assumption. The volatility smile exists because real-world return distributions are often fat-tailed and asymmetric. Stochastic volatility models (e.g. @cite_17 ) aim to reproduce this smile behaviour by allowing the volatility itself to follow a random process @cite_17 . Another stream of research suggests including jumps, which represent rare events, in the underlying process to alleviate the smile problem. These models are called Levy models @cite_19 @cite_4 @cite_24 @cite_15 @cite_23 and are able to generate volatility skew or smile. A comprehensive theoretical treatment of asset pricing models can be found in @cite_20 . This paper tackles the skew/smile problem in a more data-driven way: it learns from market prices, so a model that fits the market prices well is expected to carry the same smile structure.
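As a concrete reference point for the Black--Scholes model discussed above, the closed-form European call price can be computed directly; this is the textbook formula, not code from the paper under discussion:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black--Scholes price of a European call.

    S: spot, K: strike, T: time to expiry in years,
    r: risk-free rate, sigma: (constant) volatility.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# At-the-money example: S=K=100, T=1y, r=5%, sigma=20%.
print(round(bs_call(100, 100, 1.0, 0.05, 0.2), 2))  # 10.45
```

Note that the single constant `sigma` is exactly the assumption criticized above: one volatility cannot reproduce the smile observed across strikes and maturities.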
{ "cite_N": [ "@cite_4", "@cite_3", "@cite_24", "@cite_19", "@cite_23", "@cite_15", "@cite_20", "@cite_17" ], "mid": [ "2133672408", "2077791698", "", "2151065060", "2138540172", "2123446302", "", "2064978316" ], "abstract": [ "Brownian motion and normal distribution have been widely used in the Black--Scholes option-pricing framework to model the return of assets. However, two puzzles emerge from many empirical investigations: the leptokurtic feature that the return distribution of assets may have a higher peak and two (asymmetric) heavier tails than those of the normal distribution, and an empirical phenomenon called \"volatility smile\" in option markets. To incorporate both of them and to strike a balance between reality and tractability, this paper proposes, for the purpose of option pricing, a double exponential jump-diffusion model. In particular, the model is simple enough to produce analytical solutions for a variety of option-pricing problems, including call and put options, interest rate derivatives, and path-dependent options. Equilibrium analysis and a psychological interpretation of the model are also presented.", "If options are correctly priced in the market, it should not be possible to make sure profits by creating portfolios of long and short positions in options and their underlying stocks. Using this principle, a theoretical valuation formula for options is derived. Since almost all corporate liabilities can be viewed as combinations of options, the formula and the analysis that led to it are also applicable to corporate liabilities such as common stock, corporate bonds, and warrants. In particular, the formula can be used to derive the discount that should be applied to a corporate bond because of the possibility of default.", "", "The validity of the classic Black-Scholes option pricing formula depends on the capability of investors to follow a dynamic portfolio strategy in the stock that replicates the payoff structure to the option. 
The critical assumption required for such a strategy to be feasible, is that the underlying stock return dynamics can be described by a stochastic process with a continuous sample path. In this paper, an option pricing formula is derived for the more-general case when the underlying stock returns are generated by a mixture of both continuous and jump processes. The derived formula has most of the attractive features of the original Black-Scholes formula in that it does not depend on investor preferences or knowledge of the expected return on the underlying stock. Moreover, the same analysis applied to the options can be extended to the pricing of corporate liabilities.", "We investigate the importance of diffusion and jumps in a new model for asset returns. In contrast to standard models, we allow for jump components displaying finite or infinite activity and variation. Empirical investigations of time series indicate that index dynamics are devoid of a diffusion component, which may be present in the dynamics of individual stocks. This leads to the conjecture, confirmed on options data, that the risk-neutral process should be free of a diffusion component. We conclude that the statistical and risk-neutral processes for equity prices are pure jump processes of infinite activity and finite variation.", "The normal inverse Gaussian distribution is defined as a variance-mean mixture of a normal distribution with the inverse Gaussian as the mixing distribution. The distribution determines an homogeneous Levy process, and this process is representable through subordination of Brownian motion by the inverse Gaussian process. The canonical, Levy type, decomposition of the process is determined. As a preparation for developments in the latter part of the paper the connection of the normal inverse Gaussian distribution to the classes of generalized hyperbolic and inverse Gaussian distributions is briefly reviewed. 
Then a discussion is begun of the potential of the normal inverse Gaussian distribution and Levy process for modelling and analysing statistical data, with particular reference to extensive sets of observations from turbulence and from finance. These areas of application imply a need for extending the inverse Gaussian Levy process so as to accommodate certain, frequently observed, temporal dependence structures. Some extensions, of the stochastic volatility type, are constructed via an observation-driven approach to state space modelling. At the end of the paper generalizations to multivariate settings are indicated.", "", "I use a new technique to derive a closed-form solution for the price of a European call option on an asset with stochastic volatility. The model allows arbitrary correlation between volatility and spot-asset returns. I introduce stochastic interest rates and show how to apply the model to bond options and foreign currency options. Simulations show that correlation between volatility and the spot asset's price is important for explaining return skewness and strike-price biases in the Black-Scholes (1973) model. The solution technique is based on characteristic functions and can be applied to other problems. Article published by Oxford University Press on behalf of the Society for Financial Studies in its journal, The Review of Financial Studies." ] }
1609.07472
2524982171
We propose a neural network approach to price EU call options that significantly outperforms some existing pricing models and comes with guarantees that its predictions are economically reasonable. To achieve this, we introduce a class of gated neural networks that automatically learn to divide-and-conquer the problem space for robust and accurate pricing. We then derive instantiations of these networks that are 'rational by design' in terms of naturally encoding a valid call option surface that enforces no arbitrage principles. This integration of human insight within data-driven learning provides significantly better generalisation in pricing performance due to the encoded inductive bias in the learning, guarantees sanity in the model's predictions, and provides econometrically useful byproduct such as risk neutral density.
There are many methods for implementing option pricing models including: Fourier-based @cite_0 , Tree-based @cite_13 , Finite difference @cite_26 and Monte Carlo methods @cite_11 . In this paper, we employ the fractional FFT method @cite_5 for our benchmark option pricing models as their characteristic functions are known.
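Of the implementation methods listed above, the Monte Carlo approach is the simplest to illustrate: simulate risk-neutral terminal prices and discount the average payoff. A minimal sketch for a European call under geometric Brownian motion, with an assumed fixed seed for reproducibility; it is not the fractional FFT method used in the paper:

```python
import random
from math import exp, sqrt

def mc_call(S, K, T, r, sigma, n_paths=200_000, seed=0):
    """Risk-neutral Monte Carlo price of a European call.

    Simulates terminal prices S_T = S*exp((r - sigma^2/2)*T + sigma*sqrt(T)*Z)
    and discounts the average payoff. Antithetic pairs (Z, -Z) reduce variance.
    """
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * T
    vol = sigma * sqrt(T)
    total = 0.0
    pairs = n_paths // 2
    for _ in range(pairs):
        z = rng.gauss(0.0, 1.0)
        for zz in (z, -z):  # antithetic pair
            total += max(S * exp(drift + vol * zz) - K, 0.0)
    return exp(-r * T) * total / (2 * pairs)

# With these parameters the Black--Scholes reference value is about 10.45;
# the Monte Carlo estimate should land within a few cents of it.
price = mc_call(100, 100, 1.0, 0.05, 0.2)
```

Fourier methods such as the fractional FFT are preferred when, as here, the characteristic function is known in closed form, since they avoid simulation noise entirely.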
{ "cite_N": [ "@cite_26", "@cite_0", "@cite_5", "@cite_13", "@cite_11" ], "mid": [ "1980997475", "1480459000", "2061171222", "", "1982039177" ], "abstract": [ "Abstract The option pricing model developed by Black and Scholes and extended by Merton gives rise to partial differential equations governing the value of an option. When the underlying stock pays no dividends – and in some very restrictive cases when it does – a closed form solution to the differential equation, subject to the appropriate boundary conditions, has been obtained. But, in some relevant cases, such as the one in which the stock pays discrete dividends, no closed form solution has been found. This paper shows how to solve these equations by numerical methods. In addition, the optimal strategy for exercising American options is derived. A numerical illustration of the procedure is also presented.", "This paper shows how the fast Fourier Transform may be used to value options when the characteristic function of the return is known analytically.", "An efficient method for the calculation of the interactions of a 2^m factorial experiment was introduced by Yates and is widely known by his name. The generalization to 3^m was given by (1). Good (2) generalized these methods and gave elegant algorithms for which one class of applications is the calculation of Fourier series. In their full generality, Good's methods are applicable to certain problems in which one must multiply an N-vector by an N × N matrix which can be factored into m sparse matrices, where m is proportional to log N. This results in a procedure requiring a number of operations proportional to N log N rather than N^2. These methods are applied here to the calculation of complex Fourier series. They are useful in situations where the number of data points is, or can be chosen to be, a highly composite number. The algorithm is here derived and presented in a rather different form. Attention is given to the choice of N. 
It is also shown how special advantage can be obtained in the use of a binary computer with N = 2^m and how the entire calculation can be performed within the array of N data storage locations used for the given Fourier coefficients. Consider the problem of calculating the complex Fourier series (1) X(j) = ∑_{k=0}^{N−1} A(k)·W^{jk}, j = 0, 1, ..., N−1.", "", "This paper develops a Monte Carlo simulation method for solving option valuation problems. The method simulates the process generating the returns on the underlying asset and invokes the risk neutrality assumption to derive the value of the option. Techniques for improving the efficiency of the method are introduced. Some numerical examples are given to illustrate the procedure and additional applications are suggested." ] }
1609.07472
2524982171
We propose a neural network approach to price EU call options that significantly outperforms some existing pricing models and comes with guarantees that its predictions are economically reasonable. To achieve this, we introduce a class of gated neural networks that automatically learn to divide-and-conquer the problem space for robust and accurate pricing. We then derive instantiations of these networks that are 'rational by design' in terms of naturally encoding a valid call option surface that enforces no arbitrage principles. This integration of human insight within data-driven learning provides significantly better generalisation in pricing performance due to the encoded inductive bias in the learning, guarantees sanity in the model's predictions, and provides econometrically useful byproduct such as risk neutral density.
There is a long history of computer scientists trying to solve option pricing using neural networks @cite_10 . Option pricing can be seen as a standard regression task, for which there are many established methods, and neural networks (rebranded as deep learning) are one of the most popular choices.
{ "cite_N": [ "@cite_10" ], "mid": [ "2058076138" ], "abstract": [ "A neural network model that processes financial input data is developed to estimate the market price of options at closing. The network's ability to estimate closing prices is compared to the Black-Scholes model, the most widely used model for the pricing of options. Comparisons reveal that the mean squared error for the neural network is less than that of the Black-Scholes model in about half of the cases examined. The differences and similarities in the two modeling approaches are discussed. The neural network, which uses the same financial data as the Black-Scholes model, requires no distribution assumptions and learns the relationships between the financial input data and the option price from the historical data. The option-valuation equilibrium model of Black-Scholes determines option prices under the assumptions that prices follow a continuous time path and that the instantaneous volatility is nonstochastic." ] }
1609.07451
2949954724
The task of AMR-to-text generation is to generate grammatical text that preserves the semantic meaning of a given AMR graph. We attack the task by first partitioning the AMR graph into smaller fragments, then generating the translation for each fragment, and finally deciding the order by solving an asymmetric generalized traveling salesman problem (AGTSP). A Maximum Entropy classifier is trained to estimate the traveling costs, and a TSP solver is used to find the optimized solution. The final model reports a BLEU score of 22.44 on the SemEval-2016 Task 8 dataset.
Our work is related to prior work on AMR @cite_13 . There has been a line of work on AMR parsing @cite_19 @cite_17 @cite_18 @cite_14 @cite_11 @cite_0 , which predicts the AMR structure for a given sentence. In the reverse direction, our work studies sentence generation from a given AMR graph. Different from approaches that map an input AMR graph into a tree before linearization, we apply synchronous rules consisting of AMR graph fragments and text to transform an AMR graph directly into a sentence. In addition to AMR parsing and generation, there has also been work using AMR as a semantic representation in machine translation @cite_4 .
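The decoding step described above casts fragment ordering as an asymmetric TSP whose arc costs come from a trained Maximum Entropy classifier. The sketch below replaces both the classifier and the dedicated TSP solver with illustrative hand-set costs and brute-force search, so every number in it is an assumption:

```python
from itertools import permutations

def best_order(cost, start_cost):
    """Brute-force the fragment order with minimum total transition cost
    (an open path from a virtual START node, not a closed tour)."""
    n = len(cost)
    best, best_c = None, float("inf")
    for perm in permutations(range(n)):
        c = start_cost[perm[0]] + sum(cost[a][b] for a, b in zip(perm, perm[1:]))
        if c < best_c:
            best, best_c = list(perm), c
    return best, best_c

# Hypothetical costs for three fragments; lower means a more fluent transition.
start_cost = [0.1, 0.9, 0.9]
cost = [[0.0, 0.1, 0.8],
        [0.8, 0.0, 0.1],
        [0.8, 0.8, 0.0]]
order, total = best_order(cost, start_cost)   # expect fragment 0, then 1, then 2
```

Brute force is O(n!), which is why a dedicated (AG)TSP solver, as used in the paper, is needed once the fragment count grows beyond a handful.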
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_11", "@cite_4", "@cite_0", "@cite_19", "@cite_13", "@cite_17" ], "mid": [ "2250808720", "2250777616", "2251823395", "1829822087", "2250623140", "2149837184", "2252123671", "2296308987" ], "abstract": [ "This paper presents a synchronous-graph-grammar-based approach for string-to-AMR parsing. We apply Markov Chain Monte Carlo (MCMC) algorithms to learn Synchronous Hyperedge Replacement Grammar (SHRG) rules from a forest that represents likely derivations consistent with a fixed string-to-graph alignment. We make an analogy of string-to-AMR parsing to the task of phrase-based machine translation and come up with an efficient algorithm to learn graph grammars from string-graph pairs. We propose an effective approximation strategy to resolve the complexity issue of graph compositions. We also show some useful strategies to overcome existing problems in an SHRG-based parser and present preliminary results of a graph-grammar-based approach.", "In this demonstration, we will present our online parser that allows users to submit any sentence and obtain an analysis following the specification of AMR (, 2014) to a large extent. This AMR analysis is generated by a small set of rules that convert a native Logical Form analysis provided by a preexisting parser (see Vanderwende, 2015) into the AMR format. While we demonstrate the performance of our AMR parser on data sets annotated by the LDC, we will focus attention in the demo on the following two areas: 1) we will make available AMR annotations for the data sets that were used to develop our parser, to serve as a supplement to the LDC data sets, and 2) we will demonstrate AMR parsers for German, French, Spanish and Japanese that make use of the same small set of LF-to-AMR conversion rules.", "We present a parser for Abstract Meaning Representation (AMR). We treat English-to-AMR conversion within the framework of string-to-tree, syntax-based machine translation (SBMT). 
To make this work, we transform the AMR structure into a form suitable for the mechanics of SBMT and useful for modeling. We introduce an AMR-specific language model and add data and features drawn from semantic resources. Our resulting AMR parser significantly improves upon state-of-the-art results.", "We present an approach to semantics-based statistical machine translation that uses synchronous hyperedge replacement grammars to translate into and from graph-shaped intermediate meaning representations, to our knowledge the first work in NLP to make use of synchronous context free graph grammars. We present algorithms for each step of the semantics-based translation pipeline, including a novel graph-to-word alignment algorithm and two algorithms for synchronous grammar rule extraction. We investigate the influence of syntactic annotations on semantics-based translation by presenting two alternative rule extraction algorithms, one that requires only semantic annotations and another that additionally relies on syntactic annotations, and explore the effect of syntax and language bias in meaning representation structures by running experiments with two different meaning representations, one biased toward an English syntax-like structure and another that is language neutral. While preliminary work, these experiments show promise for semantically-informed machine translation.", "We propose a grammar induction technique for AMR semantic parsing. While previous grammar induction techniques were designed to re-learn a new parser for each target application, the recently annotated AMR Bank provides a unique opportunity to induce a single model for understanding broad-coverage newswire text and support a wide range of applications. We present a new model that combines CCG parsing to recover compositional aspects of meaning and a factor graph to model non-compositional phenomena, such as anaphoric dependencies. 
Our approach achieves 66.2 Smatch F1 score on the AMR bank, significantly outperforming the previous state of the art.", "Abstract Meaning Representation (AMR) is a semantic formalism for which a growing set of annotated examples is available. We introduce the first approach to parse sentences into this representation, providing a strong baseline for future improvement. The method is based on a novel algorithm for finding a maximum spanning, connected subgraph, embedded within a Lagrangian relaxation of an optimization problem that imposes linguistically inspired constraints. Our approach is described in the general framework of structured prediction, allowing future incorporation of additional features and constraints, and may extend to other formalisms as well. Our open-source system, JAMR, is available at: http://github.com/jflanigan/jamr", "We describe Abstract Meaning Representation (AMR), a semantic representation language in which we are writing down the meanings of thousands of English sentences. We hope that a sembank of simple, whole-sentence semantic structures will spur new work in statistical natural language understanding and generation, like the Penn Treebank encouraged work on statistical parsing. This paper gives an overview of AMR and tools associated with it.", "We present a two-stage framework to parse a sentence into its Abstract Meaning Representation (AMR). We first use a dependency parser to generate a dependency tree for the sentence. In the second stage, we design a novel transition-based algorithm that transforms the dependency tree to an AMR graph. There are several advantages with this approach. First, the dependency parser can be trained on a training set much larger than the training set for the tree-to-graph algorithm, resulting in a more accurate AMR parser overall. Our parser yields an improvement of 5% absolute in F-measure over the best previous result. 
Second, the actions that we design are linguistically intuitive and capture the regularities in the mapping between the dependency structure and the AMR of a sentence. Third, our parser runs in nearly linear time in practice in spite of a worst-case complexity of O(n^2)." ] }
1609.07451
2949954724
The task of AMR-to-text generation is to generate grammatical text that preserves the semantic meaning of a given AMR graph. We attack the task by first partitioning the AMR graph into smaller fragments, then generating the translation for each fragment, and finally deciding the order by solving an asymmetric generalized traveling salesman problem (AGTSP). A Maximum Entropy classifier is trained to estimate the traveling costs, and a TSP solver is used to find the optimized solution. The final model reports a BLEU score of 22.44 on the SemEval-2016 Task 8 dataset.
Our work also belongs to the task of text generation @cite_3 . There has been work on generating natural language text from a bag of words @cite_1 @cite_15 , surface syntactic trees @cite_6 @cite_5 , deep semantic graphs @cite_7 and logical forms @cite_16 @cite_9 . We are among the first to investigate generation from AMR, which is a different type of semantic representation.
{ "cite_N": [ "@cite_7", "@cite_9", "@cite_1", "@cite_6", "@cite_3", "@cite_5", "@cite_15", "@cite_16" ], "mid": [ "173713748", "2080656168", "1999383775", "201441355", "1985610876", "2202795099", "1908479511", "1564002882" ], "abstract": [ "Most of the known stochastic sentence generators use syntactically annotated corpora, performing the projection to the surface in one stage. However, in full-fledged text generation, sentence realization usually starts from semantic (predicate-argument) structures. To be able to deal with semantic structures, stochastic generators require semantically annotated, or, even better, multilevel annotated corpora. Only then can they deal with such crucial generation issues as sentence planning, linearization and morphologization. Multilevel annotated corpora are increasingly available for multiple languages. We take advantage of them and propose a multilingual deep stochastic sentence realizer that mirrors the state-of-the-art research in semantic parsing. The realizer uses an SVM learning algorithm. For each pair of adjacent levels of annotation, a separate decoder is defined. So far, we evaluated the realizer for Chinese, English, German, and Spanish.", "This paper shows that discriminative reranking with an averaged perceptron model yields substantial improvements in realization quality with CCG. The paper confirms the utility of including language model log probabilities as features in the model, which prior work on discriminative training with log linear models for HPSG realization had called into question. The perceptron model allows the combination of multiple n-gram models to be optimized and then augmented with both syntactic features and discriminative n-gram features. 
The full model yields a state-of-the-art BLEU score of 0.8506 on Section 23 of the CCGbank, to our knowledge the best score reported to date using a reversible, corpus-engineered grammar.", "Abstract-like text summarisation requires a means of producing novel summary sentences. In order to improve the grammaticality of the generated sentence, we model a global (sentence) level syntactic structure. We couch statistical sentence generation as a spanning tree problem in order to search for the best dependency tree spanning a set of chosen words. We also introduce a new search algorithm for this task that models argument satisfaction to improve the linguistic validity of the generated tree. We treat the allocation of modifiers to heads as a weighted bipartite graph matching (or assignment) problem, a well studied problem in graph theory. Using BLEU to measure performance on a string regeneration task, we found an improvement, illustrating the benefit of the spanning tree approach armed with an argument satisfaction model.", "We present partial-tree linearization, a generalized word ordering (i.e. ordering a set of input words into a grammatical and fluent sentence) task for text-to-text applications. Recent studies of word ordering can be categorized into either abstract word ordering (no input syntax except for POS) or tree linearization (input words are associated with a full unordered syntax tree). Partial-tree linearization covers the whole spectrum of input between these two extremes. By allowing POS and dependency relations to be associated with any subset of input words, partial-tree linearization is more practical for a dependency-based NLG pipeline, such as transfer-based MT and abstractive text summarization. In addition, a partial-tree linearizer can also perform abstract word ordering and full-tree linearization. 
Our system achieves the best published results on standard PTB evaluations of these tasks.", "In this article, we give an overview of Natural Language Generation (NLG) from an applied system-building perspective. The article includes a discussion of when NLG techniques should be used; suggestions for carrying out requirements analyses; and a description of the basic NLG tasks of content determination, discourse planning, sentence aggregation, lexicalization, referring expression generation, and linguistic realisation. Throughout, the emphasis is on established techniques that can be used to build simple but practical working systems now. We also provide pointers to techniques in the literature that are appropriate for more complicated scenarios.", "There has been growing interest in stochastic methods to natural language generation (NLG). While most NLG pipelines separate morphological generation and syntactic linearization, the two tasks are closely related. In this paper, we study joint morphological generation and linearization, making use of word order and inflections information for both tasks and reducing error propagation. Experiments show that the joint method significantly outperforms a strong pipelined baseline (by 1.1 BLEU points). It also achieves the best reported result on the Generation Challenge 2011 shared task.", "Word ordering is a fundamental problem in text generation. In this article, we study word ordering using a syntax-based approach and a discriminative model. Two grammar formalisms are considered: Combinatory Categorial Grammar CCG and dependency grammar. Given the search for a likely string and syntactic analysis, the search space is massive, making discriminative training challenging. We develop a learning-guided search framework, based on best-first search, and investigate several alternative training algorithms. The framework we present is flexible in that it allows constraints to be imposed on output word orders. 
To demonstrate this flexibility, a variety of input conditions are considered. First, we investigate a \"pure\" word-ordering task in which the input is a multi-set of words, and the task is to order them into a grammatical and fluent sentence. This task has been tackled previously, and we report improved performance over existing systems on a standard Wall Street Journal test set. Second, we tackle the same reordering problem, but with a variety of input conditions, from the bare case with no dependencies or POS tags specified, to the extreme case where all POS tags and unordered, unlabeled dependencies are provided as input and various conditions in between. When applied to the NLG 2011 shared task, our system gives competitive results compared with the best-performing systems, which provide a further demonstration of the practical utility of our system.", "We present a novel ensemble of six methods for improving the efficiency of chart realization. The methods are couched in the framework of Combinatory Categorial Grammar (CCG), but we conjecture that they can be adapted to related grammatical frameworks as well. The ensemble includes two new methods introduced here—feature-based licensing and instantiation of edges, and caching of category combinations—in addition to four previously introduced methods—index filtering, LF chunking, edge pruning based on n-gram scores, and anytime search. We compare the relative contributions of each method using two test grammars, and show that the methods work best in combination. Our evaluation also indicates that despite the exponential worst-case complexity of the basic algorithm, the methods together can constrain the realization problem sufficiently to meet the interactive needs of natural language dialogue systems." ] }
1609.07436
2951683119
A main problem in autonomous vehicles in general, and in UAVs in particular, is the determination of the attitude angles. A novel method to estimate these angles using off-the-shelf components is presented. This paper introduces an AHRS based on the UKF using the TRIAD algorithm as the observation model. The performance of the method is assessed through simulations and compared to that of an AHRS based on the EKF. The paper presents field experiment results using a real fixed-wing UAV. The results show good real-time performance with low computational cost on a microcontroller.
As highlighted in @cite_23 , it is important to base the Kalman filter on an accurate model. In our context, the models presented in the following section are well established in the literature and agree with experimental results.
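How strongly the model assumptions enter can be seen even in a scalar Kalman filter. The sketch below is not the UKF of the cited work; it only shows where the process model (q) and measurement model (r) appear, with illustrative numbers:

```python
def scalar_kf(measurements, q=0.0, r=1.0, x0=0.0, p0=1e6):
    """One-dimensional Kalman filter for a constant state.
    q: process-noise variance, r: measurement-noise variance."""
    x, p = x0, p0
    for z in measurements:
        p = p + q                # predict: state modelled as constant plus noise q
        k = p / (p + r)          # Kalman gain: trades model against measurement
        x = x + k * (z - x)      # update with the innovation
        p = (1.0 - k) * p
    return x

estimate = scalar_kf([5.2, 4.9, 5.1, 4.8, 5.0])   # converges near the sample mean
```

With q = 0 and a diffuse prior the filter reduces to recursive averaging; an inaccurate q or r skews the gain k, and with it every estimate, which is why model accuracy matters.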
{ "cite_N": [ "@cite_23" ], "mid": [ "2115958120" ], "abstract": [ "A new adaptive Unscented Kalman Filter (UKF) algorithm for actuator failure estimation is proposed. The novel filter method with adaptability to statistical characteristic of noise is presented to improve the estimation accuracy of traditional UKF. The algorithm with the adaptability to statistical characteristic of noise, named Kalman Filter (KF)-based adaptive UKF, is proposed to improve the UKF performance. Such an adaptive mechanism is intended to compensate for the lack of a priori knowledge. The asymptotic property of the adaptive UKF is discussed. The Actuator Healthy Coefficients (AHCs) are introduced to denote the actuator failure model while the adaptive UKF is employed for on-line estimation of both the flight states and the AHCs parameters of rotorcraft UAV (RUAV). Simulations are conducted using the model of SIA-Heli-90 RUAV of Shenyang Institute of Automation, CAS. The results are compared with those obtained by normal UKF to demonstrate the effectiveness and improvements of the adaptive UKF algorithm. Besides, we also compare this algorithm with the MIT-based one which we propose in previous research." ] }
1609.07436
2951683119
A main problem in autonomous vehicles in general, and in UAVs in particular, is the determination of the attitude angles. A novel method to estimate these angles using off-the-shelf components is presented. This paper introduces an AHRS based on the UKF using the TRIAD algorithm as the observation model. The performance of the method is assessed through simulations and compared to that of an AHRS based on the EKF. The paper presents field experiment results using a real fixed-wing UAV. The results show good real-time performance with low computational cost on a microcontroller.
An extensive review of navigation systems is given in @cite_1 . It covers different algorithms, including Kalman filters and the TRIAD algorithm. The TRIAD algorithm was introduced by Shuster and Oh in @cite_15 to determine the DCM of a spacecraft.
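The TRIAD algorithm has a compact closed form: build one orthonormal triad from the two body-frame measurements, another from the corresponding reference vectors, and multiply. A numpy sketch, using idealized noise-free gravity and magnetic-field directions as the two observations (the specific vectors are illustrative):

```python
import numpy as np

def triad(b1, b2, r1, r2):
    """TRIAD (Shuster and Oh): DCM mapping reference vectors r1, r2
    onto body-frame measurements b1, b2. The b1/r1 pair is the trusted one."""
    def frame(v1, v2):
        t1 = v1 / np.linalg.norm(v1)          # first axis: the trusted direction
        t2 = np.cross(v1, v2)                 # second axis: normal to both
        t2 = t2 / np.linalg.norm(t2)
        return np.column_stack([t1, t2, np.cross(t1, t2)])
    return frame(b1, b2) @ frame(r1, r2).T

# Example: the true attitude is a 90-degree yaw rotation.
R_true = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
r_grav = np.array([0.0, 0.0, 1.0])        # reference gravity direction
r_mag = np.array([1.0, 0.0, 0.0])         # reference magnetic-field direction
b_grav, b_mag = R_true @ r_grav, R_true @ r_mag   # simulated body measurements
R_est = triad(b_grav, b_mag, r_grav, r_mag)       # recovers R_true
```

With noisy sensors the TRIAD estimate is deterministic rather than optimal, which is why the paper uses it only as the observation model inside a UKF.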
{ "cite_N": [ "@cite_15", "@cite_1" ], "mid": [ "2006863675", "2288376355" ], "abstract": [ "Two computationally efficient algorithms are presented for determining three-axis attitude from two or more vector observations. The first of these, the TRIAD algorithm, provides a deterministic (i.e., nonoptimal) solution for the attitude based on two vector observations. The second, the QUEST algorithm, is an optimal algorithm which determines the attitude that achieves the best weighted overlap of an arbitrary number of reference and observation vectors. Analytical expressions are given for the covariance matrices for the two algorithms using a fairly realistic model for the measurement errors. The mathematical relationship of the two algorithms and their relative merits are discussed and numerical examples are given. The advantage of computing the covariance matrix in the body frame rather than in the inertial frame (e.g., in terms of Euler angles) is emphasized. These results are valuable when a single-frame attitude must be computed frequently. They will also be useful to the mission analyst or spacecraft engineer for the evaluation of launch-window constraints or of attitude accuracies for different attitude sensor configurations.", "Significant developments and technical trends in the area of navigation systems are reviewed. In particular, the integration of the Global Positioning System (GPS) and Inertial Navigation System (INS) has been an important development in modern navigation. The review concentrates also on the analysis, investigation, assessment and performance evaluation of existing integrated navigation systems of accuracy, performance, low cost and all the issues that aid in optimizing their operating efficiency. The integration of GPS and INS has been successfully used in practice during the past decades. 
However, much of the work has focused on the use of a high accuracy Inertial Measurement Unit (IMU), which is an inertial sensors block without navigation solution output, and hence, this research area is also reviewed in this paper." ] }
1609.07436
2951683119
A main problem in autonomous vehicles in general, and in UAVs in particular, is the determination of the attitude angles. A novel method to estimate these angles using off-the-shelf components is presented. This paper introduces an AHRS based on the UKF using the TRIAD algorithm as the observation model. The performance of the method is assessed through simulations and compared to that of an AHRS based on the EKF. The paper presents field experiment results using a real fixed-wing UAV. The results show good real-time performance with low computational cost on a microcontroller.
The use of IMUs based on MEMS technology to estimate attitude angles in industry has been increasing in recent years, for example in the fastening tool tracking system of @cite_21 .
{ "cite_N": [ "@cite_21" ], "mid": [ "2114380364" ], "abstract": [ "This paper utilizes an intelligent system which incorporates Kalman filters (KFs) and a fuzzy expert system to track the tip of a fastening tool and to identify the fastened bolt. This system employs one inertial measurement unit and one position sensor to determine the orientation and the center of mass location of the tool. KFs are used to estimate the orientation of the tool and the center of mass location of the tool. Although a KF is used for the orientation estimation, orientation error increases over time due to the integration of angular velocity error. Therefore, a methodology to correct the orientation error is required when the system is used for an extended period of time. This paper proposes a method to correct the tilt angle and orientation errors using a fuzzy expert system. When a tool fastens a bolt, the system identifies the fastened bolt using a fuzzy expert system. Through this bolt identification step, the 3-D orientation error of the tool is corrected by using the location and orientation of the fastened bolt and the position sensor outputs. Using the orientation correction method will, in turn, result in improved reliability in determining the tool tip location. The fastening tool tracking system was experimentally tested in a lab environment, and the results indicate that such a system can successfully identify the fastened bolts." ] }
1609.07436
2951683119
A main problem in autonomous vehicles in general, and in UAVs in particular, is the determination of the attitude angles. A novel method to estimate these angles using off-the-shelf components is presented. This paper introduces an AHRS based on the UKF using the TRIAD algorithm as the observation model. The performance of the method is assessed through simulations and compared to that of an AHRS based on the EKF. The paper presents field experiment results using a real fixed-wing UAV. The results show good real-time performance with low computational cost on a microcontroller.
A well-known problem with gyrometers is bias, so other sensors have to be used to correct it. For instance, in @cite_6 biases are corrected using three-axis accelerometers. In @cite_13 a new configuration of eight accelerometers is proposed for measuring angular velocities in small UAVs. Other approaches rely on magnetometers @cite_14 to estimate the yaw angle in helicopters, or on GPS @cite_24 to estimate the position as well as the attitude of fixed-wing UAVs. Alternatively, other papers propose not to use gyrometers at all in conventional aircraft, but several GPS receivers instead @cite_28 .
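A common way to use accelerometers against gyro bias is a complementary filter: blend the integrated gyro rate (smooth but drifting) with the accelerometer tilt angle (noisy but drift-free). A minimal sketch with synthetic, noise-free signals; all constants are illustrative:

```python
def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Blend integrated gyro rate with the accelerometer tilt angle.
    alpha close to 1 trusts the gyro in the short term, the accel long term."""
    angle = accel_angles[0]
    for w, a in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + w * dt) + (1.0 - alpha) * a
    return angle

true_angle = 0.3                 # rad, vehicle held at a constant tilt
bias = 0.05                      # rad/s constant gyro bias
n = 2000
gyro = [bias] * n                # true rate is zero, only the bias remains
accel = [true_angle] * n         # idealized noise-free accelerometer tilt
est = complementary_filter(gyro, accel)
drift_only = true_angle + bias * 0.01 * n   # pure gyro integration, for contrast
```

Pure integration drifts by a full radian here, while the blended estimate stays within roughly alpha*bias*dt/(1-alpha), about 0.025 rad, of the truth.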
{ "cite_N": [ "@cite_14", "@cite_28", "@cite_6", "@cite_24", "@cite_13" ], "mid": [ "", "2107556327", "2016599477", "2025033566", "2062226586" ], "abstract": [ "", "Attitude determination systems that use inexpensive sensors and are based on computationally efficient and robust algorithms are indispensable for real-time vehicle navigation, guidance and control applications. This paper describes an attitude determination system that is based on two vector measurements of non-zero, non-colinear vectors. The algorithm is based on a quaternion formulation of Wahba's (1966) problem, whereby the error quaternion (q sub e ) becomes the observed state and can be cast into a standard linear measurement equation. Using the Earth's magnetic field and gravity as the two measured quantities, a low-cost attitude determination system is proposed. An iterated least-squares solution to the attitude determination problem is tested on simulated static cases, and shown to be globally convergent. A time-varying Kalman filter implementation of the same formulation is tested on simulated data and experimental data from a maneuvering aircraft. The time-varying Kalman filter implementation of this algorithm is exercised on simulated and real data collected from an inexpensive triad of accelerometers and magnetometers. The accelerometers in conjunction with the derivative of GPS velocity provided a measure of the gravitation field vector and the magnetometers measured the Earth's magnetic field vector. Tracking errors on experimental data are shown to be less than 1 degree mean and standard deviation of approximately 11 degrees in yaw, and 3 degrees in pitch and roll. Best case performance of the system during maneuvering is shown to improve standard deviations to approximately 3 degrees in yaw, and 1.5 degrees in pitch and roll.", "In this paper, a low-cost attitude estimation system is introduced. The system is developed with MEMS sensors including rate gyros and accelerometers. 
Composition and principle of the system are described. The rigid body kinematics is modeled with a quaternion, which eliminates attitude estimation singularities. The real-time Kalman filter is designed. Experiments were conducted in both static and dynamic conditions. The experimental results demonstrate that the algorithm and hardware are feasible and suitable for applications where critical accuracy and real-time requirements are needed.", "This paper presents a framework for the automation of a small UAV using a low cost sensor suite, MNAV, and an embedded computing platform, Stargate, which together provide a complete avionics package for aerial robotic applications. In order to provide a complete INS solution (i.e., attitude, velocity, position, and biases), an extended Kalman filter algorithm is developed and implemented in real-time. A devised control strategy utilizes multiple PID loops with a hierarchy enabling simple attitude stabilization to full waypoint navigation. The developed ground station unit, a laptop computer, communicates with the avionics package via 802.11b WiFi, displays the aircraft critical information, provides in-flight PID gain tunings, and uploads waypoints through a simple GUI. The system is installed in an off-the-shelf delta-wing R/C aircraft and demonstrates its performance for aerial robotic applications", "The scheme to calculate attitude angle of UAV was introduced to replace high-priced angular velocity gyroscope, so as to lower the cost of UAV. A new eight-accelerometer configuration for attitude angle calculation of UAV was proposed. It can optimize algorithm, remove accumulative error in process of calculation and improve calculation precision through multi-sensor redundant information. The digital simulation results show that attitude angle calculation can satisfy current requirements and has low calculation error and time accumulation, which is of practical significance to research on inertia device." ] }
1609.07560
2964345570
A big challenge in environmental monitoring is the spatiotemporal variation of the phenomena to be observed. To enable persistent sensing and estimation in such a setting, it is beneficial to have a time-varying underlying environmental model. Here we present a planning and learning method that enables an autonomous marine vehicle to perform persistent ocean monitoring tasks by learning and refining an environmental model. To alleviate the computational bottleneck caused by the large-scale accumulated data, we propose a framework that iterates between a planning component aimed at collecting the most information-rich data, and a sparse Gaussian Process learning component where the environmental model and hyperparameters are learned online by taking advantage of only a subset of data that provides the greatest contribution. Our simulations with ground-truth ocean data show that the proposed method is both accurate and efficient.
Scientists are able to gain a greater understanding of environmental processes (e.g., physical, chemical, or biological parameters) through environmental sensing and monitoring @cite_18 . However, many environmental monitoring scenarios involve large environmental spaces and require a considerable amount of work for collecting the data. Increasingly, a variety of autonomous robotic systems, including marine vehicles @cite_17 , aerial vehicles @cite_16 , and ground vehicles @cite_21 , are designed and deployed for environmental monitoring, replacing the conventional method of deploying static sensors in areas of interest @cite_22 . In particular, autonomous underwater vehicles (AUVs) such as marine gliders are becoming popular due to their long-range (hundreds of kilometers) and long-term (weeks or even months) monitoring capabilities @cite_14 @cite_15 @cite_25 .
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_22", "@cite_21", "@cite_15", "@cite_16", "@cite_25", "@cite_17" ], "mid": [ "2072931689", "1453435668", "2039802096", "2119816061", "2143872920", "2120225005", "2106353989", "85781994" ], "abstract": [ "Robotic systems are increasingly being utilized as fundamental data-gathering tools by scientists, allowing new perspectives and a greater understanding of the planet and its environmental processes. Today's robots are already exploring our deep oceans, tracking harmful algal blooms and pollution spread, monitoring climate variables, and even studying remote volcanoes. This article collates and discusses the significant advancements and applications of marine, terrestrial, and airborne robotic systems developed for environmental monitoring during the last two decades. Emerging research trends for achieving large-scale environmental monitoring are also reviewed, including cooperative robotic teams, robot and wireless sensor network (WSN) interaction, adaptive sampling and model-aided path planning. These trends offer efficient and precise measurement of environmental processes at unprecedented scales that will push the frontiers of robotic and natural sciences.", "The Amundsen Sea is one of the most productive polynyas in the Antarctic per unit area and is undergoing rapid changes including a reduction in sea ice duration, thinning ice sheets, retreat of glaciers and the potential collapse of the Thwaites Glacier in Pine Island Bay. A growing body of research has indicated that these changes are altering the water mass properties and associated biogeochemistry within the polynya. Unfortunately difficulties in accessing the remote location have greatly limited the amount of in situ data that has been collected. In this study data from a Teledyne-Webb Slocum glider was used to supplement ship-based sampling along the Dotson Ice Shelf (DIS). 
This autonomous underwater vehicle revealed a detailed view of a meltwater laden outflow from below the western flank of the DIS. Circumpolar Deep Water intruding onto the shelf drives glacial melt and the supply of macronutrients that, along with ample light, supports the large phytoplankton blooms in the Amundsen Sea Polynya. Less well understood is the source of micronutrients, such as iron, necessary to support this bloom to the central polynya where chlorophyll concentrations are highest. This outflow region showed decreasing optical backscatter with proximity to the bed indicating that particulate matter was sourced from the overlying glacier rather than resuspended sediment. This result suggests that particulate iron, and potentially phytoplankton primary productivity, is intrinsically linked to the magnitude and duration of sub-glacial melt from Circumpolar Deep Water intrusions onto the shelf.", "Traditionally, environmental monitoring is achieved by a small number of expensive and high precision sensing units. Collected data are retrieved directly from the equipment at the end of the experiment and after the unit is recovered. The implementation of a wireless sensor network provides an alternative solution by deploying a larger number of disposable sensor nodes. Nodes are equipped with sensors with less precision, however, the network as a whole provides better spatial resolution of the area and the users can have access to the data immediately. This paper presents a comprehensive review of the available solutions to support wireless sensor network environmental monitoring applications.", "In this paper we present initial experiments towards environmental monitoring with a mobile platform. A prototype of a pollution monitoring robot was set up which measures the gas distribution using an \"electronic nose\" and provides three dimensional wind measurements using an ultrasonic anemometer. 
We describe the design of the robot and the experimental setup used to run trials under varying environmental conditions. We then present the results of the gas distribution mapping. The trials, which were carried out in three uncontrolled environments with very different properties (an enclosed indoor area, a part of a long corridor with open ends and a high ceiling, and an outdoor scenario), are presented and discussed.", "The glider coordinated control system (GCCS) uses a detailed glider model for prediction and a simple particle model for planning to steer a fleet of underwater gliders to a set of coordinated trajectories. The GCCS also serves as a simulation testbed for the design and evaluation of multivehicle control laws. In this brief, we describe the GCCS and present experimental results for a virtual deployment in Monterey Bay, CA and a real deployment in Buzzards Bay, MA.", "Unmanned Aircraft Systems (UAS) have evolved rapidly over the past decade driven primarily by military uses, and have begun finding application among civilian users for earth sensing reconnaissance and scientific data collection purposes. Among UAS, promising characteristics are long flight duration, improved mission safety, flight repeatability due to improving autopilots, and reduced operational costs when compared to manned aircraft. The potential advantages of an unmanned platform, however, depend on many factors, such as aircraft, sensor types, mission objectives, and the current UAS regulatory requirements for operations of the particular platform. The regulations concerning UAS operation are still in the early development stages and currently present significant barriers to entry for scientific users. In this article we describe a variety of platforms, as well as sensor capabilities, and identify advantages of each as relevant to the demands of users in the scientific research sector.
We also briefly discuss the current state of regulations affecting UAS operations, with the purpose of informing the scientific community about this developing technology whose potential for revolutionizing natural science observations is similar to those transformations that GIS and GPS brought to the community two decades ago.", "A full-scale adaptive ocean sampling network was deployed throughout the month-long 2006 Adaptive Sampling and Prediction (ASAP) field experiment in Monterey Bay, California. One of the central goals of the field experiment was to test and demonstrate newly developed techniques for coordinated motion control of autonomous vehicles carrying environmental sensors to efficiently sample the ocean. We describe the field results for the heterogeneous fleet of autonomous underwater gliders that collected data continuously throughout the month-long experiment. Six of these gliders were coordinated autonomously for 24 days straight using feedback laws that scale with the number of vehicles. These feedback laws were systematically computed using recently developed methodology to produce desired collective motion patterns, tuned to the spatial and temporal scales in the sampled fields for the purpose of reducing statistical uncertainty in field estimates. The implementation was designed to allow for adaptation of coordinated sampling patterns using human-in-the-loop decision making, guided by optimization and prediction tools. The results demonstrate an innovative tool for ocean sampling and provide a proof of concept for an important field robotics endeavor that integrates coordinated motion control with adaptive sampling.", "In this paper we present strategies for adaptive sampling using Autonomous Underwater Vehicle (AUV) fleets. The central theme of our strategies is the use of feedback that integrates distributed in-situ measurements into a coordinated mission planner.
The measurements consist of GPS updates and estimated gradients of the environmental fields (e.g., temperature) that are used to navigate the AUV fleets enabling effective front tracking and/or feature detection. To this effect these fleets are required to translate to collect and seek good data, expand/contract to effect changes in sensor resolution, and rotate and reconfigure to maximize sensing coverage, all while retaining a prescribed formation. These strategies play a key role in directing a cooperative fleet of autonomous underwater gliders in the first experiment of the Office of Naval Research sponsored Autonomous Ocean Sampling Network II (AOSN-II) project in Monterey Bay, during August-September 2003. We present the coordination framework and investigate the effectiveness of our sampling strategies in the context of AOSN-II via detailed simulations." ] }
1609.07560
2964345570
A big challenge in environmental monitoring is the spatiotemporal variation of the phenomena to be observed. To enable persistent sensing and estimation in such a setting, it is beneficial to have a time-varying underlying environmental model. Here we present a planning and learning method that enables an autonomous marine vehicle to perform persistent ocean monitoring tasks by learning and refining an environmental model. To alleviate the computational bottleneck caused by the large-scale accumulated data, we propose a framework that iterates between a planning component aimed at collecting the most information-rich data, and a sparse Gaussian Process learning component where the environmental model and hyperparameters are learned online by taking advantage of only a subset of data that provides the greatest contribution. Our simulations with ground-truth ocean data show that the proposed method is both accurate and efficient.
We are interested in the problem of collecting data about a scalar field of an important environmental attribute such as temperature, salinity, or chlorophyll content of the ocean, and learning a model that best describes the environment (i.e., the level of the chosen attribute at every spot in the entire field). However, the unknown environmental phenomena that we are interested in can be non-stationary @cite_5 . Fig. shows the variations of salinity data in the Southern California Bight region generated by the Regional Ocean Modeling System (ROMS) @cite_26 . In order to provide a good estimate of the state of the environment and keep the prediction model current at all times, environmental sensing (information gathering) needs to be carried out persistently to keep up with possible variations @cite_12 .
{ "cite_N": [ "@cite_5", "@cite_26", "@cite_12" ], "mid": [ "2123110404", "2100986500", "" ], "abstract": [ "A key challenge of environmental sensing and monitoring is that of sensing, modeling, and predicting large-scale, spatially correlated environmental phenomena, especially when they are unknown and non-stationary. This paper presents a decentralized multi-robot active sensing (DEC-MAS) algorithm that can efficiently coordinate the exploration of multiple robots to gather the most informative observations for predicting an unknown, non-stationary phenomenon. By modeling the phenomenon using a Dirichlet process mixture of Gaussian processes (DPM-GPs), our work here is novel in demonstrating how DPM-GPs and its structural properties can be exploited to (a) formalize an active sensing criterion that trades off between gathering the most informative observations for estimating the unknown, non-stationary spatial correlation structure vs. that for predicting the phenomenon given the current, imprecise estimate of the correlation structure, and (b) support efficient decentralized coordination. We also provide a theoretical performance guarantee for DEC-MAS and analyze its time complexity. We empirically demonstrate using two real-world datasets that DEC-MAS outperforms state-of-the-art MAS algorithms.", "Abstract The purpose of this study is to find a combination of optimal numerical algorithms for time-stepping and mode-splitting suitable for a high-resolution, free-surface, terrain-following coordinate oceanic model. Due to mathematical feedback between the baroclinic momentum and tracer equations and, similarly, between the barotropic momentum and continuity equations, it is advantageous to treat both modes so that, after a time step for the momentum equation, the computed velocities participate immediately in the computation of tracers and continuity, and vice versa, rather than advancing all equations for one time step simultaneously. 
This leads to a new family of time-stepping algorithms that combine forward–backward feedback with the best known synchronous algorithms, allowing an increased time step due to the enhanced internal stability without sacrificing its accuracy. Based on these algorithms we design a split-explicit hydrodynamic kernel for a realistic oceanic model, which addresses multiple numerical issues associated with mode splitting. This kernel utilizes consistent temporal averaging of the barotropic mode via a specially designed filter function to guarantee both exact conservation and constancy preservation properties for tracers and yields more accurate (up to second-order), resolved barotropic processes, while preventing aliasing of unresolved barotropic signals into the slow baroclinic motions. It has a more accurate mode-splitting due to redefined barotropic pressure-gradient terms to account for the local variations in density field, while maintaining the computational efficiency of a split model. It is naturally compatible with a variety of centered and upstream-biased high-order advection algorithms, and helps to mitigate computational cost of expensive physical parameterization of mixing processes and submodels.", "" ] }
1609.07560
2964345570
A big challenge in environmental monitoring is the spatiotemporal variation of the phenomena to be observed. To enable persistent sensing and estimation in such a setting, it is beneficial to have a time-varying underlying environmental model. Here we present a planning and learning method that enables an autonomous marine vehicle to perform persistent ocean monitoring tasks by learning and refining an environmental model. To alleviate the computational bottleneck caused by the large-scale accumulated data, we propose a framework that iterates between a planning component aimed at collecting the most information-rich data, and a sparse Gaussian Process learning component where the environmental model and hyperparameters are learned online by taking advantage of only a subset of data that provides the greatest contribution. Our simulations with ground-truth ocean data show that the proposed method is both accurate and efficient.
We aim to estimate the current state of the environment and provide a nowcast (not a forecast or hindcast) of the environment, by navigating the robots to collect the information. To model spatial phenomena, a common approach in spatial statistics is to use the rich class of Gaussian Processes @cite_2 @cite_7 @cite_5 . In this work, we also employ this broadly adopted approach to build and learn an underlying model of interest.
{ "cite_N": [ "@cite_5", "@cite_7", "@cite_2" ], "mid": [ "2123110404", "1555599194", "" ], "abstract": [ "A key challenge of environmental sensing and monitoring is that of sensing, modeling, and predicting large-scale, spatially correlated environmental phenomena, especially when they are unknown and non-stationary. This paper presents a decentralized multi-robot active sensing (DEC-MAS) algorithm that can efficiently coordinate the exploration of multiple robots to gather the most informative observations for predicting an unknown, non-stationary phenomenon. By modeling the phenomenon using a Dirichlet process mixture of Gaussian processes (DPM-GPs), our work here is novel in demonstrating how DPM-GPs and its structural properties can be exploited to (a) formalize an active sensing criterion that trades off between gathering the most informative observations for estimating the unknown, non-stationary spatial correlation structure vs. that for predicting the phenomenon given the current, imprecise estimate of the correlation structure, and (b) support efficient decentralized coordination. We also provide a theoretical performance guarantee for DEC-MAS and analyze its time complexity. We empirically demonstrate using two real-world datasets that DEC-MAS outperforms state-of-the-art MAS algorithms.", "In many sensing applications, including environmental monitoring, measurement systems must cover a large space with only limited sensing resources. One approach to achieve required sensing coverage is to use robots to convey sensors within this space. Planning the motion of these robots - coordinating their paths in order to maximize the amount of information collected while placing bounds on their resources (e.g., path length or energy capacity) - is aNP-hard problem. In this paper, we present an efficient path planning algorithm that coordinates multiple robots, each having a resource constraint, to maximize the \"informativeness\" of their visited locations. 
In particular, we use a Gaussian Process to model the underlying phenomenon, and use the mutual information between the visited locations and remainder of the space to characterize the amount of information collected. We provide strong theoretical approximation guarantees for our algorithm by exploiting the submodularity property of mutual information. In addition, we improve the efficiency of our approach by extending the algorithm using branch and bound and a region-based decomposition of the space. We provide an extensive empirical analysis of our algorithm, comparing with existing heuristics on datasets from several real world sensing applications.", "" ] }
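The GP-based environmental modeling adopted in the record above can be illustrated concretely. The sketch below is a minimal, generic GP regression with a squared-exponential kernel; the kernel choice, length scale, and noise level are illustrative assumptions, not the settings used in the cited works.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, sigma_f=1.0):
    """Squared-exponential covariance between row-vector point sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sigma_f**2 * np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-2, length_scale=1.0):
    """Posterior mean and variance of a GP at X_test, given noisy observations."""
    K = rbf_kernel(X_train, X_train, length_scale) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test, length_scale)
    K_ss = rbf_kernel(X_test, X_test, length_scale)
    L = np.linalg.cholesky(K)
    # alpha = K^{-1} y via two triangular solves
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - (v ** 2).sum(axis=0)
    return mu, var
```

Given a handful of scalar-field samples, `gp_predict` returns a posterior mean plus a variance that grows away from the data, which is exactly the uncertainty signal an informative planner exploits.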
1609.07560
2964345570
A big challenge in environmental monitoring is the spatiotemporal variation of the phenomena to be observed. To enable persistent sensing and estimation in such a setting, it is beneficial to have a time-varying underlying environmental model. Here we present a planning and learning method that enables an autonomous marine vehicle to perform persistent ocean monitoring tasks by learning and refining an environmental model. To alleviate the computational bottleneck caused by the large-scale accumulated data, we propose a framework that iterates between a planning component aimed at collecting the most information-rich data, and a sparse Gaussian Process learning component where the environmental model and hyperparameters are learned online by taking advantage of only a subset of data that provides the greatest contribution. Our simulations with ground-truth ocean data show that the proposed method is both accurate and efficient.
Still, there are challenges: the first lies in learning the model from the most useful sensing inputs, i.e., we wish to seek the samples that best describe the environment. Navigating the robot to obtain such samples is called informative planning @cite_1 . In this work, we utilize the mutual information between the visited locations and the remainder of the space to characterize the amount of information (information gain) collected.
{ "cite_N": [ "@cite_1" ], "mid": [ "2032239956" ], "abstract": [ "We introduce a graph-based informative path planning algorithm for a mobile robot which explicitly handles time. The objective function must be submodular in the samples taken by the robot, and the samples obtained are allowed to depend on the time at which the robot visits each location. Using a submodular objective function allows our algorithm to handle problems with diminishing returns, e.g. the case when taking a sample provides less utility when other nearby points have already been sampled. We give a formal description of this framework wherein an objective function that maps the path of the robot to the set of samples taken is defined. We also show how this framework can handle the case in which the robot takes samples along the edges of the graph. A proof of the approximation guarantee for the algorithm is given. Finally, quantitative results are shown for three problems: one simple example with a known Gaussian process model, one simulated example for an underwater robot planning problem using data from a well-known ocean modeling system, and one field experiment using an autonomous surface vehicle (ASV) measuring wireless signal strength on a lake." ] }
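The mutual-information criterion described in the record above can be sketched as follows: for a GP over a discrete set of candidate sites, the information gained about the unvisited remainder from sensing a set of locations is an entropy reduction computed from log-determinants. `greedy_waypoints` is a hypothetical helper showing greedy selection under this criterion, not the actual planner of the cited works; the kernel, noise jitter, and greedy loop are illustrative simplifications.

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """Squared-exponential correlation between row-vector point sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def mutual_information(K, sensed, noise=1e-2):
    """I(y_sensed ; f_rest): entropy reduction of the unvisited sites."""
    rest = [i for i in range(len(K)) if i not in sensed]
    Ka = K[np.ix_(sensed, sensed)] + noise * np.eye(len(sensed))
    Kr = K[np.ix_(rest, rest)]
    Krs = K[np.ix_(rest, sensed)]
    # posterior covariance of the rest after observing the sensed set
    post = Kr - Krs @ np.linalg.solve(Ka, Krs.T)
    _, prior_ld = np.linalg.slogdet(Kr + noise * np.eye(len(rest)))
    _, post_ld = np.linalg.slogdet(post + noise * np.eye(len(rest)))
    return 0.5 * (prior_ld - post_ld)

def greedy_waypoints(X, k, ls=1.0):
    """Greedily pick k sites with maximal marginal mutual information."""
    K = rbf(X, X, ls)
    chosen = []
    for _ in range(k):
        candidates = [i for i in range(len(X)) if i not in chosen]
        chosen.append(max(candidates,
                          key=lambda i: mutual_information(K, chosen + [i])))
    return chosen
```

Note the behavior this objective encodes: sensing two well-separated sites informs more of the field than sensing two adjacent, redundant ones, which is what makes greedy selection spread waypoints out.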
1609.07560
2964345570
A big challenge in environmental monitoring is the spatiotemporal variation of the phenomena to be observed. To enable persistent sensing and estimation in such a setting, it is beneficial to have a time-varying underlying environmental model. Here we present a planning and learning method that enables an autonomous marine vehicle to perform persistent ocean monitoring tasks by learning and refining an environmental model. To alleviate the computational bottleneck caused by the large-scale accumulated data, we propose a framework that iterates between a planning component aimed at collecting the most information-rich data, and a sparse Gaussian Process learning component where the environmental model and hyperparameters are learned online by taking advantage of only a subset of data that provides the greatest contribution. Our simulations with ground-truth ocean data show that the proposed method is both accurate and efficient.
Planning and environment monitoring are two large and well-studied topics. Here we briefly review the work related to informative planning and model prediction with sparse GPs. Representative informative planning approaches include, for example, recursive-greedy algorithms @cite_7 @cite_12 in which informativeness is modeled with submodular functions and a sequential-allocation mechanism is designed to obtain subsequent waypoints. This recursive-greedy framework was later extended to incorporate obstacle avoidance @cite_23 and diminishing returns @cite_1 . In addition, a differential-entropy-based framework @cite_20 @cite_0 was proposed in which a batch of waypoints can be obtained through dynamic programming. We recently proposed a similar informative planning method based on the dynamic-programming structure to compute informative waypoints @cite_10 . That method is further extended here into an adaptive path-planning component by incorporating online learning and re-planning mechanisms. There are also many methods that optimize over complex deterministic and static information (e.g., see @cite_24 @cite_13 ).
{ "cite_N": [ "@cite_13", "@cite_7", "@cite_1", "@cite_0", "@cite_24", "@cite_23", "@cite_10", "@cite_12", "@cite_20" ], "mid": [ "2053460888", "1555599194", "2032239956", "2962707519", "2017117236", "2066291790", "2508520405", "", "" ], "abstract": [ "We propose a novel non-linear extension to the Orienteering Problem (OP), called the Correlated Orienteering Problem (COP). We use COP to plan informative tours (cyclic paths) for persistent monitoring of an environment with spatial correlations, where the tours are constrained to a fixed length or time budget. The main feature of COP is a quadratic utility function that captures spatial correlations among points of interest that are close to each other. COP may be solved using mixed integer quadratic programming (MIQP) that can plan multiple disjoint tours that maximize the quadratic utility function. We perform extensive characterization of our method to verify its correctness, as well as its applicability to the estimation of a realistic, time-varying, and spatially correlated scalar field.", "In many sensing applications, including environmental monitoring, measurement systems must cover a large space with only limited sensing resources. One approach to achieve required sensing coverage is to use robots to convey sensors within this space. Planning the motion of these robots - coordinating their paths in order to maximize the amount of information collected while placing bounds on their resources (e.g., path length or energy capacity) - is aNP-hard problem. In this paper, we present an efficient path planning algorithm that coordinates multiple robots, each having a resource constraint, to maximize the \"informativeness\" of their visited locations. In particular, we use a Gaussian Process to model the underlying phenomenon, and use the mutual information between the visited locations and remainder of the space to characterize the amount of information collected. 
We provide strong theoretical approximation guarantees for our algorithm by exploiting the submodularity property of mutual information. In addition, we improve the efficiency of our approach by extending the algorithm using branch and bound and a region-based decomposition of the space. We provide an extensive empirical analysis of our algorithm, comparing with existing heuristics on datasets from several real world sensing applications.", "We introduce a graph-based informative path planning algorithm for a mobile robot which explicitly handles time. The objective function must be submodular in the samples taken by the robot, and the samples obtained are allowed to depend on the time at which the robot visits each location. Using a submodular objective function allows our algorithm to handle problems with diminishing returns, e.g. the case when taking a sample provides less utility when other nearby points have already been sampled. We give a formal description of this framework wherein an objective function that maps the path of the robot to the set of samples taken is defined. We also show how this framework can handle the case in which the robot takes samples along the edges of the graph. A proof of the approximation guarantee for the algorithm is given. Finally, quantitative results are shown for three problems: one simple example with a known Gaussian process model, one simulated example for an underwater robot planning problem using data from a well-known ocean modeling system, and one field experiment using an autonomous surface vehicle (ASV) measuring wireless signal strength on a lake.", "A key problem of robotic environmental sensing and monitoring is that of active sensing: How can a team of robots plan the most informative observation paths to minimize the uncertainty in modeling and predicting an environmental phenomenon? 
This paper presents two principled approaches to efficient information-theoretic path planning based on entropy and mutual information criteria for in situ active sensing of an important broad class of widely-occurring environmental phenomena called anisotropic fields. Our proposed algorithms are novel in addressing a trade-off between active sensing performance and time efficiency. An important practical consequence is that our algorithms can exploit the spatial correlation structure of Gaussian process-based anisotropic fields to improve time efficiency while preserving near-optimal active sensing performance. We analyze the time complexity of our algorithms and prove analytically that they scale better than state-of-the-art algorithms with increasing planning horizon length. We provide theoretical guarantees on the active sensing performance of our algorithms for a class of exploration tasks called transect sampling, which, in particular, can be improved with longer planning time and or lower spatial correlation along the transect. Empirical evaluation on real-world anisotropic field data shows that our algorithms can perform better or at least as well as the state-of-the-art algorithms while often incurring a few orders of magnitude less computational time, even when the field conditions are less favorable.", "We present an online algorithm for a robot to shape its path to a locally optimal configuration for collecting information in an unknown dynamic environment. As the robot travels along its path, it identifies both where the environment is changing, and how fast it is changing. The algorithm then morphs the robot's path online to concentrate on the dynamic areas in the environment in proportion to their rate of change. A Lyapunov-like stability proof is used to show that, under our proposed path shaping algorithm, the path converges to a locally optimal configuration according to a Voronoi-based coverage criterion. 
The path shaping algorithm is then combined with a previously introduced speed controller to produce guaranteed persistent monitoring trajectories for a robot in an unknown dynamic environment. Simulation and experimental results with a quadrotor robot support the proposed approach.", "We present a path planning method for autonomous underwater vehicles in order to maximize mutual information. We adapt a method previously used for surface vehicles, and extend it to deal with the unique characteristics of underwater vehicles. We show how to generate near-optimal paths while ensuring that the vehicle stays out of high-traffic areas during predesignated time intervals. In our objective function we explicitly account for the fact that underwater vehicles typically take measurements while moving, and that they do not have the ability to communicate until they resurface. We present field results from ocean trials on planning paths for a specific AUV, an underwater glider.", "We propose an efficient path planning method for an autonomous underwater vehicle (AUV) used for the long-range and long-term ocean monitoring. We consider both the spatio-temporal variations of ocean phenomena and the disturbances caused by ocean currents, and design an approach integrating the information-theoretic and decision-theoretic planning frameworks. Specifically, the information-theoretic component employs a hierarchical structure and plans the most informative observation way-points for reducing the uncertainty of ocean phenomena modeling and prediction; whereas the decision-theoretic component plans local motions by taking into account the non-stationary ocean current disturbances. We validated the method through simulations with real ocean data.", "", "" ] }
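The dynamic-programming flavor of waypoint computation surveyed in the record above can be sketched under a strong simplifying assumption: the per-site reward is treated as additive, whereas real information-gain objectives are submodular rather than additive. `dp_waypoints` is a hypothetical toy illustrating the Viterbi-style recursion, not the method of the cited works.

```python
import numpy as np

def dp_waypoints(reward, adjacency, T, start):
    """T-step waypoint sequence maximizing summed per-site rewards,
    moving only along the adjacency graph (additive-reward simplification)."""
    n = len(reward)
    value = np.full((T, n), -np.inf)   # value[t, j]: best total reward ending at j
    parent = np.zeros((T, n), dtype=int)
    value[0, start] = reward[start]
    for t in range(1, T):
        for j in range(n):
            for i in range(n):
                if adjacency[i][j] and value[t - 1, i] + reward[j] > value[t, j]:
                    value[t, j] = value[t - 1, i] + reward[j]
                    parent[t, j] = i
    # backtrack from the best terminal state
    j = int(np.argmax(value[T - 1]))
    path = [j]
    for t in range(T - 1, 0, -1):
        j = int(parent[t, j])
        path.append(j)
    return path[::-1]
```

On a small chain graph this recovers the intuitive behavior: the planner detours through low-reward sites only when a high-reward site lies beyond them.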
1609.07560
2964345570
A big challenge in environmental monitoring is the spatiotemporal variation of the phenomena to be observed. To enable persistent sensing and estimation in such a setting, it is beneficial to have a time-varying underlying environmental model. Here we present a planning and learning method that enables an autonomous marine vehicle to perform persistent ocean monitoring tasks by learning and refining an environmental model. To alleviate the computational bottleneck caused by the large-scale accumulated data, we propose a framework that iterates between a planning component aimed at collecting the most information-rich data, and a sparse Gaussian Process learning component where the environmental model and hyperparameters are learned online by taking advantage of only a subset of data that provides the greatest contribution. Our simulations with ground-truth ocean data show that the proposed method is both accurate and efficient.
A critical problem that one must consider for persistent (long-term, even life-long) tasks is the large-scale accumulated data. Although abundant data might yield the most accurate model, in practice a huge amount of data is very likely to exceed the capacity of the onboard computational hardware. Methods for reducing the computational burden of GPs have been proposed. For example, GP regression can be done in real time by estimating the problem locally with local data @cite_8 . Another representative framework is a sparse representation of the GP model @cite_11 , which combines a Bayesian online algorithm with a sequential construction of the most relevant subset of the data. This method allows the model to be refined recursively as the data streams in. The framework has been further extended to many application domains such as visual tracking @cite_4 .
{ "cite_N": [ "@cite_11", "@cite_4", "@cite_8" ], "mid": [ "", "2143013621", "2134122536" ], "abstract": [ "", "We present a new Gaussian process (GP) inference algorithm, called online sparse matrix Gaussian processes (OSMGP), and demonstrate its merits by applying it to the problems of head pose estimation and visual tracking. The OSMGP is based upon the observation that for kernels with local support, the Gram matrix is typically sparse. Maintaining and updating the sparse Cholesky factor of the Gram matrix can be done efficiently using Givens rotations. This leads to an exact, online algorithm whose update time scales linearly with the size of the Gram matrix. Further, we provide a method for constant time operation of the OSMGP using matrix downdates. The downdates maintain the Cholesky factor at a constant size by removing certain rows and columns corresponding to discarded training examples. We demonstrate that, using these matrix downdates, online hyperparameter estimation can be included at cost linear in the number of total training examples. We describe a robust appearance-based head pose estimation system based upon the OSMGP. Numerous experiments and comparisons with existing methods using a large dataset system demonstrate the efficiency and accuracy of our system. Further, to showcase the applicability of OSMGP to a wide variety of problems, we also describe a regression-based visual tracking method. Experiments show that our OSMGP algorithm generalizes well using online learning.", "Learning in real-time applications, e.g., online approximation of the inverse dynamics model for model-based robot control, requires fast online regression techniques. Inspired by local learning, we propose a method to speed up standard Gaussian process regression (GPR) with local GP models (LGP). The training data is partitioned in local regions, for each an individual GP model is trained. 
The prediction for a query point is performed by weighted estimation using nearby local models. Unlike other GP approximations, such as mixtures of experts, we use a distance based measure for partitioning of the data and weighted prediction. The proposed method achieves online learning and prediction in real-time. Comparisons with other non-parametric regression methods show that LGP has higher accuracy than LWPR and close to the performance of standard GPR and ν-SVR." ] }
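A toy version of the subset-of-data idea behind sparse online GPs, discussed in the record above, can be sketched as follows. The redundancy score (each point's conditional variance given the other retained points) and the fixed capacity are illustrative choices in the spirit of the sparse online GP literature, not the actual Bayesian online updates of the cited framework.

```python
import numpy as np

def sq_exp(A, B, ls=1.0):
    """Squared-exponential correlation between row-vector point sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

class SubsetOfDataGP:
    """Streaming GP that caps memory by keeping at most `capacity` points,
    discarding the most redundant one whenever the budget is exceeded."""

    def __init__(self, capacity=20, ls=1.0, noise=1e-2):
        self.capacity, self.ls, self.noise = capacity, ls, noise
        self.X = np.empty((0, 1))
        self.y = np.empty(0)

    def add(self, x, y):
        self.X = np.vstack([self.X, x])
        self.y = np.append(self.y, y)
        if len(self.y) > self.capacity:
            K = sq_exp(self.X, self.X, self.ls) + self.noise * np.eye(len(self.y))
            # conditional variance of point i given the others is 1 / [K^-1]_ii;
            # the point with the smallest value is best explained by the rest
            novelty = 1.0 / np.diag(np.linalg.inv(K))
            drop = int(np.argmin(novelty))
            self.X = np.delete(self.X, drop, axis=0)
            self.y = np.delete(self.y, drop)

    def predict(self, X_query):
        K = sq_exp(self.X, self.X, self.ls) + self.noise * np.eye(len(self.y))
        K_s = sq_exp(self.X, X_query, self.ls)
        return K_s.T @ np.linalg.solve(K, self.y)
```

Because the dropped point is always the most redundant one, the retained subset stays roughly spread over the visited region, so predictions remain reasonable while memory and per-update cost stay bounded.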
1609.07826
2527181044
This paper presents a new multi-view RGB-D dataset of nine kitchen scenes, each containing several objects in realistic cluttered environments including a subset of objects from the BigBird dataset. The viewpoints of the scenes are densely sampled and objects in the scenes are annotated with bounding boxes and in the 3D point cloud. Also, an approach for detection and recognition is presented, which comprises two parts: i) a new multi-view 3D proposal generation method and ii) the development of several recognition baselines using AlexNet to score our proposals, which is trained either on crops of the dataset or on synthetically composited training images. Finally, we compare the performance of the object proposals and a detection baseline to the Washington RGB-D Scenes (WRGB-D) dataset and demonstrate that our Kitchen scenes dataset is more challenging for object detection and recognition. The dataset is available at: this http URL
The problem of object detection has been studied extensively in a variety of domains using image-only data or RGB-D images. To position our work, we review a few representative approaches that have been applied in similar settings. Traditional methods for object detection in cluttered scenes follow sliding-window-based pipelines, for which efficient methods for feature computation and classifier evaluation were developed, such as DPM @cite_14 . Examples of using these models in table-top settings similar to ours include @cite_27 @cite_30 . Another commonly and effectively used strategy for object detection exploits local features and correspondences between a model reference image and the scene. Object detection and recognition systems that deal with textured household objects, such as @cite_31 and @cite_0 , take advantage of the discriminative nature of local descriptors. A disadvantage of these local descriptors is that they usually perform poorly on non-textured objects, which led to alternative representations that capture an object's shape properties, such as the Shape Context @cite_10 .
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_0", "@cite_27", "@cite_31", "@cite_10" ], "mid": [ "", "2168356304", "2084635560", "2156222070", "2058761328", "2110379134" ], "abstract": [ "", "We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.", "We present an object recognition system which leverages the additional sensing and calibration information available in a robotics setting together with large amounts of training data to build high fidelity object models for a dataset of textured household objects. We then demonstrate how these models can be used for highly accurate detection and pose estimation in an end-to-end robotic perception system incorporating simultaneous segmentation, object classification, and pose fitting. The system can handle occlusions, illumination changes, multiple objects, and multiple instances of the same object. The system placed first in the ICRA 2011 Solutions in Perception instance recognition challenge. 
We believe the presented paradigm of building rich 3D models at training time and including depth information at test time is a promising direction for practical robotic perception systems.", "Over the last decade, the availability of public image repositories and recognition benchmarks has enabled rapid progress in visual object category and instance detection. Today we are witnessing the birth of a new generation of sensing technologies capable of providing high quality synchronized videos of both color and depth, the RGB-D (Kinect-style) camera. With its advanced sensing capabilities and the potential for mass adoption, this technology represents an opportunity to dramatically increase robotic object recognition, manipulation, navigation, and interaction capabilities. In this paper, we introduce a large-scale, hierarchical multi-view object dataset collected using an RGB-D camera. The dataset contains 300 objects organized into 51 categories and has been made publicly available to the research community so as to enable rapid progress based on this promising technology. This paper describes the dataset collection procedure and introduces techniques for RGB-D based object recognition and detection, demonstrating that combining color and depth information substantially improves quality of results.", "We present MOPED, a framework for Multiple Object Pose Estimation and Detection that seamlessly integrates single-image and multi-image object recognition and pose estimation in one optimized, robust, and scalable framework. We address two main challenges in computer vision for robotics: robust performance in complex scenes, and low latency for real-time operation. We achieve robust performance with Iterative Clustering Estimation (ICE), a novel algorithm that iteratively combines feature clustering with robust pose estimation. Feature clustering quickly partitions the scene and produces object hypotheses. 
The hypotheses are used to further refine the feature clusters, and the two steps iterate until convergence. ICE is easy to parallelize, and easily integrates single- and multi-camera object recognition and pose estimation. We also introduce a novel object hypothesis scoring function based on M-estimator theory, and a novel pose clustering algorithm that robustly handles recognition outliers. We achieve scalability and low latency with an improved feature matching algorithm for large databases, a GPU CPU hybrid architecture that exploits parallelism at all levels, and an optimized resource scheduler. We provide extensive experimental results demonstrating state-of-the-art performance in terms of recognition, scalability, and latency in real-world robotic applications.", "We develop an object detection method combining top-down recognition with bottom-up image segmentation. There are two main steps in this method: a hypothesis generation step and a verification step. In the top-down hypothesis generation step, we design an improved Shape Context feature, which is more robust to object deformation and background clutter. The improved Shape Context is used to generate a set of hypotheses of object locations and figure-ground masks, which have high recall and low precision rate. In the verification step, we first compute a set of feasible segmentations that are consistent with top-down object hypotheses, then we propose a False Positive Pruning (FPP) procedure to prune out false positives. We exploit the fact that false positive regions typically do not align with any feasible image segmentation. Experiments show that this simple framework is capable of achieving both high recall and high precision with only a few positive training examples and that this method can be generalized to many object classes." ] }
1609.07826
2527181044
This paper presents a new multi-view RGB-D dataset of nine kitchen scenes, each containing several objects in realistic cluttered environments including a subset of objects from the BigBird dataset. The viewpoints of the scenes are densely sampled and objects in the scenes are annotated with bounding boxes and in the 3D point cloud. Also, an approach for detection and recognition is presented, which is comprised of two parts: i) a new multi-view 3D proposal generation method and ii) the development of several recognition baselines using AlexNet to score our proposals, which is trained either on crops of the dataset or on synthetically composited training images. Finally, we compare the performance of the object proposals and a detection baseline to the Washington RGB-D Scenes (WRGB-D) dataset and demonstrate that our Kitchen scenes dataset is more challenging for object detection and recognition. The dataset is available at: this http URL
In an attempt to reduce the search space of traditional sliding window techniques, several recent works have concentrated on generating category-independent object proposals. Some representative works include Edge Boxes @cite_26 , BING @cite_17 , and Selective Search @cite_25 . In RGB-D settings, @cite_5 uses object boundaries to guide the detection of fixation points that denote the presence of objects, while @cite_22 performs object discovery by ranking 3D mesh segments based on objectness scores. Our 3D multi-view approach eliminates large planar surfaces in the scenes to facilitate the segmentation of small objects. Recently, proposal generation methods based on convolutional neural networks (CNNs) have been introduced, such as MultiBox @cite_4 , DeepMask @cite_6 , and the Region Proposal Network (RPN) of @cite_19 . These methods perform very well in the settings they were trained for, but they require re-training in order to generalize to new settings.
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_22", "@cite_6", "@cite_19", "@cite_5", "@cite_25", "@cite_17" ], "mid": [ "7746136", "", "2049776679", "809122546", "2613718673", "1986342589", "2088049833", "2010181071" ], "abstract": [ "The use of object proposals is an effective recent approach for increasing the computational efficiency of object detection. We propose a novel method for generating object bounding box proposals using edges. Edges provide a sparse yet informative representation of an image. Our main observation is that the number of contours that are wholly contained in a bounding box is indicative of the likelihood of the box containing an object. We propose a simple box objectness score that measures the number of edges that exist in the box minus those that are members of contours that overlap the box’s boundary. Using efficient data structures, millions of candidate boxes can be evaluated in a fraction of a second, returning a ranked set of a few thousand top-scoring proposals. Using standard metrics, we show results that are significantly more accurate than the current state-of-the-art while being faster to compute. In particular, given just 1000 proposals we achieve over 96% object recall at an overlap threshold of 0.5 and over 75% recall at the more challenging overlap of 0.7. Our approach runs in 0.25 seconds and we additionally demonstrate a near real-time variant with only minor loss in accuracy.", "", "We present a method for discovering object models from 3D meshes of indoor environments. Our algorithm first decomposes the scene into a set of candidate mesh segments and then ranks each segment according to its “objectness” - a quality that distinguishes objects from clutter. To do so, we propose five intrinsic shape measures: compactness, symmetry, smoothness, and local and global convexity.
We additionally propose a recurrence measure, codifying the intuition that frequently occurring geometries are more likely to correspond to complete objects. We evaluate our method in both supervised and unsupervised regimes on a dataset of 58 indoor scenes collected using an Open Source implementation of Kinect Fusion [1]. We show that our approach can reliably and efficiently distinguish objects from clutter, with Average Precision score of .92. We make our dataset available to the public.", "Recent object detection systems rely on two critical steps: (1) a set of object proposals is predicted as efficiently as possible, and (2) this set of candidate proposals is then passed to an object classifier. Such approaches have been shown they can be fast, while achieving the state of the art in detection performance. In this paper, we propose a new way to generate object proposals, introducing an approach based on a discriminative convolutional network. Our model is trained jointly with two objectives: given an image patch, the first part of the system outputs a class-agnostic segmentation mask, while the second part of the system outputs the likelihood of the patch being centered on a full object. At test time, the model is efficiently applied on the whole test image and generates a set of segmentation masks, each of them being assigned with a corresponding object likelihood score. We show that our model yields significant improvements over state-of-the-art object proposal algorithms. In particular, compared to previous approaches, our model obtains substantially higher object recall using fewer proposals. We also show that our model is able to generalize to unseen categories it has not seen during training. Unlike all previous approaches for generating object masks, we do not rely on edges, superpixels, or any other form of low-level segmentation.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. 
Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2% mAP) and 2012 (70.4% mAP) using 300 proposals per image. Code is available at https://github.com/ShaoqingRen/faster_rcnn.", "Segmenting “simple” objects using low-level visual cues is an important capability for a vision system to learn in an unsupervised manner. We define a “simple” object as a compact region enclosed by depth and/or contact boundary in the scene. We propose a segmentation process to extract all the “simple” objects that builds on the fixation-based segmentation framework [1] that segments a region given a point anywhere inside it. In this work, we augment that framework with a fixation strategy to automatically select points inside the “simple” objects and a post-segmentation process to select only the regions corresponding to the “simple” objects in the scene. A novel characteristic of our approach is the incorporation of border ownership, the knowledge about the object side of a boundary pixel.
We evaluate the process on a publicly available RGB-D dataset [2] and find that the proposed method successfully extracts 91.4% of all objects in the dataset.", "This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99% recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http://disi.unitn.it/~uijlings/SelectiveSearch.html ).", "Training a generic objectness measure to produce a small set of candidate object windows, has been shown to speed up the classical sliding window object detection paradigm. We observe that generic objects with well-defined closed boundary can be discriminated by looking at the norm of gradients, with a suitable resizing of their corresponding image windows into a small fixed size. Based on this observation and computational reasons, we propose to resize the window to 8 × 8 and use the norm of the gradients as a simple 64D feature to describe it, for explicitly training a generic objectness measure.
We further show how the binarized version of this feature, namely binarized normed gradients (BING), can be used for efficient objectness estimation, which requires only a few atomic operations (e.g. ADD, BITWISE SHIFT, etc.). Experiments on the challenging PASCAL VOC 2007 dataset show that our method efficiently (300fps on a single laptop CPU) generates a small set of category-independent, high quality object windows, yielding 96.2% object detection rate (DR) with 1,000 proposals. Increasing the numbers of proposals and color spaces for computing BING features, our performance can be further improved to 99.5% DR." ] }
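The proposal methods collected in this record are all judged by recall at an intersection-over-union (IoU) threshold (e.g. 0.5 or 0.7). For concreteness, a minimal plain-Python sketch of that metric; the function names and the (x1, y1, x2, y2) box convention are illustrative, not taken from any of the cited codebases:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # overlap rectangle; empty overlap clamps to zero area
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

def recall_at(proposals, gt_boxes, thresh=0.5):
    """Fraction of ground-truth boxes covered by at least one proposal."""
    hits = sum(1 for g in gt_boxes if any(iou(p, g) >= thresh for p in proposals))
    return hits / len(gt_boxes)
```

A proposal set "achieves 96% recall at IoU 0.5" when `recall_at(proposals, gt_boxes, 0.5)` is 0.96 averaged over the test images.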
1609.07826
2527181044
This paper presents a new multi-view RGB-D dataset of nine kitchen scenes, each containing several objects in realistic cluttered environments including a subset of objects from the BigBird dataset. The viewpoints of the scenes are densely sampled and objects in the scenes are annotated with bounding boxes and in the 3D point cloud. Also, an approach for detection and recognition is presented, which is comprised of two parts: i) a new multi-view 3D proposal generation method and ii) the development of several recognition baselines using AlexNet to score our proposals, which is trained either on crops of the dataset or on synthetically composited training images. Finally, we compare the performance of the object proposals and a detection baseline to the Washington RGB-D Scenes (WRGB-D) dataset and demonstrate that our Kitchen scenes dataset is more challenging for object detection and recognition. The dataset is available at: this http URL
Since the advent of deep learning methods, the choice of features for particular recognition tasks has been replaced by various alternatives for training or fine-tuning deep CNNs, or by the design of new architectures and optimization functions suited to various tasks. Early adopters of these techniques, such as R-CNN @cite_15 for object detection, use object proposal methods @cite_25 to find promising bounding boxes, extract features using the network of @cite_28 , and train SVM classifiers to assign each bounding box to a category. Recently, methods such as YOLO @cite_23 , SSD @cite_32 , and Faster R-CNN @cite_19 drop unsupervised proposal generation and train their networks end-to-end to predict bounding boxes in addition to a classification score for each object category. Although these methods perform remarkably well on challenging object detection benchmarks, they require large amounts of bounding-box-labeled training data.
{ "cite_N": [ "@cite_28", "@cite_32", "@cite_19", "@cite_23", "@cite_15", "@cite_25" ], "mid": [ "", "2193145675", "2613718673", "", "2102605133", "2088049833" ], "abstract": [ "", "We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For 300×300 input, SSD achieves 74.3% mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for 512×512 input, SSD achieves 76.9% mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https://github.com/weiliu89/caffe/tree/ssd.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck.
In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2% mAP) and 2012 (70.4% mAP) using 300 proposals per image. Code is available at https://github.com/ShaoqingRen/faster_rcnn.", "", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features.
Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.", "This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99% recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http://disi.unitn.it/~uijlings/SelectiveSearch.html )." ] }
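Whether proposals come from Selective Search or from an end-to-end network, the detection pipelines surveyed in this record all finish with greedy non-maximum suppression to remove duplicate boxes. A minimal sketch of that step; the threshold value and function names are illustrative, not any cited paper's exact implementation:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop every remaining box that overlaps it beyond iou_thresh, repeat.
    Returns the indices of the kept boxes in score order."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```

For example, two heavily overlapping boxes collapse to the higher-scoring one, while a distant box survives untouched.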
1609.07843
2525332836
Recent neural network sequence models with softmax classifiers have achieved their best language modeling performance only with very large hidden states and large vocabularies. Even then they struggle to predict rare or unseen words even if the context makes the prediction unambiguous. We introduce the pointer sentinel mixture architecture for neural sequence models which has the ability to either reproduce a word from the recent context or produce a word from a standard softmax classifier. Our pointer sentinel-LSTM model achieves state of the art language modeling performance on the Penn Treebank (70.9 perplexity) while using far fewer parameters than a standard softmax LSTM. In order to evaluate how well language models can exploit longer contexts and deal with more realistic vocabularies and larger corpora we also introduce the freely available WikiText corpus.
Beyond n-grams, neural sequence models such as recurrent neural networks have been shown to achieve state-of-the-art results @cite_7 . A variety of RNN regularization methods have been explored, including a number of dropout variations @cite_9 @cite_14 that prevent overfitting of complex LSTM language models. Other work has improved language modeling performance by modifying the RNN architecture to better handle increased recurrence depth @cite_19 .
{ "cite_N": [ "@cite_19", "@cite_9", "@cite_14", "@cite_7" ], "mid": [ "2473934411", "1591801644", "", "1999965501" ], "abstract": [ "Many sequential processing tasks require complex nonlinear transition functions from one step to the next. However, recurrent neural networks with 'deep' transition functions remain difficult to train, even when using Long Short-Term Memory (LSTM) networks. We introduce a novel theoretical analysis of recurrent networks based on Gersgorin's circle theorem that illuminates several modeling and optimization issues and improves our understanding of the LSTM cell. Based on this analysis we propose Recurrent Highway Networks, which extend the LSTM architecture to allow step-to-step transition depths larger than one. Several language modeling experiments demonstrate that the proposed architecture results in powerful and efficient models. On the Penn Treebank corpus, solely increasing the transition depth from 1 to 10 improves word-level perplexity from 90.6 to 65.4 using the same number of parameters. On the larger Wikipedia datasets for character prediction (text8 and enwik8), RHNs outperform all previous results and achieve an entropy of 1.27 bits per character.", "We present a simple regularization technique for Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units. Dropout, the most successful technique for regularizing neural networks, does not work well with RNNs and LSTMs. In this paper, we show how to correctly apply dropout to LSTMs, and show that it substantially reduces overfitting on a variety of tasks. These tasks include language modeling, speech recognition, image caption generation, and machine translation.", "", "Recurrent neural network language models (RNNLMs) have recently demonstrated state-of-the-art performance across a variety of tasks. In this paper, we improve their performance by providing a contextual real-valued input vector in association with each word. 
This vector is used to convey contextual information about the sentence being modeled. By performing Latent Dirichlet Allocation using a block of preceding text, we achieve a topic-conditioned RNNLM. This approach has the key advantage of avoiding the data fragmentation associated with building multiple topic models on different data subsets. We report perplexity results on the Penn Treebank data, where we achieve a new state-of-the-art. We further apply the model to the Wall Street Journal speech recognition task, where we observe improvements in word-error-rate." ] }
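The perplexity figures quoted throughout this record (e.g. 70.9 on the Penn Treebank, or 90.6 → 65.4 for Recurrent Highway Networks) are the exponential of the average per-token negative log-likelihood. A minimal sketch of the metric, with illustrative numbers:

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp of the average negative log-likelihood per token.
    `token_log_probs` holds the natural-log probability the model assigned
    to each ground-truth token of the evaluation corpus."""
    avg_nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(avg_nll)

# Sanity check: a model that is uniform over a 4-word vocabulary
# has perplexity exactly 4, the effective branching factor.
uniform_ppl = perplexity([math.log(0.25)] * 8)
```

Lower is better: a perplexity of 70.9 means the model is, on average, as uncertain as a uniform choice among about 71 words.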
1609.07843
2525332836
Recent neural network sequence models with softmax classifiers have achieved their best language modeling performance only with very large hidden states and large vocabularies. Even then they struggle to predict rare or unseen words even if the context makes the prediction unambiguous. We introduce the pointer sentinel mixture architecture for neural sequence models which has the ability to either reproduce a word from the recent context or produce a word from a standard softmax classifier. Our pointer sentinel-LSTM model achieves state of the art language modeling performance on the Penn Treebank (70.9 perplexity) while using far fewer parameters than a standard softmax LSTM. In order to evaluate how well language models can exploit longer contexts and deal with more realistic vocabularies and larger corpora we also introduce the freely available WikiText corpus.
Extending this concept further, the latent predictor network @cite_20 generates an output sequence conditioned on an arbitrary number of base models where each base model may have differing granularity. In their task of code generation, the output could be produced one character at a time using a standard @math or instead copy entire words from referenced text fields using a pointer network. As opposed to , all states which produce the same output are merged by summing their probabilities. Their model however requires a more complex training process involving the forward-backward algorithm for Semi-Markov models to prevent an exponential explosion in potential paths.
{ "cite_N": [ "@cite_20" ], "mid": [ "2304240348" ], "abstract": [ "Many language generation tasks require the production of text conditioned on both structured and unstructured inputs. We present a novel neural network architecture which generates an output sequence conditioned on an arbitrary number of input functions. Crucially, our approach allows both the choice of conditioning context and the granularity of generation, for example characters or tokens, to be marginalised, thus permitting scalable and effective training. Using this framework, we address the problem of generating programming code from a mixed natural language and structured specification. We create two new data sets for this paradigm derived from the collectible trading card games Magic the Gathering and Hearthstone. On these, and a third preexisting corpus, we demonstrate that marginalising multiple predictors allows our model to outperform strong benchmarks." ] }
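The pointer-sentinel mechanism this record revolves around can be sketched in a few lines: one extra "sentinel" score is appended to the pointer attention over the recent context; after a joint softmax, the sentinel's probability mass g gates the ordinary vocabulary softmax, and the remaining mass is copied onto context words. The names and shapes below are illustrative, not the paper's exact implementation:

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def pointer_sentinel_mixture(vocab_logits, ptr_scores, context_ids):
    """vocab_logits: one logit per vocabulary word.
    ptr_scores: one attention score per recent-context position,
    plus a final sentinel score. context_ids: word id at each position."""
    p_vocab = softmax(vocab_logits)
    attn = softmax(ptr_scores)        # joint softmax over positions + sentinel
    g = attn[-1]                      # gate: how much to trust the softmax
    p = [g * pv for pv in p_vocab]
    for pos, word in enumerate(context_ids):
        p[word] += attn[pos]          # pointer mass for recently seen words
    return p
```

With uniform scores over a 5-word vocabulary and two context positions, the two context words receive extra mass from the pointer while the mixture still sums to one; this is how rare words that appeared in the recent context can beat the softmax's prediction.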
1609.07228
2526367648
Approximate nearest neighbor (ANN) search is a fundamental problem in many areas of data mining, machine learning and computer vision. The performance of traditional hierarchical structure (tree) based methods decreases as the dimensionality of data grows, while hashing based methods usually lack efficiency in practice. Recently, the graph based methods have drawn considerable attention. The main idea is that a neighbor of a neighbor is also likely to be a neighbor, which we refer to as NN-expansion. These methods construct a @math -nearest neighbor ( @math NN) graph offline. At the online search stage, these methods find candidate neighbors of a query point in some way (e.g., random selection), and then check the neighbors of these candidate neighbors for closer ones iteratively. Despite some promising results, there are mainly two problems with these approaches: 1) These approaches tend to converge to local optima. 2) Constructing a @math NN graph is time consuming. We find that these two problems can be nicely solved when we provide a good initialization for NN-expansion. In this paper, we propose EFANNA, an extremely fast approximate nearest neighbor search algorithm based on @math NN graphs. EFANNA nicely combines the advantages of hierarchical structure based methods and nearest-neighbor-graph based methods. Extensive experiments have shown that EFANNA outperforms the state-of-the-art algorithms both on approximate nearest neighbor search and approximate nearest neighbor graph construction. To the best of our knowledge, EFANNA is the fastest algorithm so far both on approximate nearest neighbor graph construction and approximate nearest neighbor search. A library EFANNA based on this research is released on Github.
Nearest neighbor search @cite_33 has been a hot topic over the last decades. Due to the intrinsic difficulty of exact nearest neighbor search, approximate nearest neighbor (ANN) search algorithms @cite_24 @cite_35 have been widely studied: researchers are willing to sacrifice a little search accuracy to reduce the time cost as much as possible.
{ "cite_N": [ "@cite_24", "@cite_35", "@cite_33" ], "mid": [ "2427881153", "2147717514", "2097921974" ], "abstract": [ "Consider a set S of n data points in real d-dimensional space, R^d, where distances are measured using any Minkowski metric. In nearest neighbor searching, we preprocess S into a data structure, so that given any query point q ∈ R^d, the closest point of S to q can be reported quickly. Given any positive real e, a data point p is a (1+e)-approximate nearest neighbor of q if its distance from q is within a factor of (1+e) of the distance to the true nearest neighbor. We show that it is possible to preprocess a set of n points in R^d in O(dn log n) time and O(dn) space, so that given a query point q ∈ R^d, and e > 0, a (1+e)-approximate nearest neighbor of q can be computed in O(c_{d,e} log n) time, where c_{d,e} ≤ d⌈1 + 6d/e⌉^d is a factor depending only on dimension and e. In general, we show that given an integer k ≥ 1, (1+e)-approximations to the k nearest neighbors of q can be computed in additional O(kd log n) time.", "We present two algorithms for the approximate nearest neighbor problem in high-dimensional spaces. For data sets of size n living in R^d, the algorithms require space that is only polynomial in n and d, while achieving query times that are sub-linear in n and polynomial in d. We also show applications to other high-dimensional geometric problems, such as the approximate minimum spanning tree. The article is based on the material from the authors' STOC'98 and FOCS'01 papers. It unifies, generalizes and simplifies the results from those papers.", "We consider the computational problem of finding nearest neighbors in general metric spaces. Of particular interest are spaces that may not be conveniently embedded or approximated in Euclidian space, or where the dimensionality of a Euclidian representation is very high.
Also relevant are high-dimensional Euclidian settings in which the distribution of data is in some sense of lower dimension and embedded in the space. The vp-tree (vantage point tree) is introduced in several forms, together with associated algorithms, as an improved method for these difficult search problems. Tree construction executes in O(n log(n)) time, and search is, under certain circumstances and in the limit, O(log(n)) expected time. The theoretical basis for this approach is developed and the results of several experiments are reported. In Euclidian cases, kd-tree performance is compared." ] }
1609.07228
2526367648
Approximate nearest neighbor (ANN) search is a fundamental problem in many areas of data mining, machine learning and computer vision. The performance of traditional hierarchical structure (tree) based methods decreases as the dimensionality of data grows, while hashing based methods usually lack efficiency in practice. Recently, the graph based methods have drawn considerable attention. The main idea is that a neighbor of a neighbor is also likely to be a neighbor, which we refer as NN-expansion. These methods construct a @math -nearest neighbor ( @math NN) graph offline. And at online search stage, these methods find candidate neighbors of a query point in some way (e.g., random selection), and then check the neighbors of these candidate neighbors for closer ones iteratively. Despite some promising results, there are mainly two problems with these approaches: 1) These approaches tend to converge to local optima. 2) Constructing a @math NN graph is time consuming. We find that these two problems can be nicely solved when we provide a good initialization for NN-expansion. In this paper, we propose EFANNA, an extremely fast approximate nearest neighbor search algorithm based on @math NN Graph. EFANNA nicely combines the advantages of hierarchical structure based methods and nearest-neighbor-graph based methods. Extensive experiments have shown that EFANNA outperforms the state-of-the-art algorithms both on approximate nearest neighbor search and approximate nearest neighbor graph construction. To the best of our knowledge, EFANNA is the fastest algorithm so far both on approximate nearest neighbor graph construction and approximate nearest neighbor search. A library EFANNA based on this research is released on Github.
Hierarchical index based (tree-based) algorithms, such as the KD-tree @cite_21 , gained early success on approximate nearest neighbor search problems. However, they prove inefficient when the dimensionality of the data grows high. Many new hierarchical-structure-based methods @cite_37 @cite_22 @cite_31 have been presented to address this limitation. The randomized KD-tree @cite_37 and the k-means tree @cite_31 have been absorbed into the well-known open-source library FLANN @cite_27 , which has gained wide popularity.
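To make the hierarchical idea concrete, here is a hedged, pure-Python sketch of a randomized KD-tree in the spirit of the methods above (toy data; all names are illustrative, and real libraries such as FLANN add backtracking, priority queues and multiple trees):

```python
import random

def build_tree(points, idxs):
    """Randomized KD-tree: split on a random dimension at each node,
    as in multiple-randomized-tree schemes (a simplified sketch)."""
    if len(idxs) <= 1:
        return idxs                      # leaf: a list of point indices
    dim = random.randrange(len(points[0]))
    idxs = sorted(idxs, key=lambda i: points[i][dim])
    mid = len(idxs) // 2
    split = points[idxs[mid]][dim]
    return (dim, split, build_tree(points, idxs[:mid]),
            build_tree(points, idxs[mid:]))

def greedy_search(tree, points, q):
    """Descend greedily to one leaf, then scan it. Fast but only
    approximate -- which is why real systems back-track or query
    several randomized trees at once."""
    node = tree
    while isinstance(node, tuple):
        dim, split, left, right = node
        node = left if q[dim] < split else right
    return min(node, key=lambda i: sum((points[i][d] - q[d]) ** 2
                                       for d in range(len(q))))

random.seed(0)
pts = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]]
tree = build_tree(pts, list(range(len(pts))))
print(greedy_search(tree, pts, [5.1, 5.1]))  # -> 3
```

The single greedy descent is exactly what degrades in high dimensions: the query's true neighbor increasingly ends up on the other side of some split, forcing the backtracking that makes these trees expensive.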
{ "cite_N": [ "@cite_37", "@cite_22", "@cite_21", "@cite_27", "@cite_31" ], "mid": [ "2099253838", "1554174647", "", "1627400044", "2128017662" ], "abstract": [ "In this paper, we look at improving the KD-tree for a specific usage: indexing a large number of SIFT and other types of image descriptors. We have extended priority search to priority search among multiple trees. By creating multiple KD-trees from the same data set and simultaneously searching among these trees, we have improved the KD-tree's search performance significantly. We have also exploited the structure in SIFT descriptors (or structure in any data set) to reduce the time spent in backtracking. By using Principal Component Analysis to align the principal axes of the data with the coordinate axes, we have further increased the KD-tree's search performance.", "Given user data, one often wants to find approximate matches in a large database. A good example of such a task is finding images similar to a given image in a large collection of images. We focus on the important and technically difficult case where each data element is high dimensional, or more generally, is represented by a point in a large metric space and distance calculations are computationally expensive. In this paper we introduce a data structure to solve this problem called a GNAT (Geometric Near-neighbor Access Tree). It is based on the philosophy that the data structure should act as a hierarchical geometrical model of the data as opposed to a simple decomposition of the data that does not use its intrinsic geometry. In experiments, we find that GNATs outperform previous data structures in a number of applications. Keywords: near neighbor, metric space, approximate queries, data mining, Dirichlet domains, Voronoi regions", "", "For many computer vision problems, the most time consuming component consists of nearest neighbor matching in high-dimensional spaces. 
There are no known exact algorithms for solving these high-dimensional problems that are faster than linear search. Approximate algorithms are known to provide large speedups with only minor loss in accuracy, but many such algorithms have been published with only minimal guidance on selecting an algorithm and its parameters for any given problem. In this paper, we describe a system that answers the question, “What is the fastest approximate nearest-neighbor algorithm for my data?” Our system will take any given dataset and desired degree of precision and use these to automatically determine the best algorithm and parameter values. We also describe a new algorithm that applies priority search on hierarchical k-means trees, which we have found to provide the best known performance on many datasets. After testing a range of alternatives, we have found that multiple randomized k-d trees provide the best performance for other datasets. We are releasing public domain code that implements these approaches. This library provides about one order of magnitude improvement in query time over the best previously available software and provides fully automated parameter selection.", "A recognition scheme that scales efficiently to a large number of objects is presented. The efficiency and quality is exhibited in a live demonstration that recognizes CD-covers from a database of 40000 images of popular music CD’s. The scheme builds upon popular techniques of indexing descriptors extracted from local regions, and is robust to background clutter and occlusion. The local region descriptors are hierarchically quantized in a vocabulary tree. The vocabulary tree allows a larger and more discriminatory vocabulary to be used efficiently, which we show experimentally leads to a dramatic improvement in retrieval quality. The most significant property of the scheme is that the tree directly defines the quantization. 
The quantization and the indexing are therefore fully integrated, essentially being one and the same. The recognition quality is evaluated through retrieval on a database with ground truth, showing the power of the vocabulary tree approach, going as high as 1 million images." ] }
1609.07228
2526367648
Approximate nearest neighbor (ANN) search is a fundamental problem in many areas of data mining, machine learning and computer vision. The performance of traditional hierarchical structure (tree) based methods decreases as the dimensionality of data grows, while hashing based methods usually lack efficiency in practice. Recently, the graph based methods have drawn considerable attention. The main idea is that a neighbor of a neighbor is also likely to be a neighbor, which we refer as NN-expansion. These methods construct a @math -nearest neighbor ( @math NN) graph offline. And at online search stage, these methods find candidate neighbors of a query point in some way (e.g., random selection), and then check the neighbors of these candidate neighbors for closer ones iteratively. Despite some promising results, there are mainly two problems with these approaches: 1) These approaches tend to converge to local optima. 2) Constructing a @math NN graph is time consuming. We find that these two problems can be nicely solved when we provide a good initialization for NN-expansion. In this paper, we propose EFANNA, an extremely fast approximate nearest neighbor search algorithm based on @math NN Graph. EFANNA nicely combines the advantages of hierarchical structure based methods and nearest-neighbor-graph based methods. Extensive experiments have shown that EFANNA outperforms the state-of-the-art algorithms both on approximate nearest neighbor search and approximate nearest neighbor graph construction. To the best of our knowledge, EFANNA is the fastest algorithm so far both on approximate nearest neighbor graph construction and approximate nearest neighbor search. A library EFANNA based on this research is released on Github.
Both the hashing-based methods and the tree-based methods have the same goal: they expect to put neighbors into the same hashing bucket (or tree node). However, there is no theoretical guarantee of this. To increase the search recall (the number of true neighbors among the returned points divided by the number of required neighbors), one needs to check the "nearby" buckets or nodes. With high dimensional data, one polyhedron may have a large number of neighboring polyhedra (for example, a bucket with a 32-bit hashing code has 32 neighbor buckets at Hamming distance 1), which makes locating the true neighbors hard @cite_29 .
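The combinatorial blow-up described above is easy to quantify: the number of buckets within Hamming radius r_h of a c-bit code is the partial binomial sum the cited work analyzes. A small sketch (illustrative function name):

```python
from math import comb

def buckets_within_radius(c, rh):
    """Number of hash buckets whose c-bit code lies within Hamming
    distance rh of a given code: sum_{i=0}^{rh} C(c, i)."""
    return sum(comb(c, i) for i in range(rh + 1))

# With 32-bit codes, even tiny radii force many bucket probes:
for rh in range(4):
    print(rh, buckets_within_radius(32, rh))
# rh = 1 already means probing 33 buckets; rh = 3 means 5489.
```

This exponential growth in r_h is precisely why naive multi-probe hashing struggles, and what motivates the auxiliary-index idea of IEH discussed below.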
{ "cite_N": [ "@cite_29" ], "mid": [ "2033000863" ], "abstract": [ "Recently, the hashing techniques have been widely applied to approximate the nearest neighbor search problem in many real applications. The basic idea of these approaches is to generate binary codes for data points which can preserve the similarity between any two of them. Given a query, instead of performing a linear scan of the entire database, the hashing method can perform a linear scan of the points whose Hamming distance to the query is not greater than r_h, where r_h is a constant. However, in order to find the true nearest neighbors, both the locating time and the linear scan time are proportional to O(Σ_{i=0}^{r_h} C(c, i)) (c is the code length), which increase exponentially as r_h increases. To address this limitation, we propose a novel algorithm named iterative expanding hashing in this paper, which builds an auxiliary index based on an offline constructed nearest neighbor table to avoid a large r_h. This auxiliary index can be easily combined with all the traditional hashing methods. Extensive experimental results over various real large-scale datasets demonstrate the superiority of the proposed approach." ] }
1609.07228
2526367648
Approximate nearest neighbor (ANN) search is a fundamental problem in many areas of data mining, machine learning and computer vision. The performance of traditional hierarchical structure (tree) based methods decreases as the dimensionality of data grows, while hashing based methods usually lack efficiency in practice. Recently, the graph based methods have drawn considerable attention. The main idea is that a neighbor of a neighbor is also likely to be a neighbor, which we refer as NN-expansion. These methods construct a @math -nearest neighbor ( @math NN) graph offline. And at online search stage, these methods find candidate neighbors of a query point in some way (e.g., random selection), and then check the neighbors of these candidate neighbors for closer ones iteratively. Despite some promising results, there are mainly two problems with these approaches: 1) These approaches tend to converge to local optima. 2) Constructing a @math NN graph is time consuming. We find that these two problems can be nicely solved when we provide a good initialization for NN-expansion. In this paper, we propose EFANNA, an extremely fast approximate nearest neighbor search algorithm based on @math NN Graph. EFANNA nicely combines the advantages of hierarchical structure based methods and nearest-neighbor-graph based methods. Extensive experiments have shown that EFANNA outperforms the state-of-the-art algorithms both on approximate nearest neighbor search and approximate nearest neighbor graph construction. To the best of our knowledge, EFANNA is the fastest algorithm so far both on approximate nearest neighbor graph construction and approximate nearest neighbor search. A library EFANNA based on this research is released on Github.
Recently, graph-based techniques have drawn considerable attention @cite_10 @cite_26 @cite_29 . The main idea of these methods is that a neighbor of a neighbor is also likely to be a neighbor, which we refer to as NN-expansion. At the offline stage, they need to build a @math NN graph, which can be regarded as a big table recording the top @math closest neighbors of each point in the database. At the online stage, given a query point, they first assign the query some points as initial candidate neighbors, and then check the neighbors of these neighbors iteratively to locate closer ones. Graph Nearest neighbor Search (GNNS) @cite_26 generates the initial candidate neighbors randomly, while Iterative Expanding Hashing (IEH) @cite_29 uses hashing algorithms to generate them.
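The online NN-expansion step can be sketched in a few lines of pure Python: a GNNS-style hill climb on a prebuilt kNN graph, moving to whichever neighbor of the current candidate is closest to the query (toy 1-D data; names are illustrative):

```python
def nn_expansion(graph, points, q, start, iters=20):
    """GNNS-style search: repeatedly move to the closest neighbor of
    the current candidate until no neighbor is closer. Note it can
    stop at a *local* optimum, which is the first problem the EFANNA
    paper highlights."""
    def d(i):
        return sum((a - b) ** 2 for a, b in zip(points[i], q))
    best = start
    for _ in range(iters):
        cand = min(graph[best], key=d)   # check neighbors of the neighbor
        if d(cand) >= d(best):
            break                        # converged (possibly locally)
        best = cand
    return best

# Points 0..4 at positions 0, 1, 2, 3, 10 with a small 2-NN graph.
pts = [[0.0], [1.0], [2.0], [3.0], [10.0]]
graph = {0: [1, 2], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 2]}
print(nn_expansion(graph, pts, [2.9], start=0))  # -> 3
```

Starting from a random node (GNNS) versus a hashing-derived node (IEH) changes only how `start` is chosen; the iteration itself is the same.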
{ "cite_N": [ "@cite_29", "@cite_26", "@cite_10" ], "mid": [ "2033000863", "17346433", "" ], "abstract": [ "Recently, the hashing techniques have been widely applied to approximate the nearest neighbor search problem in many real applications. The basic idea of these approaches is to generate binary codes for data points which can preserve the similarity between any two of them. Given a query, instead of performing a linear scan of the entire database, the hashing method can perform a linear scan of the points whose Hamming distance to the query is not greater than r_h, where r_h is a constant. However, in order to find the true nearest neighbors, both the locating time and the linear scan time are proportional to O(Σ_{i=0}^{r_h} C(c, i)) (c is the code length), which increase exponentially as r_h increases. To address this limitation, we propose a novel algorithm named iterative expanding hashing in this paper, which builds an auxiliary index based on an offline constructed nearest neighbor table to avoid a large r_h. This auxiliary index can be easily combined with all the traditional hashing methods. Extensive experimental results over various real large-scale datasets demonstrate the superiority of the proposed approach.", "We introduce a new nearest neighbor search algorithm. The algorithm builds a nearest neighbor graph in an offline phase and when queried with a new point, performs hill-climbing starting from a randomly sampled node of the graph. We provide theoretical guarantees for the accuracy and the computational complexity and empirically show the effectiveness of this algorithm.", "" ] }
1609.07228
2526367648
Approximate nearest neighbor (ANN) search is a fundamental problem in many areas of data mining, machine learning and computer vision. The performance of traditional hierarchical structure (tree) based methods decreases as the dimensionality of data grows, while hashing based methods usually lack efficiency in practice. Recently, the graph based methods have drawn considerable attention. The main idea is that a neighbor of a neighbor is also likely to be a neighbor, which we refer as NN-expansion. These methods construct a @math -nearest neighbor ( @math NN) graph offline. And at online search stage, these methods find candidate neighbors of a query point in some way (e.g., random selection), and then check the neighbors of these candidate neighbors for closer ones iteratively. Despite some promising results, there are mainly two problems with these approaches: 1) These approaches tend to converge to local optima. 2) Constructing a @math NN graph is time consuming. We find that these two problems can be nicely solved when we provide a good initialization for NN-expansion. In this paper, we propose EFANNA, an extremely fast approximate nearest neighbor search algorithm based on @math NN Graph. EFANNA nicely combines the advantages of hierarchical structure based methods and nearest-neighbor-graph based methods. Extensive experiments have shown that EFANNA outperforms the state-of-the-art algorithms both on approximate nearest neighbor search and approximate nearest neighbor graph construction. To the best of our knowledge, EFANNA is the fastest algorithm so far both on approximate nearest neighbor graph construction and approximate nearest neighbor search. A library EFANNA based on this research is released on Github.
Instead of initializing the @math NN graph randomly, @cite_9 @cite_11 @cite_34 @cite_3 use divide-and-conquer methods. Their initialization contains two parts. First, they divide the whole data set into small subsets, multiple times. Second, they run brute-force search within each subset, producing many overlapping subgraphs. These subgraphs are merged together to serve as the initialization of the @math NN graph, which NN-expansion-like techniques can then refine. The division step of @cite_9 is based on spectral bisection, with two proposed versions, overlap and glue division. @cite_34 uses Anchor Graph Hashing @cite_16 to produce the division. @cite_11 uses recursive random division, dividing orthogonally to the principal direction of randomly sampled data in each subset. @cite_3 uses random projection trees to partition the data set.
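The two-part initialization can be sketched directly; here random division stands in for the spectral, hashing-based and tree-based divisions of the cited methods, so this is only a schematic (all names illustrative):

```python
import random

def init_knn_graph(points, k=2, divisions=3, subset_size=4):
    """Divide-and-conquer kNN-graph initialization: randomly partition
    the data several times, brute-force a kNN graph inside each small
    subset, and merge the overlapping subgraphs."""
    def d(i, j):
        return sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
    graph = {i: set() for i in range(len(points))}
    idxs = list(range(len(points)))
    for _ in range(divisions):
        random.shuffle(idxs)                  # one random division
        for s in range(0, len(idxs), subset_size):
            subset = idxs[s:s + subset_size]
            for i in subset:                  # brute force inside subset
                near = sorted((j for j in subset if j != i),
                              key=lambda j: d(i, j))
                graph[i].update(near[:k])     # merge overlapping subgraphs
    return {i: sorted(nbrs) for i, nbrs in graph.items()}

random.seed(1)
pts = [[float(i)] for i in range(8)]
g = init_knn_graph(pts)
print(all(len(nbrs) >= 2 for nbrs in g.values()))  # -> True
```

Each point ends up with a pool of candidate neighbors that a subsequent refinement pass (NN-expansion-like, or NN-descent) can improve far faster than starting from a random graph.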
{ "cite_N": [ "@cite_9", "@cite_34", "@cite_3", "@cite_16", "@cite_11" ], "mid": [ "2130502756", "206566442", "2951527381", "2251864938", "" ], "abstract": [ "Nearest neighbor graphs are widely used in data mining and machine learning. A brute-force method to compute the exact kNN graph takes Θ(dn2) time for n data points in the d dimensional Euclidean space. We propose two divide and conquer methods for computing an approximate kNN graph in Θ(dnt) time for high dimensional data (large d). The exponent t ∈ (1,2) is an increasing function of an internal parameter α which governs the size of the common region in the divide step. Experiments show that a high quality graph can usually be obtained with small overlaps, that is, for small values of t. A few of the practical details of the algorithms are as follows. First, the divide step uses an inexpensive Lanczos procedure to perform recursive spectral bisection. After each conquer step, an additional refinement step is performed to improve the accuracy of the graph. Finally, a hash table is used to avoid repeating distance calculations during the divide and conquer process. The combination of these techniques is shown to yield quite effective algorithms for building kNN graphs.", "The k nearest neighbors (kNN) graph, perhaps the most popular graph in machine learning, plays an essential role for graph-based learning methods. Despite its many elegant properties, the brute force kNN graph construction method has computational complexity of O(n2), which is prohibitive for large scale data sets. In this paper, based on the divide-and-conquer strategy, we propose an efficient algorithm for approximating kNN graphs, which has the time complexity of O(l(d+logn)n) only (d is the dimensionality and l is usually a small number). This is much faster than most existing fast methods. 
Specifically, we engage the locality sensitive hashing technique to divide items into small subsets with equal size, and then build one kNN graph on each subset using the brute force method. To enhance the approximation quality, we repeat this procedure for several times to generate multiple basic approximate graphs, and combine them to yield a high quality graph. Compared with existing methods, the proposed approach has features that are: (1) much more efficient in speed (2) applicable to generic similarity measures; (3) easy to parallelize. Finally, on three benchmark large-scale data sets, our method beats existing fast methods with obvious advantages.", "We study the problem of visualizing large-scale and high-dimensional data in a low-dimensional (typically 2D or 3D) space. Much success has been reported recently by techniques that first compute a similarity structure of the data points and then project them into a low-dimensional space with the structure preserved. These two steps suffer from considerable computational costs, preventing the state-of-the-art methods such as the t-SNE from scaling to large-scale and high-dimensional data (e.g., millions of data points and hundreds of dimensions). We propose the LargeVis, a technique that first constructs an accurately approximated K-nearest neighbor graph from the data and then layouts the graph in the low-dimensional space. Comparing to t-SNE, LargeVis significantly reduces the computational cost of the graph construction step and employs a principled probabilistic model for the visualization step, the objective of which can be effectively optimized through asynchronous stochastic gradient descent with a linear time complexity. The whole procedure thus easily scales to millions of high-dimensional data points. Experimental results on real-world data sets demonstrate that the LargeVis outperforms the state-of-the-art methods in both efficiency and effectiveness. 
The hyper-parameters of LargeVis are also much more stable over different data sets.", "Hashing is becoming increasingly popular for efficient nearest neighbor search in massive databases. However, learning short codes that yield good search performance is still a challenge. Moreover, in many cases real-world data lives on a low-dimensional manifold, which should be taken into account to capture meaningful nearest neighbors. In this paper, we propose a novel graph-based hashing method which automatically discovers the neighborhood structure inherent in the data to learn appropriate compact codes. To make such an approach computationally feasible, we utilize Anchor Graphs to obtain tractable low-rank adjacency matrices. Our formulation allows constant time hashing of a new data point by extrapolating graph Laplacian eigenvectors to eigenfunctions. Finally, we describe a hierarchical threshold learning procedure in which each eigenfunction yields multiple bits, leading to higher search accuracy. Experimental comparison with the other state-of-the-art methods on two large datasets demonstrates the efficacy of the proposed method.", "" ] }
1609.07228
2526367648
Approximate nearest neighbor (ANN) search is a fundamental problem in many areas of data mining, machine learning and computer vision. The performance of traditional hierarchical structure (tree) based methods decreases as the dimensionality of data grows, while hashing based methods usually lack efficiency in practice. Recently, the graph based methods have drawn considerable attention. The main idea is that , which we refer as . These methods construct a @math -nearest neighbor ( @math NN) graph offline. And at online search stage, these methods find candidate neighbors of a query point in some way ( , random selection), and then check the neighbors of these candidate neighbors for closer ones iteratively. Despite some promising results, there are mainly two problems with these approaches: 1) These approaches tend to converge to local optima. 2) Constructing a @math NN graph is time consuming. We find that these two problems can be nicely solved when we provide a good initialization for NN-expansion. In this paper, we propose EFANNA, an extremely fast approximate nearest neighbor search algorithm based on @math NN Graph. Efanna nicely combines the advantages of hierarchical structure based methods and nearest-neighbor-graph based methods. Extensive experiments have shown that EFANNA outperforms the state-of-art algorithms both on approximate nearest neighbor search and approximate nearest neighbor graph construction. To the best of our knowledge, EFANNA is the fastest algorithm so far both on approximate nearest neighbor graph construction and approximate nearest neighbor search. A library EFANNA based on this research is released on Github.
@cite_34 and @cite_3 claim to outperform NN-descent @cite_25 significantly. However, based on their reported results and our analysis, there seems to be a misunderstanding of NN-descent @cite_25 . Actually, NN-descent is quite different from NN-expansion: the method compared against in @cite_34 and @cite_3 should be NN-expansion instead of NN-descent. Please see Section for details.
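The distinction is easiest to see in code. NN-descent refines the whole graph offline via a "local join" (two neighbors of the same point are likely neighbors of each other), whereas NN-expansion answers a single online query; a hedged one-round sketch (toy data, illustrative names):

```python
from itertools import combinations

def nn_descent_round(graph, points, k):
    """One NN-descent 'local join': for every point, compare its
    neighbors against each other and keep the k closest candidates.
    This refines the *graph*, unlike NN-expansion's per-query search."""
    def d(i, j):
        return sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
    cand = {i: set(nbrs) for i, nbrs in graph.items()}
    for i, nbrs in graph.items():
        for a, b in combinations(nbrs, 2):
            cand[a].add(b)          # neighbors of i become candidate
            cand[b].add(a)          # neighbors of each other
    return {i: sorted(c, key=lambda j: d(i, j))[:k]
            for i, c in cand.items()}

# 1-D toy: node 1 knows both 0 and 2, so the join links 0 and 2.
pts = [[0.0], [1.0], [2.0], [9.0]]
graph = {0: [3], 1: [0, 2], 2: [3], 3: [2]}
refined = nn_descent_round(graph, pts, k=1)
print(refined[0], refined[2])  # -> [2] [0]
```

A fair comparison against NN-descent would iterate this local join to convergence, which is a different baseline from the query-time hill climbing of NN-expansion.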
{ "cite_N": [ "@cite_34", "@cite_25", "@cite_3" ], "mid": [ "206566442", "", "2951527381" ], "abstract": [ "The k nearest neighbors (kNN) graph, perhaps the most popular graph in machine learning, plays an essential role for graph-based learning methods. Despite its many elegant properties, the brute force kNN graph construction method has computational complexity of O(n2), which is prohibitive for large scale data sets. In this paper, based on the divide-and-conquer strategy, we propose an efficient algorithm for approximating kNN graphs, which has the time complexity of O(l(d+logn)n) only (d is the dimensionality and l is usually a small number). This is much faster than most existing fast methods. Specifically, we engage the locality sensitive hashing technique to divide items into small subsets with equal size, and then build one kNN graph on each subset using the brute force method. To enhance the approximation quality, we repeat this procedure for several times to generate multiple basic approximate graphs, and combine them to yield a high quality graph. Compared with existing methods, the proposed approach has features that are: (1) much more efficient in speed (2) applicable to generic similarity measures; (3) easy to parallelize. Finally, on three benchmark large-scale data sets, our method beats existing fast methods with obvious advantages.", "", "We study the problem of visualizing large-scale and high-dimensional data in a low-dimensional (typically 2D or 3D) space. Much success has been reported recently by techniques that first compute a similarity structure of the data points and then project them into a low-dimensional space with the structure preserved. These two steps suffer from considerable computational costs, preventing the state-of-the-art methods such as the t-SNE from scaling to large-scale and high-dimensional data (e.g., millions of data points and hundreds of dimensions). 
We propose the LargeVis, a technique that first constructs an accurately approximated K-nearest neighbor graph from the data and then layouts the graph in the low-dimensional space. Comparing to t-SNE, LargeVis significantly reduces the computational cost of the graph construction step and employs a principled probabilistic model for the visualization step, the objective of which can be effectively optimized through asynchronous stochastic gradient descent with a linear time complexity. The whole procedure thus easily scales to millions of high-dimensional data points. Experimental results on real-world data sets demonstrate that the LargeVis outperforms the state-of-the-art methods in both efficiency and effectiveness. The hyper-parameters of LargeVis are also much more stable over different data sets." ] }
1609.07302
2524774860
Social Engineering (SE) is one of the most dangerous aspect an attacker can use against a given entity (private citizen, industry, government, ...). In order to perform SE attacks, it is necessary to collect as much information as possible about the target (or victim(s)). The aim of this paper is to report the details of an activity which took to the development of an automatic tool that extracts, categorizes and summarizes the target interests, thus possible weaknesses with respect to specific topics. Data is collected from the user's activity on social networks, parsed and analyzed using text mining techniques. The main contribution of the proposed tool consists in delivering some reports that allow the citizen, institutions as well as private bodies the screening of their exposure to SE attacks, with a strong awareness potential that will be reflected in a decrease of the risks and a good opportunity to save money.
Social Engineering can be described as the art of influencing people to obtain sensitive information: a Social Engineer manipulates the victim and convinces her to divulge confidential information. These techniques rest on cognitive biases, which can be described as specific attributes of the human decision-making process. Following the categorization proposed in @cite_4 , Social Engineering approaches can be divided as:
{ "cite_N": [ "@cite_4" ], "mid": [ "2153245338" ], "abstract": [ "Social engineering has emerged as a serious threat in virtual communities and is an effective means to attack information systems. The services used by today's knowledge workers prepare the ground for sophisticated social engineering attacks. The growing trend towards BYOD (bring your own device) policies and the use of online communication and collaboration tools in private and business environments aggravate the problem. In globally acting companies, teams are no longer geographically co-located, but staffed just-in-time. The decrease in personal interaction combined with a plethora of tools used for communication (e-mail, IM, Skype, Dropbox, LinkedIn, Lync, etc.) create new attack vectors for social engineering attacks. Recent attacks on companies such as the New York Times and RSA have shown that targeted spear-phishing attacks are an effective, evolutionary step of social engineering attacks. Combined with zero-day-exploits, they become a dangerous weapon that is often used by advanced persistent threats. This paper provides a taxonomy of well-known social engineering attacks as well as a comprehensive overview of advanced social engineering attacks on the knowledge worker." ] }
1609.07034
2951387829
Abstractive summarization is an ideal form of summarization since it can synthesize information from multiple documents to create concise informative summaries. In this work, we aim at developing an abstractive summarizer. First, our proposed approach identifies the most important document in the multi-document set. The sentences in the most important document are aligned to sentences in other documents to generate clusters of similar sentences. Second, we generate K-shortest paths from the sentences in each cluster using a word-graph structure. Finally, we select sentences from the set of shortest paths generated from all the clusters employing a novel integer linear programming (ILP) model with the objective of maximizing information content and readability of the final summary. Our ILP model represents the shortest paths as binary variables and considers the length of the path, information score and linguistic quality score in the objective function. Experimental results on the DUC 2004 and 2005 multi-document summarization datasets show that our proposed approach outperforms all the baselines and state-of-the-art extractive summarizers as measured by the ROUGE scores. Our method also outperforms a recent abstractive summarization technique. In manual evaluation, our approach also achieves promising results on informativeness and readability.
More recently, Mehdad et al. proposed a supervised approach for meeting summarization, in which they generate an entailment graph of sentences. The nodes in the graph are the linked sentences and the edges are the entailment relations between nodes; such relations help to identify non-redundant and informative sentences. Their fusion approach used MSC @cite_0 , which generates an informative sentence by combining several sentences in a word-graph structure. However, Filippova's method produces output of low linguistic quality, as the ranking of generated sentences is based on edge weights computed only from word collocations. By contrast, our method selects sentences by jointly maximizing informativeness and readability, and generates informative, well-formed and readable summaries.
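A heavily simplified sketch of the word-graph fusion idea makes the quality problem visible: the real MSC additionally uses POS tags, edge weights from collocation frequencies and K-shortest paths, so this unweighted BFS version (all names illustrative) only shows the mechanics:

```python
from collections import defaultdict, deque

def word_graph(sentences):
    """One node per surface word plus shared <s>/</s> nodes; each
    sentence contributes edges between adjacent words."""
    g = defaultdict(set)
    for s in sentences:
        words = ["<s>"] + s.split() + ["</s>"]
        for a, b in zip(words, words[1:]):
            g[a].add(b)
    return g

def shortest_fusion(g):
    """BFS = shortest <s>-to-</s> path, i.e. the most compressed fusion."""
    queue, seen = deque([["<s>"]]), {"<s>"}
    while queue:
        path = queue.popleft()
        if path[-1] == "</s>":
            return " ".join(path[1:-1])
        for nxt in sorted(g[path[-1]]):   # sorted only for determinism
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])

sents = ["the cat sat on the mat", "the cat lay on the mat"]
print(shortest_fusion(word_graph(sents)))  # -> "the mat"
```

The degenerate output "the mat" (the shared "the" node lets the path skip the verb entirely) illustrates exactly why ranking paths by length or collocation weights alone yields low linguistic quality, and why reranking candidates with an ILP over informativeness and readability helps.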
{ "cite_N": [ "@cite_0" ], "mid": [ "2160017075" ], "abstract": [ "We consider the task of summarizing a cluster of related sentences with a short sentence which we call multi-sentence compression and present a simple approach based on shortest paths in word graphs. The advantage and the novelty of the proposed method is that it is syntaxlean and requires little more than a tokenizer and a tagger. Despite its simplicity, it is capable of generating grammatical and informative summaries as our experiments with English and Spanish data demonstrate." ] }
1609.07060
2524699314
When recovering an unknown signal from noisy measurements, the computational difficulty of performing optimal Bayesian MMSE (minimum mean squared error) inference often necessitates the use of maximum a posteriori (MAP) inference, a special case of regularized M-estimation, as a surrogate. However, MAP is suboptimal in high dimensions, when the number of unknown signal components is similar to the number of measurements. In this work we demonstrate, when the signal distribution and the likelihood function associated with the noise are both log-concave, that optimal MMSE performance is asymptotically achievable via another M-estimation procedure. This procedure involves minimizing convex loss and regularizer functions that are nonlinearly smoothed versions of the widely applied MAP optimization problem. Our findings provide a new heuristic derivation and interpretation for recent optimal M-estimators found in the setting of linear measurements and additive noise, and further extend these results to nonlinear measurements with non-additive noise. We numerically demonstrate superior performance of our optimal M-estimators relative to MAP. Overall, at the heart of our work is the revelation of a remarkable equivalence between two seemingly very different computational problems: namely that of high dimensional Bayesian integration underlying MMSE inference, and high dimensional convex optimization underlying M-estimation. In essence we show that the former difficult integral may be computed by solving the latter, simpler optimization problem.
Seminal work @cite_19 found the optimal unregularized M-estimator using variational methods in the special case of linear measurements and additive noise, i.e. @math in . In this same setting, @cite_9 characterized unregularized M-estimator performance via approximate message passing (AMP) @cite_14 . Following this, the performance of regularized M-estimators in the linear additive setting was characterized in @cite_17 , using non-rigorous statistical physics methods based on replica theory, and in @cite_1 , using rigorous methods different from @cite_19 @cite_9 . Moreover, @cite_17 found the optimal regularized M-estimator and demonstrated, surprisingly, zero performance gap relative to MMSE. The goals of this paper are to (1) interpret and extend previous work by deriving an equivalence between optimal M-estimation and Bayesian MMSE inference via AMP and (2) to derive the optimal M-estimator in the more general setting of nonlinear measurements and non-additive noise.
{ "cite_N": [ "@cite_14", "@cite_9", "@cite_1", "@cite_19", "@cite_17" ], "mid": [ "2175784154", "2108394050", "2340785287", "2132235473", "2339846782" ], "abstract": [ "‘Approximate message passing’ algorithms proved to be extremely effective in reconstructing sparse signals from a small number of incoherent linear measurements. Extensive numerical experiments further showed that their dynamics is accurately tracked by a simple one-dimensional iteration termed state evolution. In this paper we provide the first rigorous foundation to state evolution. We prove that indeed it holds asymptotically in the large system limit for sensing matrices with iid gaussian entries. While our focus is on message passing algorithms for compressed sensing, the analysis extends beyond this setting, to a general class of algorithms on dense graphs. In this context, state evolution plays the role that density evolution has for sparse graphs.", "In a recent article, El (Proc Natl Acad Sci 110(36):14557–14562, 2013) study the distribution of robust regression estimators in the regime in which the number of parameters p is of the same order as the number of samples n. Using numerical simulations and ‘highly plausible’ heuristic arguments, they unveil a striking new phenomenon. Namely, the regression coefficients contain an extra Gaussian noise component that is not explained by classical concepts such as the Fisher information matrix. We show here that this phenomenon can be characterized rigorously using techniques that were developed by the authors for analyzing the Lasso estimator under high-dimensional asymptotics. We introduce an approximate message passing (AMP) algorithm to compute M-estimators and deploy state evolution to evaluate the operating characteristics of AMP and so also M-estimates. 
Our analysis clarifies that the ‘extra Gaussian noise’ encountered in this problem is fundamentally similar to phenomena already studied for regularized least squares in the setting (n < p).", "A general approach for estimating an unknown signal x0 ∈ ℝn from noisy, linear measurements y = Ax0 + z ∈ ℝm is via solving a so called regularized M-estimator: x := arg min_x ℒ(y−Ax)+λf(x). Here, ℒ is a convex loss function, f is a convex (typically, non-smooth) regularizer, and, λ > 0 a regularizer parameter. We analyze the squared error performance ∥x − x0∥_2^2 of such estimators in the high-dimensional proportional regime where m, n → ∞ and m/n → δ. We let the design matrix A have entries iid Gaussian, and, impose minimal and rather mild regularity conditions on the loss function, on the regularizer, and, on the distributions of the noise and of the unknown signal. Under such a generic setting, we show that the squared error converges in probability to a nontrivial limit that is computed by solving four nonlinear equations on four scalar unknowns. We identify a new summary parameter, termed the expected Moreau envelope, which determines how the choice of the loss function and of the regularizer affects the error performance. The result opens the way for answering optimality questions regarding the choice of the loss function, the regularizer, the penalty parameter, etc.", "We consider, in the modern setting of high-dimensional statistics, the classic problem of optimizing the objective function in regression using M-estimates when the error distribution is assumed to be known. We propose an algorithm to compute this optimal objective function that takes into account the dimensionality of the problem. 
Although optimality is achieved under assumptions on the design matrix that will not always be satisfied, our analysis reveals generally interesting families of dimension-dependent objective functions.", "To model modern large-scale datasets, we need efficient algorithms to infer a set of @math unknown model parameters from @math noisy measurements. What are fundamental limits on the accuracy of parameter inference, given finite signal-to-noise ratios, limited measurements, prior information, and computational tractability requirements? How can we combine prior information with measurements to achieve these limits? Classical statistics gives incisive answers to these questions as the measurement density @math . However, these classical results are not relevant to modern high-dimensional inference problems, which instead occur at finite @math . We formulate and analyze high-dimensional inference as a problem in the statistical physics of quenched disorder. Our analysis uncovers fundamental limits on the accuracy of inference in high dimensions, and reveals that widely cherished inference algorithms like maximum likelihood (ML) and maximum-a posteriori (MAP) inference cannot achieve these limits. We further find optimal, computationally tractable algorithms that can achieve these limits. Intriguingly, in high dimensions, these optimal algorithms become computationally simpler than MAP and ML, while still outperforming them. For example, such optimal algorithms can lead to as much as a 20 reduction in the amount of data to achieve the same performance relative to MAP. 
Moreover, our analysis reveals simple relations between optimal high dimensional inference and low dimensional scalar Bayesian inference, insights into the nature of generalization and predictive power in high dimensions, information theoretic limits on compressed sensing, phase transitions in quadratic inference, and connections to central mathematical objects in convex optimization theory and random matrix theory." ] }
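The MAP-versus-MMSE distinction at the heart of this record can be illustrated with a scalar sketch. With a Gaussian prior the two estimators coincide (posterior mean equals posterior mode); the paper's point is that for general log-concave priors in high dimensions they differ, and MMSE-optimal performance can be recovered by a *smoothed* M-estimation problem. The grid quadrature and ridge-shrinkage closed form below are standard ingredients, not the paper's method.

```python
import numpy as np

def mmse_estimate(y, s2, grid=np.linspace(-10, 10, 20001)):
    """Posterior-mean (MMSE) estimate of x from y = x + z,
    with x ~ N(0, 1) and z ~ N(0, s2), by brute-force quadrature."""
    log_post = -0.5 * (y - grid) ** 2 / s2 - 0.5 * grid ** 2
    w = np.exp(log_post - log_post.max())   # unnormalised posterior
    return float((grid * w).sum() / w.sum())

def map_estimate(y, s2):
    """Posterior mode (MAP): closed-form ridge shrinkage for this
    Gaussian prior, y / (1 + s2)."""
    return y / (1.0 + s2)
```

Swapping in, e.g., a Laplace prior would turn the MAP estimate into a soft threshold while the quadrature MMSE stays smooth, which is the gap the record's smoothed M-estimators are designed to close.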
1609.07102
2525039815
Annotating semantic data with metadata is becoming more and more important to provide information about the statements being asserted. While initial solutions proposed a data model to represent a specific dimension of meta-information (such as time or provenance), a general annotation framework which allows representing different context dimensions is needed. In this paper, we extend the 4dFluents ontology by Welty and Fikes---on associating temporal validity to statements---to any dimension of context, and discuss possible issues that multidimensional context representations have to face and how we address them.
In a later work @cite_4 , Zamborlini et al. focus on solving the issues of the prior approaches for representing events and properties of individuals. They maintain the fluent-like representation for events, but move to an N-ary representation (see below) for properties. However, they still do not address the possibility of having more than one domain relation, nor how inheritance is performed in OWL.
{ "cite_N": [ "@cite_4" ], "mid": [ "2081752276" ], "abstract": [ "An important challenge in the Knowledge Representation area is on representing and reasoning over temporally changing information. Particularly, a number of authors have been investigating approaches to extend the expressivity beyond what is currently supported by the DL (Description Logics) based languages in order to address this issue, while maintaining compatibility with subclasses of DLs adopted in the Semantic Web. This is mainly due to the increasing popularity of the Semantic Web initiative as well as the role played by DL in that context. In this paper we defend the need of a higher-level foundational framework based on results coming from the discipline of Formal Ontology. We present two complementary proposals for modeling temporally changing information in OWL, based on the most discussed strategy in the literature to address this problem, namely, the use of a perdurantist (or 4D) view of domain entities. Moreover we compare the results with some related work and discuss its limitations and further improvements." ] }
1609.07288
2951072332
For a set @math of @math people and a set @math of @math items, with each person having a preference list that ranks some items in order of preference, we consider the problem of matching every person with a unique item. A matching @math is popular if for any other matching @math , the number of people who prefer @math to @math is not less than the number of those who prefer @math to @math . For given @math and @math , consider the probability of existence of a popular matching when each person's preference list is independently and uniformly generated at random. Previously, Mahdian showed that when people's preference lists are strict (containing no ties) and complete (containing all items in @math ), if @math , where @math is the root of equation @math , then a popular matching exists with probability @math ; and if @math , where @math is the root of equation @math , then a popular matching exists with probability @math ; and if @math , then a popular matching exists with probability @math .
While a popular matching does not always exist, McCutchen @cite_10 introduced two measures of the unpopularity of a matching, the unpopularity factor and the unpopularity margin, and showed that the problem of finding a matching that minimizes either measure is NP-hard. @cite_12 later gave algorithms to find a matching with bounded values of these measures in certain instances. @cite_7 introduced the concept of a mixed matching, which is a probability distribution over matchings, and proved that a popular mixed matching always exists.
{ "cite_N": [ "@cite_10", "@cite_7", "@cite_12" ], "mid": [ "1525480452", "1496931966", "1529648317" ], "abstract": [ "We consider the problem of choosing the best matching of people to positions based on preferences expressed by the people, for which many different optimality criteria have been proposed. A matching is popular if no other matching beats it in a majority vote of the people. The popularity criterion has a manipulation-resistance property, but unfortunately, some sets of preferences admit no popular matching. In this paper, we introduce the least-unpopularity-factor and least-unpopularity-margin criteria, two generalizations of popularity that preserve the manipulation-resistance property but give an optimal matching for every set of preferences. Under each of these generalizations, we show that the \"badness\" of a given matching can be calculated efficiently but it is NP-hard to find an optimal matching.", "", "We investigate the following problem: given a set of jobs and a set of people with preferences over the jobs, what is the optimal way of matching people to jobs? Here we consider the notion of popularity. A matching M is popular if there is no matching M' such that more people prefer M' to M than the other way around. Determining whether a given instance admits a popular matching and, if so, finding one, was studied in [2]. If there is no popular matching, a reasonable substitute is a matching whose unpopularity is bounded. We consider two measures of unpopularity - unpopularity factor denoted by u(M) and unpopularity margin denoted by g(M). McCutchen recently showed that computing a matching M with the minimum value of u(M) or g(M) is NP-hard, and that if G does not admit a popular matching, then we have u(M) ≥ 2 for all matchings M in G. 
Here we show that a matching M that achieves u(M) = 2 can be computed in @math time (where m is the number of edges in G and n is the number of nodes) provided a certain graph H admits a matching that matches all people. We also describe a sequence of graphs: H = H_2, H_3, ..., H_k such that if H_k admits a matching that matches all people, then we can compute in @math time a matching M such that u(M) ≤ k - 1 and @math . Simulation results suggest that our algorithm finds a matching with low unpopularity." ] }
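The popularity comparison underlying this record reduces to vote counting, which is easy to sketch. The dict-based encoding of preferences and matchings below is an assumed toy representation; the test instance reproduces the classic Condorcet-style cycle showing that a popular matching need not exist.

```python
def votes(pref, M1, M2):
    """Number of people who strictly prefer their item in M1 to their
    item in M2. pref[p] is p's ranked list of items; being unmatched
    (or matched to an unacceptable item) counts as worst."""
    def rank(p, item):
        return pref[p].index(item) if item in pref[p] else len(pref[p])
    return sum(rank(p, M1.get(p)) < rank(p, M2.get(p)) for p in pref)

def is_more_popular(pref, M1, M2):
    """M1 beats M2 in a head-to-head majority vote. The difference
    votes(M2, M1) - votes(M1, M2) is M1's margin against this rival;
    maximising it over all rivals gives the unpopularity margin."""
    return votes(pref, M1, M2) > votes(pref, M2, M1)
```

With three people who all rank the items A > B > C, the three perfect matchings beat each other cyclically, so no matching is popular.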
1609.07220
2950025489
A substring Q of a string S is called a shortest unique substring (SUS) for interval [s,t] in S, if Q occurs exactly once in S, this occurrence of Q contains interval [s,t], and every substring of S which contains interval [s,t] and is shorter than Q occurs at least twice in S. The SUS problem is, given a string S, to preprocess S so that for any subsequent query interval [s,t] all the SUSs for interval [s,t] can be answered quickly. When s = t, we call the SUSs for [s,t] point SUSs, and when s < t, we call the SUSs for [s,t] interval SUSs. There exists an optimal O(n)-time preprocessing scheme which answers queries in optimal O(k) time for both point and interval SUSs, where n is the length of S and k is the number of outputs for a given query. In this paper, we reveal structural, combinatorial properties underlying the SUS problem: Namely, we show that the number of intervals in S that correspond to point SUSs for all query positions in S is less than 1.5n, and show that this is a matching upper and lower bound. Also, we consider the maximum number of intervals in S that correspond to interval SUSs for all query intervals in S.
Xu @cite_3 introduced the longest repeat (LR) problem. An interval @math of a string @math is said to be an LR for interval @math if (a) the substring @math occurs at least twice in @math , (b) the occurrence @math of @math contains @math and (c) there does not exist an interval @math of @math such that @math , the substring @math occurs at least twice in @math , and the interval @math contains interval @math . The point and interval LR problems are defined analogously to the point and interval SUS problems, respectively. Xu @cite_3 presented an optimal algorithm which, after @math -time preprocessing, returns all LRs for a given interval in @math time, where @math is the number of output LRs. He claimed that although the point/interval SUS problems and the point/interval LR problems look alike, these problems are actually quite different, supporting this with an example where the SUS and LR for the same query point seem rather unrelated.
{ "cite_N": [ "@cite_3" ], "mid": [ "2950223108" ], "abstract": [ "A longest repeat query on a string, motivated by its applications in many subfields including computational biology, asks for the longest repetitive substring(s) covering a particular string position (point query). In this paper, we extend the longest repeat query from point query to , allowing the search for longest repeat(s) covering any position interval, and thus significantly improve the usability of the solution. Our method for interval query takes a different approach using the insight from a recent work on [1], as the prior work's approach for point query becomes infeasible in the setting of interval query. Using the critical insight from [1], we propose an indexing structure, which can be constructed in the optimal @math time and space for a string of size @math , such that any future interval query can be answered in @math time. Further, our solution can find longest repeats covering any given interval using optimal @math time, where @math is the number of longest repeats covering that given interval, whereas the prior @math -time and space work can find only one candidate for each point query. Experiments with real-world biological data show that our proposal is competitive with prior works, both time and space wise, while providing with the new functionality of interval queries as opposed to point queries provided by prior works." ] }
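A brute-force point-SUS query makes the definition above concrete. This is an O(n^3)-ish illustration, not the optimal O(n)-preprocessing scheme the record discusses; note that occurrences must be counted with overlaps, since Python's `str.count` skips overlapping matches and would break the uniqueness test.

```python
def occurrences(S, sub):
    """Count occurrences of sub in S, overlaps included
    (e.g. "aa" occurs twice in "aaa")."""
    return sum(S.startswith(sub, k) for k in range(len(S) - len(sub) + 1))

def point_sus(S, p):
    """All shortest unique substrings of S covering position p (0-based),
    by exhaustive search over covering intervals [i, j)."""
    n, best, out = len(S), len(S) + 1, []
    for i in range(p + 1):
        for j in range(p + 1, n + 1):        # S[i:j] covers position p
            if occurrences(S, S[i:j]) == 1:  # unique occurrence in S
                if j - i < best:
                    best, out = j - i, [(i, j)]
                elif j - i == best:
                    out.append((i, j))
                break                        # growing j only lengthens
    return [S[i:j] for i, j in out]
```

For S = "abaaba" and query position 2, the only SUS is "aa": every shorter covering substring ("a") repeats.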
1609.07152
2525954470
This paper presents the input convex neural network architecture. These are scalar-valued (potentially deep) neural networks with constraints on the network parameters such that the output of the network is a convex function of (some of) the inputs. The networks allow for efficient inference via optimization over some inputs to the network given others, and can be applied to settings including structured prediction, data imputation, reinforcement learning, and others. In this paper we lay the basic groundwork for these models, proposing methods for inference, optimization and learning, and analyze their representational power. We show that many existing neural network architectures can be made input-convex with only minor modification, and develop specialized optimization algorithms tailored to this setting. Finally, we highlight the performance of the methods on multi-label prediction, image completion, and reinforcement learning problems, where we show improvement over the existing state of the art in many cases.
The interplay between inference, optimization, and structured prediction has a long history in neural networks. Several early incarnations of neural networks were explicitly trained to produce structured sequences (e.g. @cite_0 ), and there was an early appreciation that structured models like hidden Markov models could be combined with the outputs of neural networks. Much of this earlier work is surveyed and synthesized by @cite_1 , who give a tutorial on these energy-based learning methods. In recent years, there has been a strong push to further incorporate structured prediction methods like conditional random fields as the "last layer" of a deep network architecture. Several methods have been proposed to build general neural networks over joint input and output spaces and to perform inference over outputs using generic optimization techniques, such as Generative Adversarial Networks (GANs) and Structured Prediction Energy Networks (SPENs). SPENs provide a deep structure over input and output spaces and cast inference as a non-convex optimization problem.
{ "cite_N": [ "@cite_0", "@cite_1" ], "mid": [ "2115046692", "2161914416" ], "abstract": [ "The backpropagation algorithm can be used for both recognition and generation of time trajectories. When used as a recognizer, it has been shown that the performance of a network can be greatly improved by adding structure to the architecture. The same is true in trajectory generation. In particular a new architecture corresponding to a \"reversed\" TDNN is proposed. Results show dramatic improvement of performance in the generation of hand-written characters. A combination of TDNN and reversed TDNN for compact encoding is also suggested.", "Energy-Based Models (EBMs) capture dependencies between variables by associating a scalar energy to each configuration of the variab les. Inference consists in clamping the value of observed variables and finding config urations of the remaining variables that minimize the energy. Learning consists in finding an energy function in which observed configurations of the variables a re given lower energies than unobserved ones. The EBM approach provides a common theoretical framework for many learning models, including traditional discr iminative and generative approaches, as well as graph-transformer networks, co nditional random fields, maximum margin Markov networks, and several manifold learning methods. Probabilistic models must be properly normalized, which sometimes requires evaluating intractable integrals over the space of all poss ible variable configurations. Since EBMs have no requirement for proper normalization, this problem is naturally circumvented. EBMs can be viewed as a form of non-probabilistic factor graphs, and they provide considerably more flexibility in th e design of architectures and training criteria than probabilistic approaches ." ] }
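The ICNN convexity constraint described in this record is simple to sketch: with non-negative hidden-to-output weights and a convex, non-decreasing activation, the scalar output is convex in the input. The single hidden layer and the layer sizes below are arbitrary choices for illustration, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# One hidden layer, scalar output. Convexity in x holds because
# (i) ReLU is convex and non-decreasing, (ii) Wz is non-negative,
# so Wz @ relu(affine) is convex, and (iii) Wx @ x + b1 is affine.
W0 = rng.normal(size=(8, 3)); b0 = rng.normal(size=8)
Wz = np.abs(rng.normal(size=(1, 8)))        # enforced non-negativity
Wx = rng.normal(size=(1, 3)); b1 = rng.normal(size=1)

def f(x):
    z1 = np.maximum(W0 @ x + b0, 0.0)       # convex, elementwise ReLU
    return (Wz @ z1 + Wx @ x + b1).item()   # non-neg. combo + affine
```

Convexity can be checked numerically: for any x, y and lam in [0, 1], f(lam*x + (1-lam)*y) <= lam*f(x) + (1-lam)*f(y) holds up to float error, which is what makes inference over the input an (efficient) convex problem.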
1609.07190
2527400238
Third-party services form an integral part of the mobile ecosystem: they allow app developers to add features such as performance analytics and social network integration, and to monetize their apps by enabling user tracking and targeted ad delivery. At present users, researchers, and regulators all have at best limited understanding of this third-party ecosystem. In this paper we seek to shrink this gap. Using data from users of our ICSI Haystack app we gain a rich view of the mobile ecosystem: we identify and characterize domains associated with mobile advertising and user tracking, thereby taking an important step towards greater transparency. We furthermore outline our steps towards a public catalog and census of analytics services, their behavior, their personal data collection processes, and their use across mobile apps.
Static and dynamic analysis of apps have also had limited success in identifying the prevalence of advertising and tracking services. The work by Chen et al. @cite_30 used dynamic analysis of Android apps to uncover pervasive leakages of sensitive data and to measure the penetration of libraries for advertising and analytics across apps. Other studies instead leveraged static analysis of app source code to identify 190 embedded tracking libraries @cite_22 .
{ "cite_N": [ "@cite_30", "@cite_22" ], "mid": [ "2018029308", "2613601501" ], "abstract": [ "In this paper we investigate the risk of privacy leakage through mobile analytics services and demonstrate the ease with which an external adversary can extract individual's profile and mobile applications usage information, through two major mobile analytics services, i.e. Google Mobile App Analytics and Flurry. We also demonstrate that it is possible to exploit the vulnerability of analytics services, to influence the ads served to users' devices, by manipulating the profiles constructed by these services. Both attacks can be performed without the necessity of having an attacker controlled app on user's mobile device. Finally, we discuss potential countermeasures (from the perspectives of different parties) that may be utilized to mitigate the risk of individual's personal information leakage.", "In this paper, we highlight a potential privacy threat in the current smartphone platforms, which allows any third party to collect a snapshot of installed applications without the user's consent. This can be exploited by third parties to infer various user attributes similar to what is done through tracking. We show that using only installed apps, user's gender, a demographic attribute that is frequently used in targeted advertising, can be instantly predicted with an accuracy around 70 , by training a classifier using established supervised learning techniques." ] }
1609.07190
2527400238
Third-party services form an integral part of the mobile ecosystem: they allow app developers to add features such as performance analytics and social network integration, and to monetize their apps by enabling user tracking and targeted ad delivery. At present users, researchers, and regulators all have at best limited understanding of this third-party ecosystem. In this paper we seek to shrink this gap. Using data from users of our ICSI Haystack app we gain a rich view of the mobile ecosystem: we identify and characterize domains associated with mobile advertising and user tracking, thereby taking an important step towards greater transparency. We furthermore outline our steps towards a public catalog and census of analytics services, their behavior, their personal data collection processes, and their use across mobile apps.
Techniques relying on static and dynamic analysis fall short in terms of scalability and app coverage @cite_19 --- they rely on Google Play crawlers to obtain the executable and cannot access pre-installed services. In fact, they may generate false positives, as the presence of a library in an app's source code does not necessarily imply that it actually gets invoked at runtime.
{ "cite_N": [ "@cite_19" ], "mid": [ "2099464953" ], "abstract": [ "Mobile networks are the most popular, fastest growing and least understood systems in today’s Internet ecosystem. Despite a large collection of privacy, policy and performance issues in mobile networks, users and researchers are faced with few options to characterize and address them. In this poster we present Meddle, a framework aimed at enhancing transparency in mobile networks and providing a platform that enables users (and researchers) control mobile traffic." ] }
1609.07042
2952828322
In this paper, we deal with two challenges for measuring the similarity of the subject identities in practical video-based face recognition - the variation of the head pose in uncontrolled environments and the computational expense of processing videos. Since the frame-wise feature mean is unable to characterize the pose diversity among frames, we define and preserve the overall pose diversity and closeness in a video. Then, identity will be the only source of variation across videos since the pose varies even within a single video. Instead of simply using all the frames, we select those faces whose pose point is closest to the centroid of the K-means cluster containing that pose point. Then, we represent a video as a bag of frame-wise deep face features while the number of features has been reduced from hundreds to K. Since the video representation can well represent the identity, now we measure the subject similarity between two videos as the max correlation among all possible pairs in the two bags of features. On the official 5,000 video-pairs of the YouTube Face dataset for face verification, our algorithm achieves a comparable performance with VGG-face that averages over deep features of all frames. Other vision tasks can also benefit from the generic idea of employing geometric cues to improve the descriptiveness of deep features.
Cosine similarity and correlation are both well-defined metrics for measuring the similarity of two images. A simple adaptation to videos would be to randomly sample a frame from each video. However, the correlation between two random image samples might characterize cues other than identity (say, the pose similarity). There are existing works on measuring the similarity of two videos using manifold-to-manifold distance @cite_3 . However, the straightforward extension of image-based correlation is preferred for its simplicity, such as temporal max or mean pooling @cite_0 . The impact of different spatial pooling methods in CNNs, such as mean pooling, max pooling and @math -2 pooling, has been discussed in the literature @cite_10 @cite_12 . However, pooling over the time domain is not as straightforward as spatial pooling. The frame-wise feature mean is a straightforward video-level representation, and yet not a robust statistic. Despite that, temporal mean pooling is the conventional way to represent a video, e.g., average pooling for video-level representation @cite_15 , mean encoding for face recognition @cite_8 , feature averaging for action recognition @cite_1 and mean pooling for video captioning @cite_7 .
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_1", "@cite_3", "@cite_0", "@cite_15", "@cite_10", "@cite_12" ], "mid": [ "2136036867", "", "2951183276", "", "2964184470", "", "2162931300", "2090042335" ], "abstract": [ "Solving the visual symbol grounding problem has long been a goal of artificial intelligence. The field appears to be advancing closer to this goal with recent breakthroughs in deep learning for natural language grounding in static images. In this paper, we propose to translate videos directly to sentences using a unified deep neural network with both convolutional and recurrent structure. Described video datasets are scarce, and most existing methods have been applied to toy domains with a small vocabulary of possible words. By transferring knowledge from 1.2M+ images with category labels and 100,000+ images with captions, our method is able to create sentence descriptions of open-domain videos with large vocabularies. We compare our approach with recent work using language generation metrics, subject, verb, and object prediction accuracy, and a human evaluation.", "", "Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\"' in that they can be compositional in spatial and temporal \"layers\". Such models may have advantages when target concepts are complex and or training data are limited. 
Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and or optimized.", "", "Recent studies have demonstrated the power of recurrent neural networks for machine translation, image captioning and speech recognition. For the task of capturing temporal structure in video, however, there still remain numerous open research questions. Current research suggests using a simple temporal feature pooling strategy to take into account the temporal aspect of video. We demonstrate that this method is not sufficient for gesture recognition, where temporal information is more discriminative compared to general video classification tasks. We explore deep architectures for gesture recognition in video and propose a new end-to-end trainable neural network architecture incorporating temporal convolutions and bidirectional recurrence. Our main contributions are twofold; first, we show that recurrence is crucial for this task; second, we show that adding temporal convolutions leads to significant improvements. 
We evaluate the different approaches on the Montalbano gesture recognition dataset, where we achieve state-of-the-art results.", "", "Many modern visual recognition algorithms incorporate a step of spatial 'pooling', where the outputs of several nearby feature detectors are combined into a local or global 'bag of features', in a way that preserves task-related information while removing irrelevant details. Pooling is used to achieve invariance to image transformations, more compact representations, and better robustness to noise and clutter. Several papers have shown that the details of the pooling operation can greatly influence the performance, but studies have so far been purely empirical. In this paper, we show that the reasons underlying the performance of various pooling methods are obscured by several confounding factors, such as the link between the sample cardinality in a spatial pool and the resolution at which low-level features have been extracted. We provide a detailed theoretical analysis of max pooling and average pooling, and give extensive empirical comparisons for object recognition tasks.", "Many successful models for scene or object recognition transform low-level descriptors (such as Gabor filter responses, or SIFT descriptors) into richer representations of intermediate complexity. This process can often be broken down into two steps: (1) a coding step, which performs a pointwise transformation of the descriptors into a representation better adapted to the task, and (2) a pooling step, which summarizes the coded features over larger neighborhoods. Several combinations of coding and pooling schemes have been proposed in the literature. The goal of this paper is threefold. 
We seek to establish the relative importance of each step of mid-level feature extraction through a comprehensive cross evaluation of several types of coding modules (hard and soft vector quantization, sparse coding) and pooling schemes (by taking the average, or the maximum), which obtains state-of-the-art performance or better on several recognition benchmarks. We show how to improve the best performing coding scheme by learning a supervised discriminative dictionary for sparse coding. We provide theoretical and empirical insight into the remarkable performance of max pooling. By teasing apart components shared by modern mid-level feature extractors, our approach aims to facilitate the design of better recognition architectures." ] }
1609.06831
2950909000
We propose an extension to Hawkes processes by treating the levels of self-excitation as a stochastic differential equation. Our new point process allows better approximation in application domains where events and intensities accelerate each other with correlated levels of contagion. We generalize a recent algorithm for simulating draws from Hawkes processes whose levels of excitation are stochastic processes, and propose a hybrid Markov chain Monte Carlo approach for model fitting. Our sampling procedure scales linearly with the number of required events and does not require stationarity of the point process. A modular inference procedure consisting of a combination of Gibbs and Metropolis-Hastings steps is put forward. We recover expectation maximization as a special case. Our general approach is illustrated for contagion following geometric Brownian motion and exponential Langevin dynamics.
proposed an EM inference algorithm for Hawkes processes and applied it to large social network datasets. Inspired by their latent variable set-up, we adapted some of their hidden-variable formulation, within the marked point process framework, to our fully Bayesian inference setting. We have leveraged ideas from previous work on self-exciting processes, consequently treating the levels of excitation as random processes. introduced a multivariate point process combining self-exciting (Hawkes) and external (Cox) flavors to study latent networks in the data. These processes have also been proposed and applied in analyzing topic diffusion and user interactions @cite_11 @cite_12 . put forth a temporal point process model with one intensity being modulated by the other. Bounds of self-exciting processes are also studied in @cite_4 . In contrast to these, we breathe another dimension into Hawkes processes by modeling the contagion parameters as a stochastic differential equation equipped with general procedures for learning. This allows much more latitude in parameterizing the self-exciting processes as a basic building block before incorporating wider families of processes.
{ "cite_N": [ "@cite_4", "@cite_12", "@cite_11" ], "mid": [ "2057139325", "2127434196", "2952347589" ], "abstract": [ "Due to its low computational cost, Lasso is an attractive regularization method for high-dimensional statistical settings. In this paper, we consider multivariate counting processes depending on an unknown function to be estimated by linear combinations of a fixed dictionary. To select coefficients, we propose an adaptive @math -penalization methodology, where data-driven weights of the penalty are derived from new Bernstein type inequalities for martingales. Oracle inequalities are established under assumptions on the Gram matrix of the dictionary. Non-asymptotic probabilistic results for multivariate Hawkes processes are proven, which allows us to check these assumptions by considering general dictionaries based on histograms, Fourier or wavelet bases. Motivated by problems of neuronal activities inference, we finally lead a simulation study for multivariate Hawkes processes and compare our methodology with the adaptive Lasso procedure proposed by Zou in Zou . We observe an excellent behavior of our procedure with respect to the problem of supports recovery. We rely on theoretical aspects for the essential question of tuning our methodology. Unlike adaptive Lasso of Zou , our tuning procedure is proven to be robust with respect to all the parameters of the problem, revealing its potential for concrete purposes, in particular in neuroscience.", "Diffusion network inference and meme tracking have been two key challenges in viral diffusion. This paper shows that these two tasks can be addressed simultaneously with a probabilistic model involving a mixture of mutually exciting point processes. A fast learning algorithms is developed based on mean-field variational inference with budgeted diffusion bandwidth. 
The model is demonstrated with applications to the diffusion of viral texts in (1) online social networks (e.g., Twitter) and (2) the blogosphere on the Web.", "Time plays an essential role in the diffusion of information, influence and disease over networks. In many cases we only observe when a node copies information, makes a decision or becomes infected -- but the connectivity, transmission rates between nodes and transmission sources are unknown. Inferring the underlying dynamics is of outstanding interest since it enables forecasting, influencing and retarding infections, broadly construed. To this end, we model diffusion processes as discrete networks of continuous temporal processes occurring at different rates. Given cascade data -- observed infection times of nodes -- we infer the edges of the global diffusion network and estimate the transmission rates of each edge that best explain the observed data. The optimization problem is convex. The model naturally (without heuristics) imposes sparse solutions and requires no parameter tuning. The problem decouples into a collection of independent smaller problems, thus scaling easily to networks on the order of hundreds of thousands of nodes. Experiments on real and synthetic data show that our algorithm both recovers the edges of diffusion networks and accurately estimates their transmission rates from cascade data." ] }
1609.06582
2950411628
Location data can be extremely useful to study commuting patterns and disruptions, as well as to predict real-time traffic volumes. At the same time, however, the fine-grained collection of user locations raises serious privacy concerns, as this can reveal sensitive information about the users, such as, life style, political and religious inclinations, or even identities. In this paper, we study the feasibility of crowd-sourced mobility analytics over aggregate location information: users periodically report their location, using a privacy-preserving aggregation protocol, so that the server can only recover aggregates -- i.e., how many, but not which, users are in a region at a given time. We experiment with real-world mobility datasets obtained from the Transport For London authority and the San Francisco Cabs network, and present a novel methodology based on time series modeling that is geared to forecast traffic volumes in regions of interest and to detect mobility anomalies in them. In the presence of anomalies, we also make enhanced traffic volume predictions by feeding our model with additional information from correlated regions. Finally, we present and evaluate a mobile app prototype, called Mobility Data Donors (MDD), in terms of computation, communication, and energy overhead, demonstrating the real-world deployability of our techniques.
Finally, @cite_39 propose FAST, an adaptive system for releasing real-time aggregate statistics with differential privacy. Their approach is based on a central authority that adaptively samples the time series according to detected data dynamics to minimize the overall privacy budget. They employ Kalman filtering to predict data at non-sampling points and to estimate the true values from perturbed ones at sampling points, in order to improve the accuracy of data release. In follow-up work @cite_26 , they present a generic differentially-private framework for anomaly detection on aggregate statistics, focusing on detecting epidemic outbreaks: real-time aggregate data is perturbed using FAST @cite_39 and released to an untrusted entity that performs the anomaly detection task. In contrast, we do not use differential privacy to protect users' privacy, as this would require the presence of a trusted aggregator and introduce a trade-off between privacy and utility that is challenging to tune.
{ "cite_N": [ "@cite_26", "@cite_39" ], "mid": [ "2068586682", "2171283104" ], "abstract": [ "Anomaly detection is an important problem that has been studied in a variety of application domains, ranging from syndrome surveillance for epidemic outbreaks to intrusion detection in computer networks. The data collected from individual users contain sensitive information, such as health records and network usage data, and thus need to be transformed prior to the release for privacy preservation. In this paper, we propose a novel framework for anomaly detection with differential privacy. Real-time private user data can be aggregated and perturbed to guarantee privacy, while the posterior estimate is released continuously for anomaly detection tasks. Our framework is not limited to any specific application domains. We illustrate the sensitivity analysis and evaluate our framework in the context of syndrome surveillance. Empirical results with simulated data sets confirm the effectiveness of our solution while providing provable privacy guarantee.", "Sharing real-time aggregate statistics of private data has given much benefit to the public to perform data mining for understanding important phenomena, such as Influenza outbreaks and traffic congestion. However, releasing time-series data with standard differential privacy mechanism has limited utility due to high correlation between data values. We propose FAST, an adaptive system to release real-time aggregate statistics under differential privacy with improved utility. To minimize overall privacy cost, FAST adaptively samples long time-series according to detected data dynamics. To improve the accuracy of data release per time stamp, filtering is used to predict data values at non-sampling points and to estimate true values from noisy observations at sampling points. 
Our experiments with three real data sets confirm that FAST improves the accuracy of time-series release and has excellent performance even under very small privacy cost." ] }
1609.06664
2522817691
Steganography is the discipline that deals with concealing the existence of secret communications. Existing research already provided several fundamentals for defining steganography and presented a multitude of hiding methods and countermeasures for this research discipline. We identified that no work exists that discusses the process of applying steganography from an individual's perspective. This paper presents a phase model that explains pre-conditions of applying steganography as well as the decision-making process and the final termination of a steganographic communication. The model can be used to explain whether an individual can use steganography and to explain whether and why an individual desires to use steganography. Moreover, the model can be used in research publications to indicate the addressed model's phase of scientific contributions. Furthermore, our model can be used to teach the process of steganography-application to students.
Simmons introduced his so-called Prisoner's Problem, which provides the fundamental scenario in which steganography is applied @cite_2 . In his scenario, two prisoners try to escape jail. For a successful escape, they need to work together. However, they cannot directly exchange messages. Instead, the only way to exchange messages is to hand over all messages to a so-called warden, who can read and modify the messages. The prisoners need to apply steganography so that they can plan their escape without letting the warden notice. Simmons describes a generalized case of steganography-application that comprises two important elements that we discuss in our work: the reason that leads to the application of steganography as well as the decision-making process of the prisoners. Modifications of the Prisoner's Problem exist, e.g. @cite_27 .
{ "cite_N": [ "@cite_27", "@cite_2" ], "mid": [ "2473397309", "1878907771" ], "abstract": [ "A novel class of covert channel, out-of-band covert channels, is presented by extending Simmons’ prisoners’ problem. This new class of covert channel is established by surveying the existing covert channel, device-pairing, and side-channel research. Terminology as well as a taxonomy for out-of-band covert channels is also given. Additionally, a more comprehensive adversarial model based on a knowledgeable passive adversary and a capable active adversary is proposed in place of the current adversarial model, which relies on an oblivious passive adversary. Last, general protection mechanisms are presented, and an argument for a general measure of “covertness” to effectively compare covert channels is given.", "Two accomplices in a crime have been arrested and are about to be locked in widely separated cells. Their only means of communication after they are locked up will he by way of messages conveyed for them by trustees -- who are known to be agents of the warden. The warden is willing to allow the prisoners to exchange messages in the hope that he can deceive at least one of them into accepting as a genuine communication from the other either a fraudulent message created by the warden himself or else a modification by him of a genuine message. However, since he has every reason to suspect that the prisoners want to coordinate an escape plan, the warden will only permit the exchanges to occur if the information contained in the messages is completely open to him -- and presumably innocuous. The prisoners, on the other hand, are willing to accept these conditions, i.e., to accept some risk of deception in order to be able to communicate at all, since they need to coordinate their plans. 
To do this they will have to deceive the warden by finding a way of communicating secretly in the exchanges, i.e., of establishing a “subliminal channel” between them in full view of the warden, even though the messages themselves contain no secret (to the warden) information‡. Since they anticipate that the warden will try to deceive them by introducing fraudulent messages, they will only exchange messages if they are permitted to authenticate them." ] }
1609.06988
2949982045
Many objects, especially those made by humans, are symmetric, e.g. cars and aeroplanes. This paper addresses the estimation of 3D structures of symmetric objects from multiple images of the same object category, e.g. different cars, seen from various viewpoints. We assume that the deformation between different instances from the same object category is non-rigid and symmetric. In this paper, we extend two leading non-rigid structure from motion (SfM) algorithms to exploit symmetry constraints. We formulate both methods as energy minimization, in which we also recover the missing observations caused by occlusions. In particular, we show that by rotating the coordinate system, the energy can be decoupled into two independent terms, which still exploit symmetry, allowing matrix factorization to be applied separately to each of them for initialization. The results on the Pascal3D+ dataset show that our methods significantly improve performance over baseline methods.
There is a long history of using symmetry as a cue for computer vision tasks. For example, symmetry has been used in depth recovery @cite_7 @cite_20 @cite_15 as well as in recognizing symmetric objects @cite_34 . Several geometric cues, including symmetry, planarity, orthogonality and parallelism, have been taken into account for 3D scene reconstruction @cite_19 @cite_16 , in which the authors used a camera rotation matrix pre-computed from vanishing points @cite_30 . Recently, symmetry has been applied in further areas such as 3D mesh reconstruction under occlusion @cite_6 and scene reconstruction @cite_17 . For 3D keypoint reconstruction, symmetry, combined with planarity and compactness priors, has also been studied in @cite_28 .
{ "cite_N": [ "@cite_30", "@cite_7", "@cite_28", "@cite_16", "@cite_6", "@cite_19", "@cite_15", "@cite_34", "@cite_20", "@cite_17" ], "mid": [ "2139018239", "", "2171543226", "2143450077", "", "", "1973016074", "2029528398", "2009751730", "2081709959" ], "abstract": [ "We present a method for reconstruction of structured scenes from one or more views, in which the user provides image points and geometric knowledge -coplanarity, ratios of distances, angles- about the corresponding 3D points. First, the geometric information is analyzed. Then vanishing points are estimated, from which camera calibration is obtained. Finally, an algebraic method gives the reconstruction. Our algebraic reconstruction method improves the present state-of-the-art in many aspects : geometric knowledge includes not only planarity and alignment information, but also known ratios of lengths. The single and multipleview cases are treated in the same way and the method detects whether the input data is sufficient to define a rigid reconstruction. We benchmark, using synthetic data, the various steps of the estimation process and show reconstructions obtained from real-world situations in which other methods would fail. We also present a new method for maximum likelihood estimation of vanishing points.", "", "We present a new algorithm for reconstructing 3D shapes. The algorithm takes one 2D image of a 3D shape and reconstructs the 3D shape by applying a priori constraints: symmetry, planarity and compactness. The shape is reconstructed without using information about the surfaces, such as shading, texture, binocular disparity or motion. Performance of the algorithm is illustrated on symmetric polyhedra, but the algorithm can be applied to a very wide range of shapes. 
Psychophysical plausibility of the algorithm is discussed.", "We present a method to reconstruct from one or more images a scene that is rich in planes, alignments, symmetries, orthogonalities, and other forms of geometrical regularity. Given image points of interest and some geometric information, the method recovers least-squares estimates of the 3D points, camera position(s), orientation(s), and eventually calibration(s). Our contributions lie (i) in a novel way of exploiting some types of symmetry and of geometric regularity, (ii) in treating indifferently one or more images, (iii) in a geometric test that indicates whether the input data uniquely defines a reconstruction, and (iv) a parameterization method for collections of 3D points subject to geometric constraints. Moreover, the reconstruction algorithm lends itself to sensitivity analysis. The method is benchmarked on synthetic data and its effectiveness is shown on real-world data.", "", "", "We investigate the constraints placed on the image projection of a planar object having local reflectional symmetry. Under the affine approximation to projection, we demonstrate an efficient (low-complexity) algorithm for detecting and verifying symmetries despite the distorting effects of image skewing. The symmetries are utilized for three distinct tasks: first, determining image back-projection up to a similarity transformation ambiguity; second, determining the object plane orientation (slant and tilt); and third, as a test for non-coplanarity amongst a collection of objects. These results are illustrated throughout with examples from images of real scenes.", "According to the 1.5-views theorem (Poggio, Technical Report #9005-03, IRST, Povo, 1990; Ullman and Basri, IEEE Trans. PAMI 13, 992-1006, 1991) recognition of a specific 3D object (defined in terms of pointwise features) from a novel 2D view can be achieved from at least two 2D model views (for each object, for orthographic projection). 
This note considers how recognition can be achieved from a single 2D model view by exploiting prior knowledge of an object's symmetry. It is proved that, for any bilaterally symmetric 3D object, one non-accidental 2D model view is sufficient for recognition since it can be used to generate additional 'virtual' views. It is also proved that, for bilaterally symmetric objects, the correspondence of four points between two views determines the correspondence of all other points. Symmetries of higher order allow the recovery of Euclidean structure from a single 2D view.1", "A new technique dramatically simplifies the analysis of matching and depth reconstruction by extracting three-dimensional rigid depth interpretation from pairwise comparisons of weak perspective projections. This method provides a simple linear criterion for testing the correctness of correspondence for a pair of images; the method also provides a description of a one-parameter family of interpretations for each pair of images that satisfies this criterion. We show that if at least three projections of a volumetric object are known, then a three-dimensional (3D) rigid interpretation can be inferred from pairwise comparisons between any one of these images and other images in the set. The 3D interpretation is derived from the intersection of corresponding one-parameter families. The method provides a common computational basis for different processes of depth perception, for example, depth-from-stereo and depth-from-motion. Thus, a single mechanism for these processes in the human visual system would be sufficient. The proposed method does not require information about relative positions of eye(s) or camera(s) for different projections, but this information can be easily incorporated. The method can be applied for pairwise comparison within a single image. If any nontrivial correspondence is found, then several views of the same object are present in the same image. 
This happens, for example, in views of volumetrically symmetric objects. Symmetry facilitates depth reconstruction; if an object possesses two or more symmetries, its depth can be reconstructed from a single image.", "In this paper, we provide a principled explanation of how knowledge in global 3-D structural invariants, typically captured by a group action on a symmetric structure, can dramatically facilitate the task of reconstructing a 3-D scene from one or more images. More importantly, since every symmetric structure admits a “canonical” coordinate frame with respect to which the group action can be naturally represented, the canonical pose between the viewer and this canonical frame can be recovered too, which explains why symmetric objects (e.g., buildings) provide us overwhelming clues to their orientation and position. We give the necessary and sufficient conditions in terms of the symmetry (group) admitted by a structure under which this pose can be uniquely determined. We also characterize, when such conditions are not satisfied, to what extent this pose can be recovered. We show how algorithms from conventional multiple-view geometry, after properly modified and extended, can be directly applied to perform such recovery, from all “hidden images” of one image of the symmetric structure. We also apply our results to a wide range of applications in computer vision and image processing such as camera self-calibration, image segmentation and global orientation, large baseline feature matching, image rendering and photo editing, as well as visual illusions (caused by symmetry if incorrectly assumed)." ] }
1609.06578
2005110322
Aspect-based opinion mining is widely applied to review data to aggregate or summarize opinions of a product, and the current state-of-the-art is achieved with Latent Dirichlet Allocation (LDA)-based models. Although social media data like tweets are laden with opinions, their "dirty" nature (as natural language) has discouraged researchers from applying LDA-based opinion models for product review mining. Tweets are often informal, unstructured and lacking labeled data such as categories and ratings, making it challenging for product opinion mining. In this paper, we propose an LDA-based opinion model named Twitter Opinion Topic Model (TOTM) for opinion mining and sentiment analysis. TOTM leverages hashtags, mentions, emoticons and strong sentiment words that are present in tweets in its discovery process. It improves opinion prediction by modeling the target-opinion interaction directly, thus discovering target-specific opinion words, neglected in existing approaches. Moreover, we propose a new formulation of incorporating sentiment prior information into a topic model, by utilizing an existing public sentiment lexicon. This is novel in that it learns and updates with the data. We conduct experiments on 9 million tweets on electronic products, and demonstrate the improved performance of TOTM in both quantitative evaluations and qualitative analysis. We show that aspect-based opinion analysis on a massive volume of tweets provides useful opinions on products.
Latent Dirichlet Allocation (LDA) is a topic model that has been extended by many for sentiment analysis. Notable examples based on LDA include the MaxEnt-LDA hybrid model @cite_16 , the Joint Sentiment Topic (JST) model @cite_45 , Multi-grain LDA (MG-LDA) @cite_25 , Interdependent LDA (ILDA) @cite_27 , the Aspect and Sentiment Unification Model (ASUM) @cite_15 and the Multi-Aspect Sentiment (MAS) model @cite_49 . The Topic-Sentiment Mixture (TSM) model @cite_28 performs sentiment analysis by utilizing the Multinomial distribution. These models perform aspect-based opinion analysis, and they have been successfully applied to review data from different domains, such as electronic product, hotel and restaurant reviews. The task of summarizing the reviews is also known as opinion aggregation.
{ "cite_N": [ "@cite_28", "@cite_27", "@cite_45", "@cite_49", "@cite_15", "@cite_16", "@cite_25" ], "mid": [ "2129294185", "2001587475", "2108420397", "2154970197", "2044429219", "2129604374", "2096110600" ], "abstract": [ "In this paper, we define the problem of topic-sentiment analysis on Weblogs and propose a novel probabilistic model to capture the mixture of topics and sentiments simultaneously. The proposed Topic-Sentiment Mixture (TSM) model can reveal the latent topical facets in a Weblog collection, the subtopics in the results of an ad hoc query, and their associated sentiments. It could also provide general sentiment models that are applicable to any ad hoc topics. With a specifically designed HMM structure, the sentiment models and topic models estimated with TSM can be utilized to extract topic life cycles and sentiment dynamics. Empirical experiments on different Weblog datasets show that this approach is effective for modeling the topic facets and sentiments and extracting their dynamics from Weblog collections. The TSM model is quite general; it can be applied to any text collections with a mixture of topics and sentiments, thus has many potential applications, such as search result summarization, opinion tracking, and user behavior prediction.", "Today, more and more product reviews become available on the Internet, e.g., product review forums, discussion groups, and Blogs. However, it is almost impossible for a customer to read all of the different and possibly even contradictory opinions and make an informed decision. Therefore, mining online reviews (opinion mining) has emerged as an interesting new research direction. Extracting aspects and the corresponding ratings is an important challenge in opinion mining. An aspect is an attribute or component of a product, e.g. 'screen' for a digital camera. It is common that reviewers use different words to describe an aspect (e.g. 'LCD', 'display', 'screen'). 
A rating is an intended interpretation of the user satisfaction in terms of numerical values. Reviewers usually express the rating of an aspect by a set of sentiments, e.g. 'blurry screen'. In this paper we present three probabilistic graphical models which aim to extract aspects and corresponding ratings of products from online reviews. The first two models extend standard PLSI and LDA to generate a rated aspect summary of product reviews. As our main contribution, we introduce Interdependent Latent Dirichlet Allocation (ILDA) model. This model is more natural for our task since the underlying probabilistic assumptions (interdependency between aspects and ratings) are appropriate for our problem domain. We conduct experiments on a real life dataset, Epinions.com, demonstrating the improved effectiveness of the ILDA model in terms of the likelihood of a held-out test set, and the accuracy of aspects and aspect ratings.", "Sentiment analysis or opinion mining aims to use automated tools to detect subjective information such as opinions, attitudes, and feelings expressed in text. This paper proposes a novel probabilistic modeling framework based on Latent Dirichlet Allocation (LDA), called joint sentiment topic model (JST), which detects sentiment and topic simultaneously from text. Unlike other machine learning approaches to sentiment classification which often require labeled corpora for classifier training, the proposed JST model is fully unsupervised. The model has been evaluated on the movie review dataset to classify the review sentiment polarity and minimum prior information have also been explored to further improve the sentiment classification accuracy. Preliminary experiments have shown promising results achieved by JST.", "Online reviews are often accompanied with numerical ratings provided by users for a set of service or product aspects. 
We propose a statistical model which is able to discover corresponding topics in text and extract textual evidence from reviews supporting each of these aspect ratings ‐ a fundamental problem in aspect-based sentiment summarization (Hu and Liu, 2004a). Our model achieves high accuracy, without any explicitly labeled data except the user provided opinion ratings. The proposed approach is general and can be used for segmentation in other applications where sequential data is accompanied with correlated signals.", "User-generated reviews on the Web contain sentiments about detailed aspects of products and services. However, most of the reviews are plain text and thus require much effort to obtain information about relevant details. In this paper, we tackle the problem of automatically discovering what aspects are evaluated in reviews and how sentiments for different aspects are expressed. We first propose Sentence-LDA (SLDA), a probabilistic generative model that assumes all words in a single sentence are generated from one aspect. We then extend SLDA to Aspect and Sentiment Unification Model (ASUM), which incorporates aspect and sentiment together to model sentiments toward different aspects. ASUM discovers pairs of aspect, sentiment which we call senti-aspects. We applied SLDA and ASUM to reviews of electronic devices and restaurants. The results show that the aspects discovered by SLDA match evaluative details of the reviews, and the senti-aspects found by ASUM capture important aspects that are closely coupled with a sentiment. The results of sentiment classification show that ASUM outperforms other generative models and comes close to supervised classification methods. One important advantage of ASUM is that it does not require any sentiment labels of the reviews, which are often expensive to obtain.", "Discovering and summarizing opinions from online reviews is an important and challenging task. 
A commonly-adopted framework generates structured review summaries with aspects and opinions. Recently topic models have been used to identify meaningful review aspects, but existing topic models do not identify aspect-specific opinion words. In this paper, we propose a MaxEnt-LDA hybrid model to jointly discover both aspects and aspect-specific opinion words. We show that with a relatively small amount of training data, our model can effectively identify aspect and opinion words simultaneously. We also demonstrate the domain adaptability of our model.", "In this paper we present a novel framework for extracting the ratable aspects of objects from online user reviews. Extracting such aspects is an important challenge in automatically mining product opinions from the web and in generating opinion-based summaries of user reviews [18, 19, 7, 12, 27, 36, 21]. Our models are based on extensions to standard topic modeling methods such as LDA and PLSA to induce multi-grain topics. We argue that multi-grain models are more appropriate for our task since standard models tend to produce topics that correspond to global properties of objects (e.g., the brand of a product type) rather than the aspects of an object that tend to be rated by a user. The models we present not only extract ratable aspects, but also cluster them into coherent topics, e.g., 'waitress' and 'bartender' are part of the same topic 'staff' for restaurants. This differentiates it from much of the previous work which extracts aspects through term frequency analysis with minimal clustering. We evaluate the multi-grain models both qualitatively and quantitatively to show that they improve significantly upon standard topic models." ] }
1609.06578
2005110322
Aspect-based opinion mining is widely applied to review data to aggregate or summarize opinions of a product, and the current state-of-the-art is achieved with Latent Dirichlet Allocation (LDA)-based model. Although social media data like tweets are laden with opinions, their "dirty" nature (as natural language) has discouraged researchers from applying LDA-based opinion model for product review mining. Tweets are often informal, unstructured and lacking labeled data such as categories and ratings, making it challenging for product opinion mining. In this paper, we propose an LDA-based opinion model named Twitter Opinion Topic Model (TOTM) for opinion mining and sentiment analysis. TOTM leverages hashtags, mentions, emoticons and strong sentiment words that are present in tweets in its discovery process. It improves opinion prediction by modeling the target-opinion interaction directly, thus discovering target specific opinion words, neglected in existing approaches. Moreover, we propose a new formulation of incorporating sentiment prior information into a topic model, by utilizing an existing public sentiment lexicon. This is novel in that it learns and updates with the data. We conduct experiments on 9 million tweets on electronic products, and demonstrate the improved performance of TOTM in both quantitative evaluations and qualitative analysis. We show that aspect-based opinion analysis on massive volume of tweets provides useful opinions on products.
Lexical information can be used to improve sentiment analysis. He @cite_33 used a sentiment lexicon to modify the priors of LDA for sentiment classification, though their approach relied on ad hoc constants. @cite_41 incorporated a lexical dictionary into a non-negative matrix tri-factorization model, using a simple rule-based polarity assignment. Refer to @cite_23 and @cite_43 for detailed reviews of lexicon-based methods in sentiment analysis. Instead of a lexicon, @cite_7 used seeded words as lexical priors for semi-supervised topic modeling.
{ "cite_N": [ "@cite_33", "@cite_7", "@cite_41", "@cite_43", "@cite_23" ], "mid": [ "2087045154", "1506246224", "2161228561", "2084046180", "1964613733" ], "abstract": [ "This article presents two novel approaches for incorporating sentiment prior knowledge into the topic model for weakly supervised sentiment analysis where sentiment labels are considered as topics. One is by modifying the Dirichlet prior for topic-word distribution (LDA-DP), the other is by augmenting the model objective function through adding terms that express preferences on expectations of sentiment labels of the lexicon words using generalized expectation criteria (LDA-GE). We conducted extensive experiments on English movie review data and multi-domain sentiment dataset as well as Chinese product reviews about mobile phones, digital cameras, MP3 players, and monitors. The results show that while both LDA-DP and LDA-GE perform comparably to existing weakly supervised sentiment classification algorithms, they are much simpler and computationally efficient, rendering them more suitable for online and real-time sentiment classification on the Web. We observed that LDA-GE is more effective than LDA-DP, suggesting that it should be preferred when considering employing the topic model for sentiment analysis. Moreover, both models are able to extract highly domain-salient polarity words from text.", "Topic models have great potential for helping users understand document corpora. This potential is stymied by their purely unsupervised nature, which often leads to topics that are neither entirely meaningful nor effective in extrinsic tasks (, 2009). We propose a simple and effective way to guide topic models to learn topics of specific interest to a user. We achieve this by providing sets of seed words that a user believes are representative of the underlying topics in a corpus. 
Our model uses these seeds to improve both topic-word distributions (by biasing topics to produce appropriate seed words) and to improve document-topic distributions (by biasing documents to select topics related to the seed words they contain). Extrinsic evaluation on a document clustering task reveals a significant improvement when using seed information, even over other models that use seed information naively.", "Sentiment classification refers to the task of automatically identifying whether a given piece of text expresses positive or negative opinion towards a subject at hand. The proliferation of user-generated web content such as blogs, discussion forums and online review sites has made it possible to perform large-scale mining of public opinion. Sentiment modeling is thus becoming a critical component of market intelligence and social media technologies that aim to tap into the collective wisdom of crowds. In this paper, we consider the problem of learning high-quality sentiment models with minimal manual supervision. We propose a novel approach to learn from lexical prior knowledge in the form of domain-independent sentiment-laden terms, in conjunction with domain-dependent unlabeled data and a few labeled documents. Our model is based on a constrained non-negative tri-factorization of the term-document matrix which can be implemented using simple update rules. Extensive experimental studies demonstrate the effectiveness of our approach on a variety of real-world sentiment prediction tasks.", "We present a lexicon-based approach to extracting sentiment from text. The Semantic Orientation CALculator (SO-CAL) uses dictionaries of words annotated with their semantic orientation (polarity and strength), and incorporates intensification and negation. SO-CAL is applied to the polarity classification task, the process of assigning a positive or negative label to a text that captures the text's opinion towards its main subject matter. 
We show that SO-CAL's performance is consistent across domains and in completely unseen data. Additionally, we describe the process of dictionary creation, and our use of Mechanical Turk to check dictionaries for consistency and reliability.", "One of the important types of information on the Web is the opinions expressed in the user generated content, e.g., customer reviews of products, forum posts, and blogs. In this paper, we focus on customer reviews of products. In particular, we study the problem of determining the semantic orientations (positive, negative or neutral) of opinions expressed on product features in reviews. This problem has many applications, e.g., opinion mining, summarization and search. Most existing techniques utilize a list of opinion (bearing) words (also called opinion lexicon) for the purpose. Opinion words are words that express desirable (e.g., great, amazing, etc.) or undesirable (e.g., bad, poor, etc) states. These approaches, however, all have some major shortcomings. In this paper, we propose a holistic lexicon-based approach to solving the problem by exploiting external evidences and linguistic conventions of natural language expressions. This approach allows the system to handle opinion words that are context dependent, which cause major difficulties for existing algorithms. It also deals with many special words, phrases and language constructs which have impacts on opinions based on their linguistic patterns. It also has an effective function for aggregating multiple conflicting opinion words in a sentence. A system, called Opinion Observer, based on the proposed technique has been implemented. Experimental results using a benchmark product review data set and some additional reviews show that the proposed technique is highly effective. It outperforms existing methods significantly" ] }
1609.06686
2523437799
Convolutional neural networks (CNNs) have demonstrated superior capability for extracting information from raw signals in computer vision. Recently, character-level and multi-channel CNNs have exhibited excellent performance for sentence classification tasks. We apply CNNs to large-scale authorship attribution, which aims to determine an unknown text's author among many candidate authors, motivated by their ability to process character-level signals and to differentiate between a large number of classes, while making fast predictions in comparison to state-of-the-art approaches. We extensively evaluate CNN-based approaches that leverage word and character channels and compare them against state-of-the-art methods for a large range of author numbers, shedding new light on traditional approaches. We show that character-level CNNs outperform the state-of-the-art on four out of five datasets in different domains. Additionally, we present the first application of authorship attribution to reddit.
Multi-channel architectures are pervasive in domains where the input can naturally be separated into different channels, e.g. color channels in computer vision or wavelengths in speech recognition @cite_12 . Natural language input is typically single-channel, in the form of tokens or characters. Kim observes that a static word channel is able to encode general semantic similarities, while a non-static channel can be fine-tuned to the task at hand and improves performance on some datasets.
{ "cite_N": [ "@cite_12" ], "mid": [ "1542280630" ], "abstract": [ "Standard deep neural network-based acoustic models for automatic speech recognition (ASR) rely on hand-engineered input features, typically log-mel filterbank magnitudes. In this paper, we describe a convolutional neural network - deep neural network (CNN-DNN) acoustic model which takes raw multichannel waveforms as input, i.e. without any preceding feature extraction, and learns a similar feature representation through supervised training. By operating directly in the time domain, the network is able to take advantage of the signal's fine time structure that is discarded when computing filterbank magnitude features. This structure is especially useful when analyzing multichannel inputs, where timing differences between input channels can be used to localize a signal in space. The first convolutional layer of the proposed model naturally learns a filterbank that is selective in both frequency and direction of arrival, i.e. a bank of bandpass beamformers with an auditory-like frequency scale. When trained on data corrupted with noise coming from different spatial locations, the network learns to filter them out by steering nulls in the directions corresponding to the noise sources. Experiments on a simulated multichannel dataset show that the proposed acoustic model outperforms a DNN that uses log-mel filterbank magnitude features under noisy and reverberant conditions." ] }
1609.06686
2523437799
Convolutional neural networks (CNNs) have demonstrated superior capability for extracting information from raw signals in computer vision. Recently, character-level and multi-channel CNNs have exhibited excellent performance for sentence classification tasks. We apply CNNs to large-scale authorship attribution, which aims to determine an unknown text's author among many candidate authors, motivated by their ability to process character-level signals and to differentiate between a large number of classes, while making fast predictions in comparison to state-of-the-art approaches. We extensively evaluate CNN-based approaches that leverage word and character channels and compare them against state-of-the-art methods for a large range of author numbers, shedding new light on traditional approaches. We show that character-level CNNs outperform the state-of-the-art on four out of five datasets in different domains. Additionally, we present the first application of authorship attribution to reddit.
Character-level CNNs have been shown to outperform traditional classification methods on large-scale datasets @cite_7 . These CNNs, however, require tens of thousands of per-class examples and thousands of training epochs, while our datasets only contain a few hundred examples per class.
{ "cite_N": [ "@cite_7" ], "mid": [ "1938755728" ], "abstract": [ "We describe a simple neural language model that relies only on character-level inputs. Predictions are still made at the word-level. Our model employs a convolutional neural network (CNN) and a highway network over characters, whose output is given to a long short-term memory (LSTM) recurrent neural network language model (RNN-LM). On the English Penn Treebank the model is on par with the existing state-of-the-art despite having 60 fewer parameters. On languages with rich morphology (Arabic, Czech, French, German, Spanish, Russian), the model outperforms word-level morpheme-level LSTM baselines, again with fewer parameters. The results suggest that on many languages, character inputs are sufficient for language modeling. Analysis of word representations obtained from the character composition part of the model reveals that the model is able to encode, from characters only, both semantic and orthographic information." ] }
1609.06686
2523437799
Convolutional neural networks (CNNs) have demonstrated superior capability for extracting information from raw signals in computer vision. Recently, character-level and multi-channel CNNs have exhibited excellent performance for sentence classification tasks. We apply CNNs to large-scale authorship attribution, which aims to determine an unknown text's author among many candidate authors, motivated by their ability to process character-level signals and to differentiate between a large number of classes, while making fast predictions in comparison to state-of-the-art approaches. We extensively evaluate CNN-based approaches that leverage word and character channels and compare them against state-of-the-art methods for a large range of author numbers, shedding new light on traditional approaches. We show that character-level CNNs outperform the state-of-the-art on four out of five datasets in different domains. Additionally, we present the first application of authorship attribution to reddit.
Authorship attribution is the task of identifying an unknown text's author among a set of candidate authors, with applications ranging from plagiarism detection to forensic linguistics. The key notion behind statistical authorship attribution is that measuring textual features enables distinction between texts written by different authors @cite_14 . These features range from indicators of content divergence between authors, such as bag-of-words, to stylometric features that reflect an author's unique writing patterns, e.g. use of punctuation marks, emoticons, and whitespace @cite_5 , and character and word n-grams @cite_8 .
{ "cite_N": [ "@cite_5", "@cite_14", "@cite_8" ], "mid": [ "2295585256", "2126631960", "" ], "abstract": [ "Character n-grams have been identified as the most successful feature in both singledomain and cross-domain Authorship Attribution (AA), but the reasons for their discriminative value were not fully understood. We identify subgroups of charactern-grams that correspond to linguistic aspects commonly claimed to be covered by these features: morphosyntax, thematic content and style. We evaluate the predictiveness of each of these groups in two AA settings: a single domain setting and a cross-domain setting where multiple topics are present. We demonstrate that characterngrams that capture information about affixes and punctuation account for almost all of the power of character n-grams as features. Our study contributes new insights into the use of n-grams for future AA work and other classification tasks.", "Authorship attribution supported by statistical or computational methods has a long history starting from the 19th century and is marked by the seminal study of Mosteller and Wallace (1964) on the authorship of the disputed “Federalist Papers.” During the last decade, this scientific field has been developed substantially, taking advantage of research advances in areas such as machine learning, information retrieval, and natural language processing. The plethora of available electronic texts (e.g., e-mail messages, online forum messages, blogs, source code, etc.) indicates a wide variety of applications of this technology, provided it is able to handle short and noisy text from multiple candidate authors. In this article, a survey of recent advances of the automated approaches to attributing authorship is presented, examining their characteristics for both text representation and text classification. The focus of this survey is on computational requirements and settings rather than on linguistic or literary issues. 
We also discuss evaluation methodologies and criteria for authorship attribution studies and list open questions that will attract future work in this area. © 2009 Wiley Periodicals, Inc.", "" ] }
1609.06686
2523437799
Convolutional neural networks (CNNs) have demonstrated superior capability for extracting information from raw signals in computer vision. Recently, character-level and multi-channel CNNs have exhibited excellent performance for sentence classification tasks. We apply CNNs to large-scale authorship attribution, which aims to determine an unknown text's author among many candidate authors, motivated by their ability to process character-level signals and to differentiate between a large number of classes, while making fast predictions in comparison to state-of-the-art approaches. We extensively evaluate CNN-based approaches that leverage word and character channels and compare them against state-of-the-art methods for a large range of author numbers, shedding new light on traditional approaches. We show that character-level CNNs outperform the state-of-the-art on four out of five datasets in different domains. Additionally, we present the first application of authorship attribution to reddit.
Deep learning research has largely neglected authorship attribution; related work has instead focused on modeling an author's style, e.g., conditioning word embeddings on attributes such as style, predicting an author's age, gender, and industry, or transforming image captions into book sentences by subtracting the 'style'. State-of-the-art authorship attribution algorithms have to handle possibly thousands of candidate authors and a limited number of examples per author in real-world applications, but they require CPU-days for prediction as they calculate pairwise distances between feature subsets @cite_19 . Simultaneously, character n-grams have proven to be the single most successful feature @cite_18 . Finally, prior evaluations compare traditional approaches on small datasets, while we evaluate state-of-the-art as well as CNN-based methods for thousands of authors, thereby moving a step closer to the goal of authorship attribution at web scale.
{ "cite_N": [ "@cite_19", "@cite_18" ], "mid": [ "1964642694", "2949433733" ], "abstract": [ "Most previous work on authorship attribution has focused on the case in which we need to attribute an anonymous document to one of a small set of candidate authors. In this paper, we consider authorship attribution as found in the wild: the set of known candidates is extremely large (possibly many thousands) and might not even include the actual author. Moreover, the known texts and the anonymous texts might be of limited length. We show that even in these difficult cases, we can use similarity-based methods along with multiple randomized feature sets to achieve high precision. Moreover, we show the precise relationship between attribution precision and four parameters: the size of the candidate set, the quantity of known-text by the candidates, the length of the anonymous text and a certain robustness score associated with a attribution.", "Books are a rich source of both fine-grained information, how a character, an object or a scene looks like, as well as high-level semantics, what someone is thinking, feeling and how these states evolve through a story. This paper aims to align books to their movie releases in order to provide rich descriptive explanations for visual content that go semantically far beyond the captions available in current datasets. To align movies and books we exploit a neural sentence embedding that is trained in an unsupervised way from a large corpus of books, as well as a video-text neural embedding for computing similarities between movie clips and sentences in the book. We propose a context-aware CNN to combine information from multiple sources. We demonstrate good quantitative performance for movie book alignment and show several qualitative examples that showcase the diversity of tasks our model can be used for." ] }
1609.06657
2521524646
Visual Question Answering (VQA) task has showcased a new stage of interaction between language and vision, two of the most pivotal components of artificial intelligence. However, it has mostly focused on generating short and repetitive answers, mostly single words, which fall short of rich linguistic capabilities of humans. We introduce Full-Sentence Visual Question Answering (FSVQA) dataset, consisting of nearly 1 million pairs of questions and full-sentence answers for images, built by applying a number of rule-based natural language processing techniques to original VQA dataset and captions in the MS COCO dataset. This poses many additional complexities to conventional VQA task, and we provide a baseline for approaching and evaluating the task, on top of which we invite the research community to build further improvements.
A number of datasets on visual question answering have been introduced in recent years @cite_20 @cite_4 , among which @cite_24 in particular has gained the most attention and helped popularize the task. However, these datasets mostly consist of a small set of answers that cover most of the questions, with most answers being a single word. Our FSVQA dataset, derived from @cite_24 , mitigates this limitation by converting the answers to full sentences, thus widely expanding the set of answers.
{ "cite_N": [ "@cite_24", "@cite_4", "@cite_20" ], "mid": [ "2950761309", "2949218037", "300525892" ], "abstract": [ "We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing 0.25M images, 0.76M questions, and 10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines and methods for VQA are provided and compared with human performance. Our VQA demo is available on CloudCV (this http URL).", "This work aims to address the problem of image-based question-answering (QA) with new models and datasets. In our work, we propose to use neural networks and visual semantic embeddings, without intermediate stages such as object detection and image segmentation, to predict answers to simple questions about images. Our model performs 1.8 times better than the only published results on an existing image QA dataset. We also present a question generation algorithm that converts image descriptions, which are widely available, into QA form. We used this algorithm to produce an order-of-magnitude larger dataset, with more evenly distributed answers. 
A suite of baseline results on this new dataset are also presented.", "As language and visual understanding by machines progresses rapidly, we are observing an increasing interest in holistic architectures that tightly interlink both modalities in a joint learning and inference process. This trend has allowed the community to progress towards more challenging and open tasks and refueled the hope at achieving the old AI dream of building machines that could pass a turing test in open domains. In order to steadily make progress towards this goal, we realize that quantifying performance becomes increasingly difficult. Therefore we ask how we can precisely define such challenges and how we can evaluate different algorithms on this open tasks? In this paper, we summarize and discuss such challenges as well as try to give answers where appropriate options are available in the literature. We exemplify some of the solutions on a recently presented dataset of question-answering task based on real-world indoor images that establishes a visual turing challenge. Finally, we argue despite the success of unique ground-truth annotation, we likely have to step away from carefully curated dataset and rather rely on 'social consensus' as the main driving force to create suitable benchmarks. Providing coverage in this inherently ambiguous output space is an emerging challenge that we face in order to make quantifiable progress in this area." ] }
1609.06657
2521524646
Visual Question Answering (VQA) task has showcased a new stage of interaction between language and vision, two of the most pivotal components of artificial intelligence. However, it has mostly focused on generating short and repetitive answers, mostly single words, which fall short of rich linguistic capabilities of humans. We introduce Full-Sentence Visual Question Answering (FSVQA) dataset, consisting of nearly 1 million pairs of questions and full-sentence answers for images, built by applying a number of rule-based natural language processing techniques to original VQA dataset and captions in the MS COCO dataset. This poses many additional complexities to conventional VQA task, and we provide a baseline for approaching and evaluating the task, on top of which we invite the research community to build further improvements.
@cite_3 was one of the first to propose an attention model for VQA. They proposed stacked attention networks (SANs) that utilize question representations to search for the most relevant regions in the image. @cite_5 also built an attention-based model, which optimizes the network by minimizing the joint loss from all answering units. They further proposed an early-stopping strategy in which overfitting units are disregarded during training.
{ "cite_N": [ "@cite_5", "@cite_3" ], "mid": [ "2439787475", "2171810632" ], "abstract": [ "We propose a novel algorithm for visual question answering based on a recurrent deep neural network, where every module in the network corresponds to a complete answering unit with attention mechanism by itself. The network is optimized by minimizing loss aggregated from all the units, which share model parameters while receiving different information to compute attention probability. For training, our model attends to a region within image feature map, updates its memory based on the question and attended image feature, and answers the question based on its memory state. This procedure is performed to compute loss in each step. The motivation of this approach is our observation that multi-step inferences are often required to answer questions while each problem may have a unique desirable number of steps, which is difficult to identify in practice. Hence, we always make the first unit in the network solve problems, but allow it to learn the knowledge from the rest of units by backpropagation unless it degrades the model. To implement this idea, we early-stop training each unit as soon as it starts to overfit. Note that, since more complex models tend to overfit on easier questions quickly, the last answering unit in the unfolded recurrent neural network is typically killed first while the first one remains last. We make a single-step prediction for a new question using the shared model. This strategy works better than the other options within our framework since the selected model is trained effectively from all units without overfitting. The proposed algorithm outperforms other multi-step attention based approaches using a single step prediction in VQA dataset.", "This paper presents stacked attention networks (SANs) that learn to answer natural language questions from images. 
SANs use semantic representation of a question as query to search for the regions in an image that are related to the answer. We argue that image question answering (QA) often requires multiple steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively. Experiments conducted on four image QA data sets demonstrate that the proposed SANs significantly outperform previous state-of-the-art approaches. The visualization of the attention layers illustrates the progress that the SAN locates the relevant visual clues that lead to the answer of the question layer-by-layer." ] }
1609.06657
2521524646
Visual Question Answering (VQA) task has showcased a new stage of interaction between language and vision, two of the most pivotal components of artificial intelligence. However, it has mostly focused on generating short and repetitive answers, mostly single words, which fall short of rich linguistic capabilities of humans. We introduce Full-Sentence Visual Question Answering (FSVQA) dataset, consisting of nearly 1 million pairs of questions and full-sentence answers for images, built by applying a number of rule-based natural language processing techniques to original VQA dataset and captions in the MS COCO dataset. This poses many additional complexities to conventional VQA task, and we provide a baseline for approaching and evaluating the task, on top of which we invite the research community to build further improvements.
@cite_10 argued that question attention is as important as visual attention, and thus proposed a co-attention model that jointly decides where to attend visually and linguistically. @cite_13 introduced the multimodal residual network (MRN), which uses element-wise multiplication for joint residual learning of attention models.
{ "cite_N": [ "@cite_13", "@cite_10" ], "mid": [ "2412393473", "2963668159" ], "abstract": [ "Deep neural networks continue to advance the state-of-the-art of image recognition tasks with various methods. However, applications of these methods to multimodality remain limited. We present Multimodal Residual Networks (MRN) for the multimodal residual learning of visual question-answering, which extends the idea of the deep residual learning. Unlike the deep residual learning, MRN effectively learns the joint representation from vision and language information. The main idea is to use element-wise multiplication for the joint residual mappings exploiting the residual learning of the attentional models in recent studies. Various alternative models introduced by multimodality are explored based on our study. We achieve the state-of-the-art results on the Visual QA dataset for both Open-Ended and Multiple-Choice tasks. Moreover, we introduce a novel method to visualize the attention effect of the joint representations for each learning block using back-propagation algorithm, even though the visual features are collapsed without spatial information.", "A number of recent works have proposed attention models for Visual Question Answering (VQA) that generate spatial maps highlighting image regions relevant to answering the question. In this paper, we argue that in addition to modeling \"where to look\" or visual attention, it is equally important to model \"what words to listen to\" or question attention. We present a novel co-attention model for VQA that jointly reasons about image and question attention. In addition, our model reasons about the question (and consequently the image via the co-attention mechanism) in a hierarchical fashion via a novel 1-dimensional convolution neural networks (CNN). Our model improves the state-of-the-art on the VQA dataset from 60.3 to 60.5 , and from 61.6 to 63.3 on the COCO-QA dataset. 
By using ResNet, the performance is further improved to 62.1 for VQA and 65.4 for COCO-QA." ] }
1609.06290
2522155987
We study linear time model checking of collapsible higher-order pushdown systems (CPDS) of order 2 (manipulating stack of stacks) against MSO and PDL (propositional dynamic logic with converse and loop) enhanced with push pop matching relations. To capture these linear time behaviours with matchings, we propose order-2 nested words. These graphs consist of a word structure augmented with two binary matching relations, one for each order of stack, which relate a push with matching pops (or collapse) on the respective stack. Due to the matching relations, satisfiability and model checking are undecidable. Hence we propose an under-approximation, bounding the number of times an order-1 push can be popped. With this under-approximation, which still allows unbounded stack height, we get decidability for satisfiability and model checking of both MSO and PDL. The problems are ExpTime-Complete for PDL.
In @cite_24 , Broadbent studies nested structures of order-2 HOPDS. A suffix rewrite system that rewrites nested words is used to capture the graph of the @math -closure of an order-2 HOPDS. The objective of that paper, as well as its use of nested words, differs from ours.
{ "cite_N": [ "@cite_24" ], "mid": [ "113452472" ], "abstract": [ "We introduce two natural variants of prefix rewriting on nested-words. One captures precisely the transition graphs of order-2 pushdown automata and the other precisely those of order-2 collapsible pushdown automata (2-CPDA). To our knowledge this is the first precise ‘external' characterisation of 2-CPDA graphs and demonstrates that the class is robust and hence interesting in its own right. The comparison with our characterisation for 2-PDA graphs also gives an idea of what ‘collapse means' in terms outside of higher-order automata theory. Additionally, a related construction gives us a decidability result for first-order logic on a natural subclass of 3-CPDA graphs, which in some sense is optimal." ] }
1609.06290
2522155987
We study linear time model checking of collapsible higher-order pushdown systems (CPDS) of order 2 (manipulating stack of stacks) against MSO and PDL (propositional dynamic logic with converse and loop) enhanced with push pop matching relations. To capture these linear time behaviours with matchings, we propose order-2 nested words. These graphs consist of a word structure augmented with two binary matching relations, one for each order of stack, which relate a push with matching pops (or collapse) on the respective stack. Due to the matching relations, satisfiability and model checking are undecidable. Hence we propose an under-approximation, bounding the number of times an order-1 push can be popped. With this under-approximation, which still allows unbounded stack height, we get decidability for satisfiability and model checking of both MSO and PDL. The problems are ExpTime-Complete for PDL.
Nested words are closely related to nested trees. A nested tree @cite_26 @cite_20 is a tree with an additional binary relation such that every branch forms a well-nested word @cite_27 . It provides a "visible" representation of the branching behaviour of a pushdown system.
{ "cite_N": [ "@cite_27", "@cite_26", "@cite_20" ], "mid": [ "2131886132", "2135909702", "" ], "abstract": [ "We propose the model of nested words for representation of data with both a linear ordering and a hierarchically nested matching of items. Examples of data with such dual linear-hierarchical structure include executions of structured programs, annotated linguistic data, and HTML XML documents. Nested words generalize both words and ordered trees, and allow both word and tree operations. We define nested word automata—finite-state acceptors for nested words, and show that the resulting class of regular languages of nested words has all the appealing theoretical properties that the classical regular word languages enjoys: deterministic nested word automata are as expressive as their nondeterministic counterparts; the class is closed under union, intersection, complementation, concatenation, Kleene-*, prefixes, and language homomorphisms; membership, emptiness, language inclusion, and language equivalence are all decidable; and definability in monadic second order logic corresponds exactly to finite-state recognizability. We also consider regular languages of infinite nested words and show that the closure properties, MSO-characterization, and decidability of decision problems carry over. The linear encodings of nested words give the class of visibly pushdown languages of words, and this class lies between balanced languages and deterministic context-free languages. We argue that for algorithmic verification of structured programs, instead of viewing the program as a context-free language over words, one should view it as a regular language of nested words (or equivalently, a visibly pushdown language), and this would allow model checking of many properties (such as stack inspection, pre-post conditions) that are not expressible in existing specification logics. 
We also study the relationship between ordered trees and nested words, and the corresponding automata: while the analysis complexity of nested word automata is the same as that of classical tree automata, they combine both bottom-up and top-down traversals, and enjoy expressiveness and succinctness benefits over tree automata.", "We study languages of nested trees—structures obtained by augmenting trees with sets of nested jump-edges. These graphs can naturally model branching behaviors of pushdown programs, so that the problem of branching-time software model checking may be phrased as a membership question for such languages. We define finite-state automata accepting such languages—these automata can pass states along jump-edges as well as tree edges. We find that the model-checking problem for these automata on pushdown systems is EXPTIME-complete, and that their alternating versions are expressively equivalent to NT-μ, a recently proposed temporal logic for nested trees that can express a variety of branching-time, “context-free” requirements. We also show that monadic second order logic (MSO) cannot exploit the structure: MSO on nested trees is too strong in the sense that it has an undecidable model checking problem, and seems too weak to capture NT-μ.", "" ] }
1609.06532
2341146251
Bibliographic analysis considers the author's research areas, the citation network and the paper content among other things. In this paper, we combine these three in a topic model that produces a bibliographic model of authors, topics and documents, using a non-parametric extension of a combination of the Poisson mixed-topic link model and the author-topic model. This gives rise to the Citation Network Topic Model (CNTM). We propose a novel and efficient inference algorithm for the CNTM to explore subsets of research publications from CiteSeer @math X. The publication datasets are organised into three corpora, totalling to about 168k publications with about 62k authors. The queried datasets are made available online. In three publicly available corpora in addition to the queried datasets, our proposed model demonstrates an improved performance in both model fitting and document clustering, compared to several baselines. Moreover, our model allows extraction of additional useful knowledge from the corpora, such as the visualisation of the author-topics network. Additionally, we propose a simple method to incorporate supervision into topic modelling to achieve further improvement on the clustering task.
Latent Dirichlet Allocation (LDA) is the simplest Bayesian topic model used in modelling text, and it also allows easy learning of the model. @cite_3 proposed the Hierarchical Dirichlet process (HDP) LDA, which utilises the Dirichlet process (DP) as a nonparametric prior, allowing a non-symmetric, arbitrary-dimensional topic prior to be used. Furthermore, one can replace the Dirichlet prior on the word vectors with the Pitman-Yor process (PYP, also known as the two-parameter Poisson-Dirichlet process), which models the power-law of word frequency distributions in natural language, yielding significant improvements.
{ "cite_N": [ "@cite_3" ], "mid": [ "166614460" ], "abstract": [ "Hierarchical modeling is a fundamental concept in Bayesian statistics. The basic idea is that parameters are endowed with distributions which may themselves introduce new parameters, and this construction recurses. In this review we discuss the role of hierarchical modeling in Bayesian nonparametrics, focusing on models in which the infinite-dimensional parameters are treated hierarchically. For example, we consider a model in which the base measure for a Dirichlet process is itself treated as a draw from another Dirichlet process. This yields a natural recursion that we refer to as a hierarchical Dirichlet process. We also discuss hierarchies based on the Pitman-Yor process and on completely random processes. We demonstrate the value of these hierarchical constructions in a wide range of practical applications, in problems in computational biology, computer vision and natural language processing." ] }
1609.06532
2341146251
Bibliographic analysis considers the author's research areas, the citation network and the paper content among other things. In this paper, we combine these three in a topic model that produces a bibliographic model of authors, topics and documents, using a non-parametric extension of a combination of the Poisson mixed-topic link model and the author-topic model. This gives rise to the Citation Network Topic Model (CNTM). We propose a novel and efficient inference algorithm for the CNTM to explore subsets of research publications from CiteSeer @math X. The publication datasets are organised into three corpora, totalling to about 168k publications with about 62k authors. The queried datasets are made available online. In three publicly available corpora in addition to the queried datasets, our proposed model demonstrates an improved performance in both model fitting and document clustering, compared to several baselines. Moreover, our model allows extraction of additional useful knowledge from the corpora, such as the visualisation of the author-topics network. Additionally, we propose a simple method to incorporate supervision into topic modelling to achieve further improvement on the clustering task.
Variants of LDA allow incorporating more aspects of a particular task, and here we consider authorship and citation information. The author-topic model (ATM) uses authorship information to restrict topic options based on the author. Some recent work jointly models the document citation network and text content; this includes, among others, the Poisson mixed-topic link model (PMTLM). An extensive review of these models can be found in @cite_8 . The citation-author-topic (CAT) model models the author-author network on publications based on citations using an extension of the ATM. Note that our work differs from CAT in that we model the author-document-citation network instead of the author-author network.
{ "cite_N": [ "@cite_8" ], "mid": [ "1981467090" ], "abstract": [ "Many data sets contain rich information about objects, as well as pairwise relations between them. For instance, in networks of websites, scientific papers, and other documents, each node has content consisting of a collection of words, as well as hyperlinks or citations to other nodes. In order to perform inference on such data sets, and make predictions and recommendations, it is useful to have models that are able to capture the processes which generate the text at each node and the links between them. In this paper, we combine classic ideas in topic modeling with a variant of the mixed-membership block model recently developed in the statistical physics community. The resulting model has the advantage that its parameters, including the mixture of topics of each document and the resulting overlapping communities, can be inferred with a simple and scalable expectation-maximization algorithm. We test our model on three data sets, performing unsupervised topic classification and link prediction. For both tasks, our model outperforms several existing state-of-the-art methods, achieving higher accuracy with significantly less computation, analyzing a data set with 1.3 million words and 44 thousand links in a few minutes." ] }
1609.06532
2341146251
Bibliographic analysis considers the author's research areas, the citation network and the paper content among other things. In this paper, we combine these three in a topic model that produces a bibliographic model of authors, topics and documents, using a non-parametric extension of a combination of the Poisson mixed-topic link model and the author-topic model. This gives rise to the Citation Network Topic Model (CNTM). We propose a novel and efficient inference algorithm for the CNTM to explore subsets of research publications from CiteSeer @math X. The publication datasets are organised into three corpora, totalling to about 168k publications with about 62k authors. The queried datasets are made available online. In three publicly available corpora in addition to the queried datasets, our proposed model demonstrates an improved performance in both model fitting and document clustering, compared to several baselines. Moreover, our model allows extraction of additional useful knowledge from the corpora, such as the visualisation of the author-topics network. Additionally, we propose a simple method to incorporate supervision into topic modelling to achieve further improvement on the clustering task.
Some work jointly models author and text by using the distance between the document and author topic vectors. Similarly, the Twitter-Network topic model models the author network (here, the Twitter follower network) based on author topic distributions, but uses a Gaussian process to model the network. Note that our work considers the author-document-citation network of @cite_9 . We use the PMTLM of @cite_8 to model the network, which lets one integrate PYP hierarchies with the PMTLM using efficient MCMC sampling.
{ "cite_N": [ "@cite_9", "@cite_8" ], "mid": [ "2130978632", "1981467090" ], "abstract": [ "Given a large-scale linked document collection, such as a collection of blog posts or a research literature archive, there are two fundamental problems that have generated a lot of interest in the research community. One is to identify a set of high-level topics covered by the documents in the collection; the other is to uncover and analyze the social network of the authors of the documents. So far these problems have been viewed as separate problems and considered independently from each other. In this paper we argue that these two problems are in fact inter-dependent and should be addressed together. We develop a Bayesian hierarchical approach that performs topic modeling and author community discovery in one unified framework. The effectiveness of our model is demonstrated on two blog data sets in different domains and one research paper citation data from CiteSeer.", "Many data sets contain rich information about objects, as well as pairwise relations between them. For instance, in networks of websites, scientific papers, and other documents, each node has content consisting of a collection of words, as well as hyperlinks or citations to other nodes. In order to perform inference on such data sets, and make predictions and recommendations, it is useful to have models that are able to capture the processes which generate the text at each node and the links between them. In this paper, we combine classic ideas in topic modeling with a variant of the mixed-membership block model recently developed in the statistical physics community. The resulting model has the advantage that its parameters, including the mixture of topics of each document and the resulting overlapping communities, can be inferred with a simple and scalable expectation-maximization algorithm. We test our model on three data sets, performing unsupervised topic classification and link prediction. 
For both tasks, our model outperforms several existing state-of-the-art methods, achieving higher accuracy with significantly less computation, analyzing a data set with 1.3 million words and 44 thousand links in a few minutes." ] }
1609.06532
2341146251
Bibliographic analysis considers the author's research areas, the citation network and the paper content among other things. In this paper, we combine these three in a topic model that produces a bibliographic model of authors, topics and documents, using a non-parametric extension of a combination of the Poisson mixed-topic link model and the author-topic model. This gives rise to the Citation Network Topic Model (CNTM). We propose a novel and efficient inference algorithm for the CNTM to explore subsets of research publications from CiteSeer @math X. The publication datasets are organised into three corpora, totalling to about 168k publications with about 62k authors. The queried datasets are made available online. In three publicly available corpora in addition to the queried datasets, our proposed model demonstrates an improved performance in both model fitting and document clustering, compared to several baselines. Moreover, our model allows extraction of additional useful knowledge from the corpora, such as the visualisation of the author-topics network. Additionally, we propose a simple method to incorporate supervision into topic modelling to achieve further improvement on the clustering task.
There is also existing work on analysing the degree of authors' influence. On publication data, @cite_2 and @cite_7 analyse influential authors with topic models, while @cite_1 , @cite_10 , and @cite_6 use topic models to analyse users' influence on social media.
{ "cite_N": [ "@cite_7", "@cite_1", "@cite_6", "@cite_2", "@cite_10" ], "mid": [ "2166463728", "2076219102", "2018984973", "2129028998", "2107559689" ], "abstract": [ "When browsing a digital library of research papers, it is natural to ask which authors are most influential in a particular topic. We present a probabilistic model that ranks authors based on their influence in particular areas of scientific research. This model combines several sources of information: citation information between documents as represented by PageRank scores, authorship data gathered through automatic information extraction, and the words in paper abstracts. We compare the performance of a topic model versus a smoothed language model by assessing the number of major award winners in the resulting ranked list of researchers.", "This paper focuses on the problem of identifying influential users of micro-blogging services. Twitter, one of the most notable micro-blogging services, employs a social-networking model called \"following\", in which each user can choose who she wants to \"follow\" to receive tweets from without requiring the latter to give permission first. In a dataset prepared for this study, it is observed that (1) 72.4 of the users in Twitter follow more than 80 of their followers, and (2) 80.5 of the users have 80 of users they are following follow them back. Our study reveals that the presence of \"reciprocity\" can be explained by phenomenon of homophily. Based on this finding, TwitterRank, an extension of PageRank algorithm, is proposed to measure the influence of users in Twitter. TwitterRank measures the influence taking both the topical similarity between users and the link structure into account. 
Experimental results show that TwitterRank outperforms the one Twitter currently uses and other related algorithms, including the original PageRank and Topic-sensitive PageRank.", "Influence is a complex and subtle force that governs the dynamics of social networks as well as the behaviors of involved users. Understanding influence can benefit various applications such as viral marketing, recommendation, and information retrieval. However, most existing works on social influence analysis have focused on verifying the existence of social influence. Few works systematically investigate how to mine the strength of direct and indirect influence between nodes in heterogeneous networks. To address the problem, we propose a generative graphical model which utilizes the heterogeneous link information and the textual content associated with each node in the network to mine topic-level direct influence. Based on the learned direct influence, a topic-level influence propagation and aggregation algorithm is proposed to derive the indirect influence between nodes. We further study how the discovered topic-level influence can help the prediction of user behaviors. We validate the approach on three different genres of data sets: Twitter, Digg, and citation networks. Qualitatively, our approach can discover interesting influence patterns in heterogeneous networks. Quantitatively, the learned topic-level influence can greatly improve the accuracy of user behavior prediction.", "In a document network such as a citation network of scientific documents, web-logs, etc., the content produced by authors exhibits their interest in certain topics. In addition some authors influence other authors' interests. In this work, we propose to model the influence of cited authors along with the interests of citing authors. 
Moreover, we hypothesize that apart from the citations present in documents, the context surrounding the citation mention provides extra topical information about the cited authors. However, associating terms in the context to the cited authors remains an open problem. We propose novel document generation schemes that incorporate the context while simultaneously modeling the interests of citing authors and influence of the cited authors. Our experiments show significant improvements over baseline models for various evaluation criteria such as link prediction between document and cited author, and quantitatively explaining unseen text.", "In large social networks, nodes (users, entities) are influenced by others for various reasons. For example, the colleagues have strong influence on one's work, while the friends have strong influence on one's daily life. How to differentiate the social influences from different angles(topics)? How to quantify the strength of those social influences? How to estimate the model on real large networks? To address these fundamental questions, we propose Topical Affinity Propagation (TAP) to model the topic-level social influence on large networks. In particular, TAP can take results of any topic modeling and the existing network structure to perform topic-level influence propagation. With the help of the influence analysis, we present several important applications on real data sets such as 1) what are the representative nodes on a given topic? 2) how to identify the social influences of neighboring nodes on a particular node? To scale to real large networks, TAP is designed with efficient distributed learning algorithms that is implemented and tested under the Map-Reduce framework. We further present the common characteristics of distributed learning algorithms for Map-Reduce. Finally, we demonstrate the effectiveness and efficiency of TAP on real large data sets." ] }
1609.06490
2522770641
Neural machine translation (NMT) becomes a new state-of-the-art and achieves promising translation results using a simple encoder-decoder neural network. This neural network is trained once on the parallel corpus and the fixed network is used to translate all the test sentences. We argue that the general fixed network cannot best fit the specific test sentences. In this paper, we propose the dynamic NMT which learns a general network as usual, and then fine-tunes the network for each test sentence. The fine-tune work is done on a small set of the bilingual training data that is obtained through similarity search according to the test sentence. Extensive experiments demonstrate that this method can significantly improve the translation performance, especially when highly similar sentences are available.
Recent advances in NMT include fixing defects of the model, such as the inability to use a large vocabulary @cite_11 @cite_17 and unawareness of coverage @cite_9 @cite_3 , making use of monolingual data @cite_10 @cite_16 , and extending to multilingual @cite_12 @cite_15 and multimodal @cite_2 scenarios.
{ "cite_N": [ "@cite_11", "@cite_9", "@cite_3", "@cite_2", "@cite_15", "@cite_16", "@cite_10", "@cite_12", "@cite_17" ], "mid": [ "2950580142", "2410539690", "2522143790", "2293344577", "", "2284660317", "2422843715", "2251743902", "2950344723" ], "abstract": [ "Neural Machine Translation (NMT) is a new approach to machine translation that has shown promising results that are comparable to traditional approaches. A significant weakness in conventional NMT systems is their inability to correctly translate very rare words: end-to-end NMTs tend to have relatively small vocabularies with a single unk symbol that represents every possible out-of-vocabulary (OOV) word. In this paper, we propose and implement an effective technique to address this problem. We train an NMT system on data that is augmented by the output of a word alignment algorithm, allowing the NMT system to emit, for each OOV word in the target sentence, the position of its corresponding word in the source sentence. This information is later utilized in a post-processing step that translates every OOV word using a dictionary. Our experiments on the WMT14 English to French translation task show that this method provides a substantial improvement of up to 2.8 BLEU points over an equivalent NMT system that does not use this technique. With 37.5 BLEU points, our NMT system is the first to surpass the best result achieved on a WMT14 contest task.", "Attention mechanism has enhanced state-of-the-art Neural Machine Translation (NMT) by jointly learning to align and translate. It tends to ignore past alignment information, however, which often leads to over-translation and under-translation. To address this problem, we propose coverage-based NMT in this paper. We maintain a coverage vector to keep track of the attention history. The coverage vector is fed to the attention model to help adjust future attention, which lets NMT system to consider more about untranslated source words. 
Experiments show that the proposed approach significantly improves both translation quality and alignment quality over standard attention-based NMT.", "In this paper, we enhance the attention-based neural machine translation (NMT) by adding explicit coverage embedding models to alleviate issues of repeating and dropping translations in NMT. For each source word, our model starts with a full coverage embedding vector to track the coverage status, and then keeps updating it with neural networks as the translation goes. Experiments on the large-scale Chinese-to-English task show that our enhanced model improves the translation quality significantly on various test sets over the strong large vocabulary NMT system.", "We present an approach to improve statistical machine translation of image descriptions by multimodal pivots defined in visual space. The key idea is to perform image retrieval over a database of images that are captioned in the target language, and use the captions of the most similar images for crosslingual reranking of translation outputs. Our approach does not depend on the availability of large amounts of in-domain parallel data, but only relies on available large datasets of monolingually captioned images, and on state-of-the-art convolutional neural networks to compute image similarities. Our experimental evaluation shows improvements of 1 BLEU point over strong baselines.", "", "Neural Machine Translation (NMT) has obtained state-of-the art performance for several language pairs, while only using parallel data for training. Target-side monolingual data plays an important role in boosting fluency for phrase-based statistical machine translation, and we investigate the use of monolingual data for NMT. 
In contrast to previous work, which combines NMT models with separately trained language models, we note that encoder-decoder NMT architectures already have the capacity to learn the same information as a language model, and we explore strategies to train with monolingual data without changing the neural network architecture. By pairing monolingual training data with an automatic back-translation, we can treat it as additional parallel training data, and we obtain substantial improvements on the WMT 15 task English German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We also show that fine-tuning on in-domain monolingual and parallel data gives substantial improvements for the IWSLT 15 task English->German.", "While end-to-end neural machine translation (NMT) has made remarkable progress recently, NMT systems only rely on parallel corpora for parameter estimation. Since parallel corpora are usually limited in quantity, quality, and coverage, especially for low-resource languages, it is appealing to exploit monolingual corpora to improve NMT. We propose a semi-supervised approach for training NMT models on the concatenation of labeled (parallel corpora) and unlabeled (monolingual corpora) data. The central idea is to reconstruct the monolingual corpora using an autoencoder, in which the source-to-target and target-to-source translation models serve as the encoder and decoder, respectively. Our approach can not only exploit the monolingual corpora of the target language, but also of the source language. Experiments on the Chinese-English dataset show that our approach achieves significant improvements over state-of-the-art SMT and NMT systems.", "In this paper, we investigate the problem of learning a machine translation model that can simultaneously translate sentences from one source language to multiple target languages. 
Our solution is inspired by the recently proposed neural machine translation model which generalizes machine translation as a sequence learning problem. We extend the neural machine translation to a multi-task learning framework which shares source language representation and separates the modeling of different target language translation. Our framework can be applied to situations where either large amounts of parallel data or limited parallel data is available. Experiments show that our multi-task learning model is able to achieve significantly higher translation quality over individually learned model in both situations on the data sets publicly available.", "Neural machine translation, a recently proposed approach to machine translation based purely on neural networks, has shown promising results compared to the existing approaches such as phrase-based statistical machine translation. Despite its recent success, neural machine translation has its limitation in handling a larger vocabulary, as training complexity as well as decoding complexity increase proportionally to the number of target words. In this paper, we propose a method that allows us to use a very large target vocabulary without increasing training complexity, based on importance sampling. We show that decoding can be efficiently done even with the model having a very large target vocabulary by selecting only a small subset of the whole target vocabulary. The models trained by the proposed approach are empirically found to outperform the baseline models with a small vocabulary as well as the LSTM-based neural machine translation models. Furthermore, when we use the ensemble of a few models with very large target vocabularies, we achieve the state-of-the-art translation performance (measured by BLEU) on the English->German translation and almost as high performance as state-of-the-art English->French translation system." ] }
1609.06490
2522770641
Neural machine translation (NMT) has become a new state of the art, achieving promising translation results using a simple encoder-decoder neural network. This neural network is trained once on the parallel corpus, and the fixed network is then used to translate all the test sentences. We argue that a general fixed network cannot best fit the specific test sentences. In this paper, we propose dynamic NMT, which learns a general network as usual and then fine-tunes the network for each test sentence. The fine-tuning is done on a small set of the bilingual training data obtained through similarity search according to the test sentence. Extensive experiments demonstrate that this method can significantly improve translation performance, especially when highly similar sentences are available.
In statistical machine translation, there is some work that makes use of similar sentences by means of translation memory @cite_4 @cite_8 @cite_7 @cite_1 . However, these methods need carefully designed features and only show improvement when the similarity level is high. In comparison, our method does not need any modification to the model, and it can bring improvement at all similarity levels.
{ "cite_N": [ "@cite_1", "@cite_4", "@cite_7", "@cite_8" ], "mid": [ "", "832270446", "2126712675", "2131969445" ], "abstract": [ "", "We present two methods that merge ideas from statistical machine translation (SMT) and translation memories (TM). We use a TM to retrieve matches for source segments, and replace the mismatched parts with instructions to an SMT system to fill in the gap. We show that for fuzzy matches of over 70%, one method outperforms both SMT and TM baselines.", "Since statistical machine translation (SMT) and translation memory (TM) complement each other in matched and unmatched regions, integrated models are proposed in this paper to incorporate TM information into phrase-based SMT. Unlike previous multi-stage pipeline approaches, which directly merge TM result into the final output, the proposed models refer to the corresponding TM information associated with each phrase at SMT decoding. On a Chinese–English TM database, our experiments show that the proposed integrated Model-III is significantly better than either the SMT or the TM systems when the fuzzy match score is above 0.4. Furthermore, integrated Model-III achieves overall 3.48 BLEU points improvement and 2.62 TER points reduction in comparison with the pure SMT system. Besides, the proposed models also outperform previous approaches significantly.", "We present a discriminative learning method to improve the consistency of translations in phrase-based Statistical Machine Translation (SMT) systems. Our method is inspired by Translation Memory (TM) systems which are widely used by human translators in industrial settings. We constrain the translation of an input sentence using the most similar 'translation example' retrieved from the TM. Differently from previous research which used simple fuzzy match thresholds, these constraints are imposed using discriminative learning to optimise the translation performance.
We observe that using this method can benefit the SMT system by not only producing consistent translations, but also improved translation outputs. We report a 0.9 point improvement in terms of BLEU score on English--Chinese technical documents." ] }
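The fuzzy-match retrieval that these TM-based methods share with the dynamic fine-tuning above can be sketched as a word-level similarity search. The function names and the 0.7 threshold below are illustrative choices (roughly the fuzzy-match level at which the first cited method starts to beat both baselines), not code from any of the cited systems:

```python
from difflib import SequenceMatcher

def fuzzy_match_score(source, example):
    """Word-level fuzzy match score in [0, 1]; 1.0 means identical."""
    return SequenceMatcher(None, source.split(), example.split()).ratio()

def best_tm_match(source, memory, threshold=0.7):
    """Return (tm_source, tm_target, score) for the translation-memory
    entry most similar to `source`, or None if even the best match
    falls below `threshold`."""
    scored = [(fuzzy_match_score(source, src), src, tgt) for src, tgt in memory]
    score, src, tgt = max(scored)
    return (src, tgt, score) if score >= threshold else None
```

A real system would first narrow the candidates over a large memory (e.g. with an inverted index, as the paper above notes) and only then compute exact fuzzy scores on the shortlist.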
1609.06490
2522770641
Neural machine translation (NMT) has become a new state of the art, achieving promising translation results using a simple encoder-decoder neural network. This neural network is trained once on the parallel corpus, and the fixed network is then used to translate all the test sentences. We argue that a general fixed network cannot best fit the specific test sentences. In this paper, we propose dynamic NMT, which learns a general network as usual and then fine-tunes the network for each test sentence. The fine-tuning is done on a small set of the bilingual training data obtained through similarity search according to the test sentence. Extensive experiments demonstrate that this method can significantly improve translation performance, especially when highly similar sentences are available.
Finding similar sentences with an inverted index is fast enough in our experiments. If the training data is much larger than ours, a locality-sensitive hash such as MinHash @cite_13 may be a better choice.
{ "cite_N": [ "@cite_13" ], "mid": [ "2132069633" ], "abstract": [ "Given two documents A and B we define two mathematical notions: their resemblance r(A, B) and their containment c(A, B) that seem to capture well the informal notions of \"roughly the same\" and \"roughly contained.\" The basic idea is to reduce these issues to set intersection problems that can be easily evaluated by a process of random sampling that can be done independently for each document. Furthermore, the resemblance can be evaluated using a fixed size sample for each document. This paper discusses the mathematical properties of these measures and the efficient implementation of the sampling process using Rabin (1981) fingerprints." ] }
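A minimal sketch of the MinHash estimator this abstract describes: each set is reduced to a fixed-size signature of per-hash minima, and the resemblance r(A, B) is estimated by the fraction of signature slots where two sets agree. The linear hash family below is a standard construction assumed for illustration, not taken from the paper:

```python
import random

def minhash_signature(items, num_hashes=64, seed=0):
    """Signature of per-hash-function minima over a set of items."""
    rng = random.Random(seed)
    p = (1 << 61) - 1  # large Mersenne prime for the hash family
    params = [(rng.randrange(1, p), rng.randrange(p)) for _ in range(num_hashes)]
    # h_i(x) = (a_i * x + b_i) mod p, applied to Python's built-in hash
    return [min((a * hash(x) + b) % p for x in items) for a, b in params]

def estimated_resemblance(sig_a, sig_b):
    """Fraction of agreeing slots; approximates r(A, B) = |A∩B| / |A∪B|."""
    return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)
```

Identical sets agree in every slot, and the estimate for distinct sets concentrates around the true resemblance as `num_hashes` grows, mirroring the fixed-size sampling argument in the cited paper.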
1609.06377
2521071105
We consider the problem of next frame prediction from video input. A recurrent convolutional neural network is trained to predict depth from monocular video input, which, along with the current video image and the camera trajectory, can then be used to compute the next frame. Unlike prior next-frame prediction approaches, we take advantage of the scene geometry and use the predicted depth for generating the next frame prediction. Our approach can produce rich next frame predictions which include depth information attached to each pixel. Another novel aspect of our approach is that it predicts depth from a sequence of images (e.g. in a video), rather than from a single still image. We evaluate the proposed approach on the KITTI dataset, a standard dataset for benchmarking tasks relevant to autonomous driving. The proposed method produces results which are visually and numerically superior to existing methods that directly predict the next frame. We show that the accuracy of depth prediction improves as more prior frames are considered.
Scene understanding @cite_16 is a central topic in computer vision with problems including object detection @cite_17 @cite_31 @cite_0 , tracking @cite_14 @cite_24 , segmentation @cite_8 @cite_26 , and scene reconstruction @cite_21 .
{ "cite_N": [ "@cite_14", "@cite_26", "@cite_8", "@cite_21", "@cite_0", "@cite_24", "@cite_31", "@cite_16", "@cite_17" ], "mid": [ "1994541145", "2509033610", "33116912", "2257979135", "", "", "1650122911", "", "2031454541" ], "abstract": [ "A novel system for detection and tracking of vehicles from a single car-mounted camera is presented. The core of the system are high-performance vision algorithms: the WaldBoost detector [1] and the TLD tracker [2] that are scheduled so that a real-time performance is achieved. The vehicle monitoring system is evaluated on a new dataset collected on Italian motorways which is provided with approximate ground truth (GT0) obtained from laser scans. For a wide range of distances, the recall and precision of detection for cars are excellent. Statistics for trucks are also reported. The dataset with the ground truth is made public.", "In this paper we present Semantic Stixels, a novel vision-based scene model geared towards automated driving. Our model jointly infers the geometric and semantic layout of a scene and provides a compact yet rich abstraction of both cues using Stixels as primitive elements. Geometric information is incorporated into our model in terms of pixel-level disparity maps derived from stereo vision. For semantics, we leverage a modern deep learning-based scene labeling approach that provides an object class label for each pixel. Our experiments involve an in-depth analysis and a comprehensive assessment of the constituent parts of our approach using three public benchmark datasets. We evaluate the geometric and semantic accuracy of our model and analyze the underlying run-times and the complexity of the obtained representation. Our results indicate that the joint treatment of both cues on the Semantic Stixel level yields a highly compact environment representation while maintaining an accuracy comparable to the two individual pixel-level input data sources. 
Moreover, our framework compares favorably to related approaches in terms of computational costs and operates in real-time.", "Road scene segmentation is important in computer vision for different applications such as autonomous driving and pedestrian detection. Recovering the 3D structure of road scenes provides relevant contextual information to improve their understanding. In this paper, we use a convolutional neural network based algorithm to learn features from noisy labels to recover the 3D scene layout of a road image. The novelty of the algorithm relies on generating training labels by applying an algorithm trained on a general image dataset to classify on–board images. Further, we propose a novel texture descriptor based on a learned color plane fusion to obtain maximal uniformity in road areas. Finally, acquired (off–line) and current (on–line) information are combined to detect road areas in single images. From quantitative and qualitative experiments, conducted on publicly available datasets, it is concluded that convolutional neural networks are suitable for learning 3D scene layout from noisy labels and provides a relative improvement of 7 compared to the baseline. Furthermore, combining color planes provides a statistical description of road areas that exhibits maximal uniformity and provides a relative improvement of 8 compared to the baseline. Finally, the improvement is even bigger when acquired and current information from a single image are combined.", "The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. 
These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.", "", "", "Paper-by-paper results make it easy to miss the forest for the trees. We analyse the remarkable progress of the last decade by discussing the main ideas explored in the 40+ detectors currently present in the Caltech pedestrian detection benchmark. We observe that there exist three families of approaches, all currently reaching similar detection quality. Based on our analysis, we study the complementarity of the most promising ideas by combining multiple published strategies. This new decision forest detector achieves the current best known performance on the challenging Caltech-USA dataset.", "", "Pedestrian detection is a key problem in computer vision, with several applications that have the potential to positively impact quality of life. In recent years, the number of approaches to detecting pedestrians in monocular images has grown steadily. However, multiple data sets and widely varying evaluation protocols are used, making direct comparisons difficult. To address these shortcomings, we perform an extensive evaluation of the state of the art in a unified framework.
We make three primary contributions: 1) We put together a large, well-annotated, and realistic monocular pedestrian detection data set and study the statistics of the size, position, and occlusion patterns of pedestrians in urban scenes, 2) we propose a refined per-frame evaluation methodology that allows us to carry out probing and informative comparisons, including measuring performance in relation to scale and occlusion, and 3) we evaluate the performance of sixteen pretrained state-of-the-art detectors across six data sets. Our study allows us to assess the state of the art and provides a framework for gauging future efforts. Our experiments show that despite significant progress, performance still has much room for improvement. In particular, detection is disappointing at low resolutions and for partially occluded pedestrians." ] }
1609.06377
2521071105
We consider the problem of next frame prediction from video input. A recurrent convolutional neural network is trained to predict depth from monocular video input, which, along with the current video image and the camera trajectory, can then be used to compute the next frame. Unlike prior next-frame prediction approaches, we take advantage of the scene geometry and use the predicted depth for generating the next frame prediction. Our approach can produce rich next frame predictions which include depth information attached to each pixel. Another novel aspect of our approach is that it predicts depth from a sequence of images (e.g. in a video), rather than from a single still image. We evaluate the proposed approach on the KITTI dataset, a standard dataset for benchmarking tasks relevant to autonomous driving. The proposed method produces results which are visually and numerically superior to existing methods that directly predict the next frame. We show that the accuracy of depth prediction improves as more prior frames are considered.
A few methods @cite_33 @cite_4 @cite_22 @cite_18 @cite_3 have demonstrated learning depth from a single image using deep neural networks. Eigen and Fergus @cite_33 use a multi-scale setup to predict depth at multiple resolutions, whereas @cite_18 uses deeper models to improve the quality of predictions. There are also pure geometry-based approaches @cite_15 that estimate depth from multiple images. Similarly, our approach uses a sequence of images for better depth estimation, but in a learning-based setting.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_22", "@cite_33", "@cite_3", "@cite_15" ], "mid": [ "2963591054", "2339763956", "1915250530", "2951713345", "2300779272", "2061458897" ], "abstract": [ "This paper addresses the problem of estimating the depth map of a scene given a single RGB image. We propose a fully convolutional architecture, encompassing residual learning, to model the ambiguous mapping between monocular images and depth maps. In order to improve the output resolution, we present a novel way to efficiently learn feature map up-sampling within the network. For optimization, we introduce the reverse Huber loss that is particularly suited for the task at hand and driven by the value distributions commonly present in depth maps. Our model is composed of a single architecture that is trained end-to-end and does not rely on post-processing techniques, such as CRFs or other additional refinement steps. As a result, it runs in real-time on images or videos. In the evaluation, we show that the proposed model contains fewer parameters and requires fewer training data than the current state of the art, while outperforming all approaches on depth estimation. Code and models are publicly available.", "This paper studies single-image depth perception in the wild, i.e., recovering depth from a single image taken in unconstrained settings. We introduce a new dataset \"Depth in the Wild\" consisting of images in the wild annotated with relative depth between pairs of random points. We also propose a new algorithm that learns to estimate metric depth using annotations of relative depth. Compared to the state of the art, our algorithm is simpler and performs better. Experiments show that our algorithm, combined with existing RGB-D data and our new relative depth annotations, significantly improves single-image depth perception in the wild.", "Depth estimation and semantic segmentation are two fundamental problems in image understanding. 
While the two tasks are strongly correlated and mutually beneficial, they are usually solved separately or sequentially. Motivated by the complementary properties of the two tasks, we propose a unified framework for joint depth and semantic prediction. Given an image, we first use a trained Convolutional Neural Network (CNN) to jointly predict a global layout composed of pixel-wise depth values and semantic labels. By allowing for interactions between the depth and semantic information, the joint network provides more accurate depth prediction than a state-of-the-art CNN trained solely for depth prediction [6]. To further obtain fine-level details, the image is decomposed into local segments for region-level depth and semantic prediction under the guidance of global layout. Utilizing the pixel-wise global prediction and region-wise local prediction, we formulate the inference problem in a two-layer Hierarchical Conditional Random Field (HCRF) to produce the final depth and semantic map. As demonstrated in the experiments, our approach effectively leverages the advantages of both tasks and provides the state-of-the-art results.", "In this paper we address three different computer vision tasks using a single basic architecture: depth prediction, surface normal estimation, and semantic labeling. We use a multiscale convolutional network that is able to adapt easily to each task using only small modifications, regressing from the input image to the output map directly. Our method progressively refines predictions using a sequence of scales, and captures many image details without any superpixels or low-level segmentation. We achieve state-of-the-art performance on benchmarks for all three tasks.", "A significant weakness of most current deep Convolutional Neural Networks is the need to train them using vast amounts of manually labelled data. 
In this work we propose an unsupervised framework to learn a deep convolutional neural network for single view depth prediction, without requiring a pre-training stage or annotated ground-truth depths. We achieve this by training the network in a manner analogous to an autoencoder. At training time we consider a pair of images, source and target, with small, known camera motion between the two such as a stereo pair. We train the convolutional encoder for the task of predicting the depth map for the source image. To do so, we explicitly generate an inverse warp of the target image using the predicted depth and known inter-view displacement, to reconstruct the source image; the photometric error in the reconstruction is the reconstruction loss for the encoder. The acquisition of this training data is considerably simpler than for equivalent systems, requiring no manual annotation, nor calibration of depth sensor to camera. We show that our network trained on less than half of the KITTI dataset gives comparable performance to that of the state-of-the-art supervised methods for single view depth estimation.", "We present an approach to jointly estimating camera motion and dense structure of a static scene in terms of depth maps from monocular image sequences in driver-assistance scenarios. At each instant of time, only two consecutive frames are processed as input data of a joint estimator that fully exploits second-order information of the corresponding optimization problem and effectively copes with the non-convexity due to both the imaging geometry and the manifold of motion parameters. Additionally, carefully designed Gaussian approximations enable probabilistic inference based on locally varying confidence and globally varying sensitivity due to the epipolar geometry, with respect to the high-dimensional depth map estimation.
Embedding the resulting joint estimator in an online recursive framework achieves a pronounced spatio-temporal filtering effect and robustness. We evaluate hundreds of images taken from a car moving at speeds of up to 100 km/h and being part of a publicly available benchmark data set. The results compare favorably with two alternative settings: stereo based scene reconstruction and camera motion estimation in batch mode using multiple frames. They, however, require a calibrated camera pair or storage for more than two frames, which is less attractive from a technical viewpoint than the proposed monocular and recursive approach. In addition to real data, a synthetic sequence is considered which provides reliable ground truth." ] }
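The "reverse Huber" (berHu) loss that the first cited abstract introduces for depth regression has a short closed form: L1 below a threshold c and quadratic above it, with the two branches meeting at |r| = c. The fixed scalar c in this sketch is a simplification (the paper adapts the threshold to the residual distribution):

```python
import numpy as np

def berhu(pred, target, c):
    """Reverse Huber (berHu): |r| where |r| <= c, else (r^2 + c^2) / (2c).

    Continuous at |r| = c; behaves like L1 for small residuals and
    like a scaled L2 for large ones."""
    r = np.abs(np.asarray(pred) - np.asarray(target))
    return np.where(r <= c, r, (r ** 2 + c ** 2) / (2 * c))
```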
1609.06377
2521071105
We consider the problem of next frame prediction from video input. A recurrent convolutional neural network is trained to predict depth from monocular video input, which, along with the current video image and the camera trajectory, can then be used to compute the next frame. Unlike prior next-frame prediction approaches, we take advantage of the scene geometry and use the predicted depth for generating the next frame prediction. Our approach can produce rich next frame predictions which include depth information attached to each pixel. Another novel aspect of our approach is that it predicts depth from a sequence of images (e.g. in a video), rather than from a single still image. We evaluate the proposed approach on the KITTI dataset, a standard dataset for benchmarking tasks relevant to autonomous driving. The proposed method produces results which are visually and numerically superior to existing methods that directly predict the next frame. We show that the accuracy of depth prediction improves as more prior frames are considered.
Unsupervised learning from large unlabeled video datasets has been a topic of recent interest @cite_23 @cite_9 @cite_13 @cite_10 . The works in @cite_1 @cite_10 @cite_5 @cite_29 use neural networks for next frame prediction in video. These methods typically use a loss function based on the RGB values of the pixels in the predicted image. This results in conservative and blurry predictions where the pixel values are close to the target values, but rarely identical to them. In contrast, our proposed method produces images whose RGB distribution is very close to the target next frame. Such an output is more suitable for detecting anomalies or surprising outcomes where the predicted next frame does not match the future state.
{ "cite_N": [ "@cite_13", "@cite_9", "@cite_29", "@cite_1", "@cite_23", "@cite_5", "@cite_10" ], "mid": [ "2118688707", "2422305492", "2963402657", "1568514080", "2952453038", "", "2248556341" ], "abstract": [ "Motivated by vision-based reinforcement learning (RL) problems, in particular Atari games from the recent benchmark Arcade Learning Environment (ALE), we consider spatio-temporal prediction problems where future (image-)frames are dependent on control variables or actions as well as previous frames. While not composed of natural scenes, frames in Atari games are high-dimensional in size, can involve tens of objects with one or more objects being controlled by the actions directly and many other objects being influenced indirectly, can involve entry and departure of objects, and can involve deep partial observability. We propose and evaluate two deep neural network architectures that consist of encoding, action-conditional transformation, and decoding layers based on convolutional neural networks and recurrent neural networks. Experimental results show that the proposed architectures are able to generate visually-realistic frames that are also useful for control over approximately 100-step action-conditional futures in some games. To the best of our knowledge, this paper is the first to make and evaluate long-term predictions on high-dimensional video conditioned by control inputs.", "Anticipating actions and objects before they start or appear is a difficult problem in computer vision with several real-world applications. This task is challenging partly because it requires leveraging extensive knowledge of the world that is difficult to write down. We believe that a promising resource for efficiently learning this knowledge is through readily available unlabeled video. We present a framework that capitalizes on temporal structure in unlabeled video to learn to anticipate human actions and objects.
The key idea behind our approach is that we can train deep networks to predict the visual representation of images in the future. Visual representations are a promising prediction target because they encode images at a higher semantic level than pixels yet are automatic to compute. We then apply recognition algorithms on our predicted representation to anticipate objects and actions. We experimentally validate this idea on two datasets, anticipating actions one second in the future and objects five seconds in the future.", "A core challenge for an agent learning to interact with the world is to predict how its actions affect objects in its environment. Many existing methods for learning the dynamics of physical interactions require labeled object information. However, to scale real-world interaction learning to a variety of scenes and objects, acquiring labeled data becomes increasingly impractical. To learn about physical object motion without labels, we develop an action-conditioned video prediction model that explicitly models pixel motion, by predicting a distribution over pixel motion from previous frames. Because our model explicitly predicts motion, it is partially invariant to object appearance, enabling it to generalize to previously unseen objects. To explore video prediction for real-world interactive agents, we also introduce a dataset of 59,000 robot interactions involving pushing motions, including a test set with novel objects. In this dataset, accurate prediction of videos conditioned on the robot's future actions amounts to learning a \"visual imagination\" of different futures based on different courses of action. Our experiments show that our proposed method produces more accurate video predictions both quantitatively and qualitatively, when compared to prior methods.", "We propose a strong baseline model for unsupervised feature learning using video data. 
By learning to predict missing frames or extrapolate future frames from an input video sequence, the model discovers both spatial and temporal correlations which are useful to represent complex deformations and motion patterns. The models we propose are largely borrowed from the language modeling literature, and adapted to the vision domain by quantizing the space of image patches into a large dictionary. We demonstrate the approach on both a filling and a generation task. For the first time, we show that, after training on natural videos, such a model can predict non-trivial motions over short video sequences.", "We use multilayer Long Short Term Memory (LSTM) networks to learn representations of video sequences. Our model uses an encoder LSTM to map an input sequence into a fixed length representation. This representation is decoded using single or multiple decoder LSTMs to perform different tasks, such as reconstructing the input sequence, or predicting the future sequence. We experiment with two kinds of input sequences - patches of image pixels and high-level representations (\"percepts\") of video frames extracted using a pretrained convolutional net. We explore different design choices such as whether the decoder LSTMs should condition on the generated output. We analyze the outputs of the model qualitatively to see how well the model can extrapolate the learned video representation into the future and into the past. We try to visualize and interpret the learned features. We stress test the model by running it on longer time scales and on out-of-domain data. We further evaluate the representations by finetuning them for a supervised learning problem - human action recognition on the UCF-101 and HMDB-51 datasets. We show that the representations help improve classification accuracy, especially when there are only a few training examples. 
Even models pretrained on unrelated datasets (300 hours of YouTube videos) can help action recognition performance.", "", "Learning to predict future images from a video sequence involves the construction of an internal representation that models the image evolution accurately, and therefore, to some degree, its content and dynamics. This is why pixel-space video prediction may be viewed as a promising avenue for unsupervised feature learning. In addition, while optical flow has been a very studied problem in computer vision for a long time, future frame prediction is rarely approached. Still, many vision applications could benefit from the knowledge of the next frames of videos, that does not require the complexity of tracking every pixel trajectories. In this work, we train a convolutional network to generate future frames given an input sequence. To deal with the inherently blurry predictions obtained from the standard Mean Squared Error (MSE) loss function, we propose three different and complementary feature learning strategies: a multi-scale architecture, an adversarial training method, and an image gradient difference loss function. We compare our predictions to different published results based on recurrent neural networks on the UCF101 dataset" ] }
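The geometric core of the depth-based next-frame approach above, using predicted per-pixel depth plus known camera motion to render the next view, reduces to standard pinhole reprojection. This numpy sketch is hypothetical (names and interface are mine; the paper's pipeline additionally predicts the depth with a recurrent network) and only computes where each current pixel lands in the next view:

```python
import numpy as np

def reproject(depth, K, R, t):
    """Map each pixel of the current frame into the next view.

    depth : (H, W) per-pixel depth for the current frame
    K     : (3, 3) camera intrinsics
    R, t  : rotation and translation taking current-camera coordinates
            to next-camera coordinates
    Returns a (2, H, W) array of (u, v) target pixel coordinates."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1)
    # Back-project to 3-D, move into the next camera's frame, project.
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    pts = R @ pts + t.reshape(3, 1)
    proj = K @ pts
    return (proj[:2] / proj[2:3]).reshape(2, h, w)
```

Forward-warping the RGB values along these coordinates (with occlusion handling) then yields the predicted next frame, which is why the predictions carry depth attached to each pixel.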
1609.06357
2521206035
This paper concerns solving the sparse deconvolution and demixing problem using l1,2-minimization. We show that under a certain structured random model, robust and stable recovery is possible. The results extend results of Ling and Strohmer (Inverse Probl. 31, 115002 2015), and in particular theoretically explain certain experimental findings from that paper. Our results do not only apply to the deconvolution and demixing problem, but to recovery of column-sparse matrices in general.
However, as was pointed out in (Ling and Strohmer 2015 @cite_1 ), numerical experiments show that both @math - and @math -minimization perform significantly better than nuclear norm minimization when it comes to recovering matrices with the structure described above. This is not hard to argue heuristically: assuming @math , the above approach tries to recover a @math -matrix of rank @math , which needs an order of @math measurements. On the other hand, @math (or @math ) tries to recover an @math -column-sparse (or @math -sparse) @math matrix, which only needs @math . For really small sparsities and moderate @math , @math can be smaller than @math . Therefore, the authors of the mentioned paper concentrate their efforts, as will we, on analysing @math -minimization (and @math -minimization, respectively).
{ "cite_N": [ "@cite_1" ], "mid": [ "67860792" ], "abstract": [ "The design of high-precision sensing devises becomes ever more difficult and expensive. At the same time, the need for precise calibration of these devices (ranging from tiny sensors to space telescopes) manifests itself as a major roadblock in many scientific and technological endeavors. To achieve optimal performance of advanced high-performance sensors one must carefully calibrate them, which is often difficult or even impossible to do in practice. In this work we bring together three seemingly unrelated concepts, namely self-calibration, compressive sensing, and biconvex optimization. The idea behind self-calibration is to equip a hardware device with a smart algorithm that can compensate automatically for the lack of calibration. We show how several self-calibration problems can be treated efficiently within the framework of biconvex compressive sensing via a new method called SparseLift. More specifically, we consider a linear system of equations where both and the diagonal matrix (which models the calibration error) are unknown. By 'lifting' this biconvex inverse problem we arrive at a convex optimization problem. By exploiting sparsity in the signal model, we derive explicit theoretical guarantees under which both and can be recovered exactly, robustly, and numerically efficiently via linear programming. Applications in array calibration and wireless communications are discussed and numerical simulations are presented, confirming and complementing our theoretical analysis." ] }
1609.06357
2521206035
This paper concerns solving the sparse deconvolution and demixing problem using l1,2-minimization. We show that under a certain structured random model, robust and stable recovery is possible. The results extend results of Ling and Strohmer (Inverse Probl. 31, 115002 2015), and in particular theoretically explain certain experimental findings from that paper. Our results do not only apply to the deconvolution and demixing problem, but to recovery of column-sparse matrices in general.
As observant readers may already have noticed, the ''true dimension'' of the problem of recovering a matrix @math with @math and @math @math -sparse is neither @math nor @math , but instead @math (a mathematically precise statement of this claim is provided in (Kech and Krahmer 2016 @cite_14 )). As of today, to the best knowledge of the author, there are no convex minimization procedures which succeed with so few measurements. In this context, (Bresler, Lee and Wu 2013 @cite_8 ) should be mentioned. In that paper, the authors describe an alternating minimization procedure which, under some additional conditions on the vectors @math and @math , succeeds with high probability already when @math is of the order of @math .
{ "cite_N": [ "@cite_14", "@cite_8" ], "mid": [ "2302736609", "1600586145" ], "abstract": [ "We study identifiability for bilinear inverse problems under sparsity and subspace constraints. We show that, up to a global scaling ambiguity, almost all such maps are injective on the set of pairs of sparse vectors if the number of measurements @math exceeds @math , where @math and @math denote the sparsity of the two input vectors, and injective on the set of pairs of vectors lying in known subspaces of dimensions @math and @math if @math . We also prove that both these bounds are tight in the sense that one cannot have injectivity for a smaller number of measurements. Our proof technique draws from algebraic geometry. As an application we derive optimal identifiability conditions for the deconvolution problem, thus improving on recent work of [1].", "Compressed sensing of simultaneously sparse and rank-one matrices enables recovery of sparse signals from a few linear measurements of their bilinear form. One important question is how many measurements are needed for a stable reconstruction in the presence of measurement noise. Unlike the conventional compressed sensing for sparse vectors, where convex relaxation via the @math -norm achieves near optimal performance, for compressed sensing of sparse and rank-one matrices, recently it has been shown by that convex programmings using the nuclear norm and the mixed norm are highly suboptimal even in the noise-free scenario. We propose an alternating minimization algorithm called sparse power factorization (SPF) for compressed sensing of sparse rank-one matrices. Starting from a particular initialization, SPF achieves stable recovery and requires number of measurements within a logarithmic factor of the information-theoretic fundamental limit. For fast-decaying sparse signals, SPF starting from an initialization with low computational cost also achieves stable reconstruction with the same number of measurements. 
Numerical results show that SPF empirically outperforms the best known combinations of mixed norm and nuclear norm." ] }
1609.06327
2951091111
The increased availability of interactive maps on the Internet and on personal mobile devices has created new challenges in computational cartography and, in particular, for label placement in maps. Operations like rotation, zoom, and translation dynamically change the map over time and make a consistent adaptation of the map labeling necessary. In this paper, we consider map labeling for the case that a map undergoes a sequence of operations over a specified time span. We unify and generalize several preceding models for dynamic map labeling into one versatile and flexible model. In contrast to previous research, we completely abstract from the particular operations (e.g., zoom, rotation, etc.) and express the labeling problem as a set of time intervals representing the labels' presences, activities, and conflicts. The model's strength is manifested in its simplicity and broad range of applications. In particular, it supports label selection both for map features with fixed position as well as for moving entities (e.g., for tracking vehicles in logistics or air traffic control). Through extensive experiments on OpenStreetMap data, we evaluate our model using algorithms of varying complexity as a case study for navigation systems. Our experiments show that even simple (and thus, fast) algorithms achieve near-optimal solutions in our model with respect to an intuitive objective function.
In 2006, @cite_12 introduced the first formal model for dynamic maps and dynamic labels, formulating a general optimization problem. They described the change of a map by the operations zooming, panning, and rotation. In order to avoid popping and jumping labels while transforming the map with zooming and panning, they required four desiderata for dynamic map labeling. These comprise monotonicity, that labels should not vanish when zooming in or appear when zooming out (or either of the two when panning), invariant placement, where label positions and size remain invariant during movement, and history independence---placement and selection of labels should be a function of the current map state only. Monotonicity was modeled as selecting for each label at most one scale interval, the so-called active range, during which the label is displayed. They introduced the active range optimization problem (ARO), maximizing the sum of active ranges over all labels such that no two labels overlap and all desiderata are fulfilled. They proved that ARO is NP-hard for star-shaped labels and presented an optimal greedy algorithm for a simplified variant.
{ "cite_N": [ "@cite_12" ], "mid": [ "2138350120" ], "abstract": [ "We address the problem of filtering, selecting and placing labels on a dynamic map, which is characterized by continuous zooming and panning capabilities. This consists of two interrelated issues. The first is to avoid label popping and other artifacts that cause confusion and interrupt navigation, and the second is to label at interactive speed. In most formulations the static map labeling problem is NP-hard, and a fast approximation might have O(n log n) complexity. Even this is too slow during interaction, when the number of labels shown can be several orders of magnitude less than the number in the map. In this paper we introduce a set of desiderata for \"consistent\" dynamic map labeling, which has qualities desirable for navigation. We develop a new framework for dynamic labeling that achieves the desiderata and allows for fast interactive display by moving all of the selection and placement decisions into the preprocessing phase. This framework is general enough to accommodate a variety of selection and placement algorithms. It does not appear possible to achieve our desiderata using previous frameworks. Prior to this paper, there were no formal models of dynamic maps or of dynamic labels; our paper introduces both. We formulate a general optimization problem for dynamic map labeling and give a solution to a simple version of the problem. The simple version is based on label priorities and a versatile and intuitive class of dynamic label placements we call \"invariant point placements\". Despite these restrictions, our approach gives a useful and practical solution. Our implementation is incorporated into the G-Vis system which is a full-detail dynamic map of the continental USA. This demo is available through any browser" ] }
1609.06327
2951091111
The increased availability of interactive maps on the Internet and on personal mobile devices has created new challenges in computational cartography and, in particular, for label placement in maps. Operations like rotation, zoom, and translation dynamically change the map over time and make a consistent adaptation of the map labeling necessary. In this paper, we consider map labeling for the case that a map undergoes a sequence of operations over a specified time span. We unify and generalize several preceding models for dynamic map labeling into one versatile and flexible model. In contrast to previous research, we completely abstract from the particular operations (e.g., zoom, rotation, etc.) and express the labeling problem as a set of time intervals representing the labels' presences, activities, and conflicts. The model's strength is manifested in its simplicity and broad range of applications. In particular, it supports label selection both for map features with fixed position as well as for moving entities (e.g., for tracking vehicles in logistics or air traffic control). Through extensive experiments on OpenStreetMap data, we evaluate our model using algorithms of varying complexity as a case study for navigation systems. Our experiments show that even simple (and thus, fast) algorithms achieve near-optimal solutions in our model with respect to an intuitive objective function.
That model was the point of departure for several subsequent papers considering the operations zooming, panning, and rotation, mostly independently. @cite_7 took a closer look at different variants of ARO for zooming. They showed NP-hardness and gave approximation algorithms. In the same manner, further variants were investigated by @cite_5 . @cite_15 presented a fully polynomial-time approximation scheme (FPTAS) for a special case of ARO, where the given map is one-dimensional and only zooming is allowed. However, they combined the selection problem with a placement problem in a slider model. @cite_3 also considered the model of @cite_12 for zooming; however, instead of maximizing the total sum of active ranges, they maximized the minimum active range among all labels. They discussed similar variants as @cite_5 and @cite_7 , also proving NP-hardness and giving approximation algorithms.
{ "cite_N": [ "@cite_7", "@cite_3", "@cite_5", "@cite_15", "@cite_12" ], "mid": [ "2099151901", "", "200670239", "2182924349", "2138350120" ], "abstract": [ "Map labeling encounters unique issues in the context of dynamic maps with continuous zooming and panning-an application with increasing practical importance. In consistent dynamic map labeling, distracting behavior such as popping and jumping is avoided. In the model for consistent dynamic labeling that we use, a label becomes a 3d-solid, with scale as the third dimension. Each solid can be truncated to a single scale interval, called its active range, corresponding to the scales at which the label will be selected. The active range optimization (ARO) problem is to select active ranges so that no two truncated solids overlap and the sum of the heights of the active ranges is maximized. The simple ARO problem is a variant in which the active ranges are restricted so that a label is never deselected when zooming in. We investigate both the general and simple variants, for 1d- as well as 2d-maps. The 1d-problem can be seen as a scheduling problem with geometric constraints, and is also closely related to geometric maximum independent set problems. Different label shapes define different ARO variants. We show that 2d-ARO and general 1d-ARO are NP-complete, even for quite simple shapes. We solve simple 1d-ARO optimally with dynamic programming, and present a toolbox of algorithms that yield constant-factor approximations for a number of 1d- and 2d-variants.", "", "We consider the dynamic map labeling problem: given a set of rectangular labels on the map, the goal is to appropriately select visible ranges for all the labels such that no two consistent labels overlap at every scale and the sum of total visible ranges is maximized. We propose approximation algorithms for several variants of this problem. 
For the simple ARO problem, we provide a 3c logn-approximation algorithm for the unit-width rectangular labels if there is a c-approximation algorithm for unit-width label placement problem in the plane; and a randomized polynomial-time O(logn loglogn)-approximation algorithm for arbitrary rectangular labels. For the general ARO problem, we prove that it is NP-complete even for congruent square labels with equal selectable scale range. Moreover, we contribute 12-approximation algorithms for both arbitrary square labels and unit-width rectangular labels, and a 6-approximation algorithm for congruent square labels.", "We study a dynamic labeling problem for points on a line that is closely related to labeling of zoomable maps. Typically, labels have a constant size on screen, which means that, as the scale of the map decreases during zooming, the labels grow relatively to the set of points, and conicts may occur due to overlapping labels. Our algorithmic problem is a combined dynamic selection and placement problem in a sliding-label model: (i) select for each label ‘ a contiguous active range of map scales at which ‘ is displayed, and (ii) place each label at an appropriate position relative to its anchor point by sliding it along the point. The active range optimization (ARO) problem is to select active ranges and slider positions so that no two labels intersect at any scale and the sum of the lengths of active ranges is maximized. We present a dynamic programming algorithm to solve the discrete k-position ARO problem optimally and an FPTAS for the continuous sliding ARO problem.", "We address the problem of filtering, selecting and placing labels on a dynamic map, which is characterized by continuous zooming and panning capabilities. This consists of two interrelated issues. The first is to avoid label popping and other artifacts that cause confusion and interrupt navigation, and the second is to label at interactive speed. 
In most formulations the static map labeling problem is NP-hard, and a fast approximation might have O(n log n) complexity. Even this is too slow during interaction, when the number of labels shown can be several orders of magnitude less than the number in the map. In this paper we introduce a set of desiderata for \"consistent\" dynamic map labeling, which has qualities desirable for navigation. We develop a new framework for dynamic labeling that achieves the desiderata and allows for fast interactive display by moving all of the selection and placement decisions into the preprocessing phase. This framework is general enough to accommodate a variety of selection and placement algorithms. It does not appear possible to achieve our desiderata using previous frameworks. Prior to this paper, there were no formal models of dynamic maps or of dynamic labels; our paper introduces both. We formulate a general optimization problem for dynamic map labeling and give a solution to a simple version of the problem. The simple version is based on label priorities and a versatile and intuitive class of dynamic label placements we call \"invariant point placements\". Despite these restrictions, our approach gives a useful and practical solution. Our implementation is incorporated into the G-Vis system which is a full-detail dynamic map of the continental USA. This demo is available through any browser" ] }
1609.06327
2951091111
The increased availability of interactive maps on the Internet and on personal mobile devices has created new challenges in computational cartography and, in particular, for label placement in maps. Operations like rotation, zoom, and translation dynamically change the map over time and make a consistent adaptation of the map labeling necessary. In this paper, we consider map labeling for the case that a map undergoes a sequence of operations over a specified time span. We unify and generalize several preceding models for dynamic map labeling into one versatile and flexible model. In contrast to previous research, we completely abstract from the particular operations (e.g., zoom, rotation, etc.) and express the labeling problem as a set of time intervals representing the labels' presences, activities, and conflicts. The model's strength is manifested in its simplicity and broad range of applications. In particular, it supports label selection both for map features with fixed position as well as for moving entities (e.g., for tracking vehicles in logistics or air traffic control). Through extensive experiments on OpenStreetMap data, we evaluate our model using algorithms of varying complexity as a case study for navigation systems. Our experiments show that even simple (and thus, fast) algorithms achieve near-optimal solutions in our model with respect to an intuitive objective function.
@cite_6 @cite_8 extended the ARO model to rotation operations. They first showed that the ARO problem is NP-hard in that setting and introduced an efficient polynomial-time approximation scheme (EPTAS) for unit-height rectangles @cite_6 . In a second step, they experimentally evaluated heuristics, algorithms with approximation guarantees, and optimal approaches based on integer linear programming @cite_8 . A similar setting for rotating maps was considered by Yokosuka and Imai @cite_10 . Instead of ARO, they aimed at finding the maximum font size for which all labels can always be displayed without overlapping.
{ "cite_N": [ "@cite_10", "@cite_6", "@cite_8" ], "mid": [ "2748600169", "", "151632460" ], "abstract": [ "Map labeling is a problem of placing labels at corresponding graphical features on a map. There are two optimization problems: the label number maximization problem and the label size maximization problem. In general, both problems are NP-hard for static maps. Recently, the widespread use of several applications, such as personal mapping systems, has increased the importance of dynamic maps and the label number maximization problem for dynamic cases has been studied. In this paper, we consider the label size maximization problem for points on rotating maps. Our model is as follows. For each label, a point is chosen inside the label or on its boundary as an anchor point. Each label is placed such that the anchor point coincides with the corresponding point on the map. Furthermore, while the map fully rotates from 0 to 2π, the labels are placed horizontally according to the angle of the map. Our problem consists of finding the maximum scale factor for the labels such that the labels do not intersect, and deciding the place of the anchor points. We propose an O(n log n)-time and O(n)-space algorithm for the case where each anchor point is inside the label. Moreover, if the labels are of unit-height (or unit-width) and the anchor points are on the boundary, we also present an O(n log n)-time and O(n)-space algorithm.", "", "We consider the following problem of labeling points in a dynamic map that allows rotation. We are given a set of feature points in the plane labeled by a set of mutually disjoint labels, where each label is an axis-aligned rectangle attached with one corner to its respective point. 
We require that each label remains horizontally aligned during the map rotation, and our goal is to find a set of mutually nonoverlapping visible labels for every rotation angle α ∈ [0, 2π) so that the number of visible labels over a full map rotation of 2π is maximized. We discuss and experimentally evaluate several labeling strategies that define additional consistency constraints on label visibility to reduce flickering effects during monotone map rotation. We introduce three heuristic algorithms and compare them experimentally to an existing approximation algorithm and exact solutions obtained from an integer linear program. Our results show that on the one hand, low flickering can be achieved at the expense of only a small reduction in the objective value, and on the other hand, the proposed heuristics achieve a high labeling quality significantly faster than the other methods." ] }
1609.06327
2951091111
The increased availability of interactive maps on the Internet and on personal mobile devices has created new challenges in computational cartography and, in particular, for label placement in maps. Operations like rotation, zoom, and translation dynamically change the map over time and make a consistent adaptation of the map labeling necessary. In this paper, we consider map labeling for the case that a map undergoes a sequence of operations over a specified time span. We unify and generalize several preceding models for dynamic map labeling into one versatile and flexible model. In contrast to previous research, we completely abstract from the particular operations (e.g., zoom, rotation, etc.) and express the labeling problem as a set of time intervals representing the labels' presences, activities, and conflicts. The model's strength is manifested in its simplicity and broad range of applications. In particular, it supports label selection both for map features with fixed position as well as for moving entities (e.g., for tracking vehicles in logistics or air traffic control). Through extensive experiments on OpenStreetMap data, we evaluate our model using algorithms of varying complexity as a case study for navigation systems. Our experiments show that even simple (and thus, fast) algorithms achieve near-optimal solutions in our model with respect to an intuitive objective function.
Apart from the results based on the consistency model of @cite_12 , other approaches have been considered as well. @cite_2 described a view management system for interactive three-dimensional maps of cities that also considers label placement. Mote @cite_11 presented a fast label placement strategy without a pre-processing phase. Luboschik @cite_0 described a fast particle-based strategy that locally optimizes the label placement. All these approaches have in common that they do not take the consistency criteria for dynamic map labeling into account.
{ "cite_N": [ "@cite_0", "@cite_11", "@cite_12", "@cite_2" ], "mid": [ "2115085978", "1983238329", "2138350120", "1868064561" ], "abstract": [ "In many information visualization techniques, labels are an essential part to communicate the visualized data. To preserve the expressiveness of the visual representation, a placed label should neither occlude other labels nor visual representatives (e.g., icons, lines) that communicate crucial information. Optimal, non-overlapping labeling is an NP-hard problem. Thus, only a few approaches achieve a fast non-overlapping labeling in highly interactive scenarios like information visualization. These approaches generally target the point-feature label placement (PFLP) problem, solving only label-label conflicts. This paper presents a new, fast, solid and flexible 2D labeling approach for the PFLP problem that additionally respects other visual elements and the visual extent of labeled features. The results (number of placed labels, processing time) of our particle-based method compare favorably to those of existing techniques. Although the esthetic quality of non-real-time approaches may not be achieved with our method, it complies with practical demands and thus supports the interactive exploration of information spaces. In contrast to the known adjacent techniques, the flexibility of our technique enables labeling of dense point clouds by the use of non-occluding distant labels. Our approach is independent of the underlying visualization technique, which enables us to demonstrate the application of our labeling method within different information visualization scenarios.", "This paper describes a fast approach to automatic point label de-confliction on interactive maps. The general Map Labeling problem is NP-hard and has been the subject of much study for decades. Computerized maps have introduced interactive zooming and panning, which has intensified the problem. 
Providing dynamic labels for such maps typically requires a time-consuming pre-processing phase. In the realm of visual analytics, however, the labeling of interactive maps is further complicated by the use of massive datasets laid out in arbitrary configurations, thus rendering reliance on a pre-processing phase untenable. This paper offers a method for labeling point-features on dynamic maps in real time without pre-processing. The algorithm presented is efficient, scalable, and exceptionally fast; it can label interactive charts and diagrams at speeds of multiple frames per second on maps with tens of thousands of nodes. To accomplish this, the algorithm employs a novel geometric de-confliction approach, the 'trellis strategy,' along with a unique label candidate cost analysis to determine the 'least expensive' label configuration. The speed and scalability of this approach make it well-suited for visual analytic applications.", "We address the problem of filtering, selecting and placing labels on a dynamic map, which is characterized by continuous zooming and panning capabilities. This consists of two interrelated issues. The first is to avoid label popping and other artifacts that cause confusion and interrupt navigation, and the second is to label at interactive speed. In most formulations the static map labeling problem is NP-hard, and a fast approximation might have O(n log n) complexity. Even this is too slow during interaction, when the number of labels shown can be several orders of magnitude less than the number in the map. In this paper we introduce a set of desiderata for \"consistent\" dynamic map labeling, which has qualities desirable for navigation. We develop a new framework for dynamic labeling that achieves the desiderata and allows for fast interactive display by moving all of the selection and placement decisions into the preprocessing phase. This framework is general enough to accommodate a variety of selection and placement algorithms. 
It does not appear possible to achieve our desiderata using previous frameworks. Prior to this paper, there were no formal models of dynamic maps or of dynamic labels; our paper introduces both. We formulate a general optimization problem for dynamic map labeling and give a solution to a simple version of the problem. The simple version is based on label priorities and a versatile and intuitive class of dynamic label placements we call \"invariant point placements\". Despite these restrictions, our approach gives a useful and practical solution. Our implementation is incorporated into the G-Vis system which is a full-detail dynamic map of the continental USA. This demo is available through any browser", "We present a dynamic placement technique for annotations of virtual landscapes that is based on efficient view management. Annotations represent textual or symbolic descriptions and provide explanatory or thematic information associated with spatial positions. The technique handles external annotations as 2.5 dimensional objects and adjusts their positions with respect to available space in the view-plane. The approach intends to place labels without occlusions and, if this cannot be achieved, favors those annotations that are close to the observer. This technique solves the visibility problem of annotations in an approximate but user-centric way. It operates in real-time and therefore can be applied to interactive virtual landscapes. Additionally, the approach can be configured to fine tune the trade off between placement quality and processing time with a single parameter." ] }
1609.06127
2521051712
Due to its wide use in personal, but most importantly, professional contexts, email represents a valuable source of information that can be harvested for understanding, reengineering and repurposing undocumented business processes of companies and institutions. Towards this aim, a few researchers investigated the problem of extracting process oriented information from email logs in order to take benefit of the many available process mining techniques and tools. In this paper we go further in this direction, by proposing a new method for mining process models from email logs that leverage unsupervised machine learning techniques with little human involvement. Moreover, our method allows to semi-automatically label emails with activity names, that can be used for activity recognition in new incoming emails. A use case demonstrates the usefulness of the proposed solution using a modest in size, yet real-world, dataset containing emails that belong to two different process models
The approach applied by @cite_3 exploits the relational structure between two problems: (i) extracting speech acts and (ii) finding related emails. Instead of attacking them separately, in their synergistic iterative approach, relation identification is used to assist semantic analysis, and vice versa.
{ "cite_N": [ "@cite_3" ], "mid": [ "344425797" ], "abstract": [ "Today’s email clients were designed for yesterday’s email. Originally, email was merely a communication medium. Today, people engage in a variety of complex behaviours using email, such as project management, collaboration, meeting scheduling, to-do tracking, etc. Our goal is to develop automated techniques to help people manage complex activities or tasks in email. The central challenge is that most activities are distributed over multiple messages, yet email clients allow users to manipulate just isolated messages. We describe machine learning approaches to identifying tasks and relations between individual messages in a task (i.e., finding cause-response links between emails) and for semantic message analysis (i.e., extracting metadata about how messages within a task relate to the task progress). Our key innovation compared to related work is that we exploit the relational structure of these two problems. Instead of attacking them separately, in our synergistic iterative approach, relations identification is used to assist semantic analysis, and vice versa. Our experiments with real-world email corpora demonstrate an improvement compared to nonrelational benchmarks." ] }
1609.06127
2521051712
Due to its wide use in personal, but most importantly, professional contexts, email represents a valuable source of information that can be harvested for understanding, reengineering and repurposing undocumented business processes of companies and institutions. Towards this aim, a few researchers have investigated the problem of extracting process-oriented information from email logs in order to benefit from the many available process mining techniques and tools. In this paper we go further in this direction by proposing a new method for mining process models from email logs that leverages unsupervised machine learning techniques with little human involvement. Moreover, our method allows emails to be semi-automatically labeled with activity names, which can be used for activity recognition in new incoming emails. A use case demonstrates the usefulness of the proposed solution using a modest-sized, yet real-world, dataset containing emails that belong to two different process models.
SmartMail in Corston- @cite_0 identifies action items (tasks) in email messages. It produces a summary for each email, consisting of action items that the user can add to his or her "to-do" list. Their approach requires human annotators to tag the training data set. Despite extensive studies of speech act recognition in many areas, developing speech act recognition for emails is very challenging: emails usually have no labeled data for training statistical speech act recognizers.
{ "cite_N": [ "@cite_0" ], "mid": [ "1587199587" ], "abstract": [ "We describe SmartMail, a prototype system for automatically identifying action items (tasks) in email messages. SmartMail presents the user with a task-focused summary of a message. The summary consists of a list of action items extracted from the message. The user can add these action items to their “to do” list." ] }
1609.06127
2521051712
Due to its wide use in personal, but most importantly, professional contexts, email represents a valuable source of information that can be harvested for understanding, reengineering and repurposing undocumented business processes of companies and institutions. Towards this aim, a few researchers have investigated the problem of extracting process-oriented information from email logs in order to benefit from the many available process mining techniques and tools. In this paper we go further in this direction by proposing a new method for mining process models from email logs that leverages unsupervised machine learning techniques with little human involvement. Moreover, our method allows emails to be semi-automatically labeled with activity names, which can be used for activity recognition in new incoming emails. A use case demonstrates the usefulness of the proposed solution using a modest-sized, yet real-world, dataset containing emails that belong to two different process models.
The work in @cite_12 focuses on accurately recognizing speech acts in emails by making maximum use of data from existing resources. It learns speech acts in a semi-supervised way, leveraging labeled data from spoken conversations. Subtree features are exploited through subtree pattern mining: the text is treated as a forest of dependency trees, where each tree represents parent-child relationships between words, and dependency subtrees (speech acts) are extracted from this forest. Models trained on data from existing external corpora are then applied to extract the speech acts from the available emails.
{ "cite_N": [ "@cite_12" ], "mid": [ "2089285937" ], "abstract": [ "In this paper, we present a semi-supervised method for automatic speech act recognition in email and forums. The major challenge of this task is due to lack of labeled data in these two genres. Our method leverages labeled data in the Switchboard-DAMSL and the Meeting Recorder Dialog Act database and applies simple domain adaptation techniques over a large amount of unlabeled email and forum data to address this problem. Our method uses automatically extracted features such as phrases and dependency trees, called subtree features, for semi-supervised learning. Empirical results demonstrate that our model is effective in email and forum speech act recognition." ] }
1609.06127
2521051712
Due to its wide use in personal, but most importantly, professional contexts, email represents a valuable source of information that can be harvested for understanding, reengineering and repurposing undocumented business processes of companies and institutions. Towards this aim, a few researchers have investigated the problem of extracting process-oriented information from email logs in order to benefit from the many available process mining techniques and tools. In this paper we go further in this direction by proposing a new method for mining process models from email logs that leverages unsupervised machine learning techniques with little human involvement. Moreover, our method allows emails to be semi-automatically labeled with activity names, which can be used for activity recognition in new incoming emails. A use case demonstrates the usefulness of the proposed solution using a modest-sized, yet real-world, dataset containing emails that belong to two different process models.
Another category of works deals with conversation detection in email systems (see for example @cite_6 ). While conversation detection is close to the problem of process instance discovery, the two differ: a conversation is defined as taking place among the same group of people, whereas a process instance involves different persons, each having only a limited view of the overall set of exchanged emails (e.g., the travel agent only books the tickets and is not involved in the other exchanges).
{ "cite_N": [ "@cite_6" ], "mid": [ "1749480759" ], "abstract": [ "This work explores a novel approach for conversation detection in email mailboxes. This approach clusters messages into coherent conversations by using a similarity function among messages that takes into consideration all relevant email attributes, such as message subject, participants, date of submission, and message content. The detection algorithm is evaluated against a manual partition of two email mailboxes into conversations. Experimental results demonstrate the superiority of our detection algorithm over several other alternative approaches." ] }
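The attribute-based similarity idea behind @cite_6 can be sketched as a small clustering routine; the feature weights, the threshold, and the greedy single-link strategy below are illustrative assumptions, not the authors' actual values:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Email:
    subject: str
    participants: frozenset  # sender plus recipients
    day: int                 # simplified timestamp (days)

def norm_subject(s):
    # Strip reply/forward prefixes so "Re: X" matches "X".
    s = s.strip().lower()
    while s[:3] in ("re:", "fw:"):
        s = s[3:].strip()
    return s

def similarity(a, b):
    # Weighted combination of subject, participant, and temporal evidence.
    subj = 1.0 if norm_subject(a.subject) == norm_subject(b.subject) else 0.0
    inter = len(a.participants & b.participants)
    union = len(a.participants | b.participants)
    people = inter / union if union else 0.0
    time = 1.0 if abs(a.day - b.day) <= 7 else 0.0
    return 0.5 * subj + 0.3 * people + 0.2 * time

def detect_conversations(emails, threshold=0.6):
    # Greedy single-link clustering of messages into conversations.
    clusters = []
    for e in emails:
        for c in clusters:
            if any(similarity(e, m) >= threshold for m in c):
                c.append(e)
                break
        else:
            clusters.append([e])
    return clusters
```

With these assumed weights, "Project kickoff" and "Re: Project kickoff" exchanged among the same people within a week fall into one conversation, while an unrelated email starts a new one.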
1609.05672
2560215812
In this article, we take one step toward understanding the learning behavior of deep residual networks, and supporting the observation that deep residual networks behave like ensembles. We propose a new convolutional neural network architecture which builds upon the success of residual networks by explicitly exploiting the interpretation of very deep networks as an ensemble. The proposed multi-residual network increases the number of residual functions in the residual blocks. Our architecture generates models that are wider, rather than deeper, which significantly improves accuracy. We show that our model achieves an error rate of 3.73% and 19.45% on CIFAR-10 and CIFAR-100 respectively, which outperforms almost all of the existing models. We also demonstrate that our model outperforms very deep residual networks by 0.22% (top-1 error) on the full ImageNet 2012 classification dataset. Additionally, inspired by the parallel structure of multi-residual networks, a model parallelism technique has been investigated. The model parallelism method distributes the computation of residual blocks among the processors, yielding up to a 15% computational complexity improvement.
A residual block consists of a residual function @math and an identity skip-connection (see Figure ), where @math contains convolution, activation (ReLU), and batch normalization @cite_35 layers in a specific order. In the most recent residual networks the order is normalization-ReLU-convolution, known as the pre-activation model @cite_13 .
{ "cite_N": [ "@cite_35", "@cite_13" ], "mid": [ "2949117887", "2949427019" ], "abstract": [ "Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9 top-5 validation error (and 4.8 test error), exceeding the accuracy of human raters.", "Deep residual networks have emerged as a family of extremely deep architectures showing compelling accuracy and nice convergence behaviors. In this paper, we analyze the propagation formulations behind the residual building blocks, which suggest that the forward and backward signals can be directly propagated from one block to any other block, when using identity mappings as the skip connections and after-addition activation. A series of ablation experiments support the importance of these identity mappings. This motivates us to propose a new residual unit, which makes training easier and improves generalization. 
We report improved results using a 1001-layer ResNet on CIFAR-10 (4.62 error) and CIFAR-100, and a 200-layer ResNet on ImageNet. Code is available at: this https URL" ] }
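The pre-activation ordering can be sketched numerically; for brevity a plain linear map stands in for the convolution, and the batch normalization omits the learned scale/shift parameters:

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # Simplified batch normalization over the batch axis (no gamma/beta).
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

def relu(x):
    return np.maximum(x, 0.0)

def pre_activation_block(x, W):
    # Pre-activation order inside F: normalization -> ReLU -> "convolution"
    # (a linear layer here), then the identity skip-connection adds x back.
    return x + relu(batch_norm(x)) @ W

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16))
W = 0.01 * rng.standard_normal((16, 16))
y = pre_activation_block(x, W)
```

The identity path guarantees that the block's output always contains x unchanged, so only the residual term is learned.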
1609.05672
2560215812
In this article, we take one step toward understanding the learning behavior of deep residual networks, and supporting the observation that deep residual networks behave like ensembles. We propose a new convolutional neural network architecture which builds upon the success of residual networks by explicitly exploiting the interpretation of very deep networks as an ensemble. The proposed multi-residual network increases the number of residual functions in the residual blocks. Our architecture generates models that are wider, rather than deeper, which significantly improves accuracy. We show that our model achieves an error rate of 3.73% and 19.45% on CIFAR-10 and CIFAR-100 respectively, which outperforms almost all of the existing models. We also demonstrate that our model outperforms very deep residual networks by 0.22% (top-1 error) on the full ImageNet 2012 classification dataset. Additionally, inspired by the parallel structure of multi-residual networks, a model parallelism technique has been investigated. The model parallelism method distributes the computation of residual blocks among the processors, yielding up to a 15% computational complexity improvement.
Deep residual networks contain many stacked residual blocks with @math , where @math and @math are the input and output of the block. Moreover, a deep residual network with identity skip-connections @cite_13 can be represented as:
{ "cite_N": [ "@cite_13" ], "mid": [ "2949427019" ], "abstract": [ "Deep residual networks have emerged as a family of extremely deep architectures showing compelling accuracy and nice convergence behaviors. In this paper, we analyze the propagation formulations behind the residual building blocks, which suggest that the forward and backward signals can be directly propagated from one block to any other block, when using identity mappings as the skip connections and after-addition activation. A series of ablation experiments support the importance of these identity mappings. This motivates us to propose a new residual unit, which makes training easier and improves generalization. We report improved results using a 1001-layer ResNet on CIFAR-10 (4.62 error) and CIFAR-100, and a 200-layer ResNet on ImageNet. Code is available at: this https URL" ] }
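Under identity skip-connections this recursion unrolls additively; following the pre-activation formulation of @cite_13, any deeper unit is a shallower unit plus the sum of the intervening residual functions:

```latex
x_{l+1} = x_l + F(x_l, W_l)
\quad\Longrightarrow\quad
x_L = x_l + \sum_{i=l}^{L-1} F(x_i, W_i)
```

so both the forward signal and, by differentiation, the backward gradient propagate directly between any two blocks.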
1609.05672
2560215812
In this article, we take one step toward understanding the learning behavior of deep residual networks, and supporting the observation that deep residual networks behave like ensembles. We propose a new convolutional neural network architecture which builds upon the success of residual networks by explicitly exploiting the interpretation of very deep networks as an ensemble. The proposed multi-residual network increases the number of residual functions in the residual blocks. Our architecture generates models that are wider, rather than deeper, which significantly improves accuracy. We show that our model achieves an error rate of 3.73% and 19.45% on CIFAR-10 and CIFAR-100 respectively, which outperforms almost all of the existing models. We also demonstrate that our model outperforms very deep residual networks by 0.22% (top-1 error) on the full ImageNet 2012 classification dataset. Additionally, inspired by the parallel structure of multi-residual networks, a model parallelism technique has been investigated. The model parallelism method distributes the computation of residual blocks among the processors, yielding up to a 15% computational complexity improvement.
Residual networks with stochastic depth @cite_19 use Bernoulli random variables to randomly disable residual blocks during the training phase. This results in a shallower network at training time while retaining a deeper network at test time. Deep residual networks with stochastic depth improve upon the accuracy of deep residual networks with constant depth, both because the reduced depth strengthens the back-propagated gradients of the earlier layers and because networks of different depths are implicitly ensembled.
{ "cite_N": [ "@cite_19" ], "mid": [ "2949892913" ], "abstract": [ "Very deep convolutional networks with hundreds of layers have led to significant reductions in error on competitive benchmarks. Although the unmatched expressiveness of the many layers can be highly desirable at test time, training very deep networks comes with its own set of challenges. The gradients can vanish, the forward flow often diminishes, and the training time can be painfully slow. To address these problems, we propose stochastic depth, a training procedure that enables the seemingly contradictory setup to train short networks and use deep networks at test time. We start with very deep networks but during training, for each mini-batch, randomly drop a subset of layers and bypass them with the identity function. This simple approach complements the recent success of residual networks. It reduces training time substantially and improves the test error significantly on almost all data sets that we used for evaluation. With stochastic depth we can increase the depth of residual networks even beyond 1200 layers and still yield meaningful improvements in test error (4.91 on CIFAR-10)." ] }
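A minimal sketch of a single stochastic-depth block; the survival probability and the toy residual function are illustrative stand-ins (in practice F is a full conv-BN-ReLU stack and survival probabilities typically decay with depth):

```python
import numpy as np

rng = np.random.default_rng(1)

def residual_fn(x):
    # Toy residual function standing in for the conv-BN-ReLU layers.
    return 0.1 * x

def stochastic_depth_block(x, survival_prob, training):
    if training:
        # Bernoulli gate: with probability 1 - survival_prob the whole
        # block collapses to the identity skip-connection, so each
        # mini-batch trains a shallower network.
        if rng.random() < survival_prob:
            return x + residual_fn(x)
        return x
    # At test time the full-depth network runs, with the residual scaled
    # by its survival probability to match the training-time expectation.
    return x + survival_prob * residual_fn(x)
```

For example, a block with survival probability 0.5 contributes x + 0.5 F(x) at test time, the expected value of its stochastic training-time output.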
1609.05672
2560215812
In this article, we take one step toward understanding the learning behavior of deep residual networks, and supporting the observation that deep residual networks behave like ensembles. We propose a new convolutional neural network architecture which builds upon the success of residual networks by explicitly exploiting the interpretation of very deep networks as an ensemble. The proposed multi-residual network increases the number of residual functions in the residual blocks. Our architecture generates models that are wider, rather than deeper, which significantly improves accuracy. We show that our model achieves an error rate of 3.73% and 19.45% on CIFAR-10 and CIFAR-100 respectively, which outperforms almost all of the existing models. We also demonstrate that our model outperforms very deep residual networks by 0.22% (top-1 error) on the full ImageNet 2012 classification dataset. Additionally, inspired by the parallel structure of multi-residual networks, a model parallelism technique has been investigated. The model parallelism method distributes the computation of residual blocks among the processors, yielding up to a 15% computational complexity improvement.
Swapout @cite_24 generalizes dropout @cite_36 and networks with stochastic depth @cite_19 using @math , where @math and @math are two Bernoulli random variables. Swapout can sample from four network architectures @math , and therefore has a larger domain for ensembles. Wide residual networks @cite_22 increase the number of convolutional filters and yield better performance than the original residual networks. This suggests that the power of residual networks originates in the residual connections rather than in extreme network depth. DenseNet @cite_10 uses a dense connection pattern among the convolutional layers, where each layer is directly connected to all preceding layers.
{ "cite_N": [ "@cite_22", "@cite_36", "@cite_24", "@cite_19", "@cite_10" ], "mid": [ "2401231614", "2095705004", "2397299141", "2949892913", "2511730936" ], "abstract": [ "Deep residual networks were shown to be able to scale up to thousands of layers and still have improving performance. However, each fraction of a percent of improved accuracy costs nearly doubling the number of layers, and so training very deep residual networks has a problem of diminishing feature reuse, which makes these networks very slow to train. To tackle these problems, in this paper we conduct a detailed experimental study on the architecture of ResNet blocks, based on which we propose a novel architecture where we decrease depth and increase width of residual networks. We call the resulting network structures wide residual networks (WRNs) and show that these are far superior over their commonly used thin and very deep counterparts. For example, we demonstrate that even a simple 16-layer-deep wide residual network outperforms in accuracy and efficiency all previous deep residual networks, including thousand-layer-deep networks, achieving new state-of-the-art results on CIFAR, SVHN, COCO, and significant improvements on ImageNet. Our code and models are available at this https URL", "Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different \"thinned\" networks. 
At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.", "We describe Swapout, a new stochastic training method, that outperforms ResNets of identical network structure yielding impressive results on CIFAR-10 and CIFAR-100. Swapout samples from a rich set of architectures including dropout, stochastic depth and residual architectures as special cases. When viewed as a regularization method swapout not only inhibits co-adaptation of units in a layer, similar to dropout, but also across network layers. We conjecture that swapout achieves strong regularization by implicitly tying the parameters across layers. When viewed as an ensemble training method, it samples a much richer set of architectures than existing methods such as dropout or stochastic depth. We propose a parameterization that reveals connections to exiting architectures and suggests a much richer set of architectures to be explored. We show that our formulation suggests an efficient training method and validate our conclusions on CIFAR-10 and CIFAR-100 matching state of the art accuracy. Remarkably, our 32 layer wider model performs similar to a 1001 layer ResNet model.", "Very deep convolutional networks with hundreds of layers have led to significant reductions in error on competitive benchmarks. Although the unmatched expressiveness of the many layers can be highly desirable at test time, training very deep networks comes with its own set of challenges. 
The gradients can vanish, the forward flow often diminishes, and the training time can be painfully slow. To address these problems, we propose stochastic depth, a training procedure that enables the seemingly contradictory setup to train short networks and use deep networks at test time. We start with very deep networks but during training, for each mini-batch, randomly drop a subset of layers and bypass them with the identity function. This simple approach complements the recent success of residual networks. It reduces training time substantially and improves the test error significantly on almost all data sets that we used for evaluation. With stochastic depth we can increase the depth of residual networks even beyond 1200 layers and still yield meaningful improvements in test error (4.91 on CIFAR-10).", "Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1) 2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). 
DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at this https URL ." ] }
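Swapout's sampling rule can be sketched elementwise: the two Bernoulli variables become independent masks, and the four architectures it samples from correspond to the four mask combinations (the residual function here is a toy stand-in, not the actual conv stack):

```python
import numpy as np

rng = np.random.default_rng(2)

def residual_fn(x):
    return 0.1 * x  # toy stand-in for a conv-BN-ReLU residual function

def swapout(x, p_skip, p_residual):
    # Independent elementwise Bernoulli masks. Per unit, the four cases are:
    #   (0, 0) -> dropped, (0, 1) -> feedforward F(x),
    #   (1, 0) -> identity x, (1, 1) -> residual x + F(x).
    theta1 = (rng.random(x.shape) < p_skip).astype(float)
    theta2 = (rng.random(x.shape) < p_residual).astype(float)
    return theta1 * x + theta2 * residual_fn(x)
```

Setting both probabilities to 1 recovers a plain residual block, while intermediate probabilities mix all four behaviors across units.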
1609.05834
2522924858
Scene understanding is one of the essential and challenging topics in computer vision and photogrammetry. Scene graphs provide valuable information for such scene understanding. This paper proposes a novel framework for the automatic generation of semantic scene graphs that interpret indoor environments. First, a Convolutional Neural Network is used to detect objects of interest in the given image. Then, the precise support relations between objects are inferred using two important sources of auxiliary information in indoor environments: physical stability and prior support knowledge between object categories. Finally, a semantic scene graph describing the contextual relations within a cluttered indoor scene is constructed. In contrast to previous methods for extracting support relations, our approach provides more accurate results. Furthermore, we do not use pixel-wise segmentation to obtain objects, which is computationally costly. We also propose different methods to evaluate the generated scene graphs, which have been lacking in this community. Our experiments are carried out on the NYUv2 dataset. The experimental results demonstrate that our approach outperforms state-of-the-art methods in inferring support relations. The estimated scene graphs compare accurately with the ground truth.
@cite_12 proposed to use scene graphs as queries to retrieve semantically related images. Their scene graphs are manually generated via Amazon Mechanical Turk, which is expensive. Prabhu and Venkatesh @cite_5 constructed scene graphs to represent the semantic characteristics of an image and used them for image ranking by graph matching; their approach works on high-quality images with few objects. @cite_8 proposed to use scene graphs for video search. Their semantic graphs are generated from text queries using manually defined rules to transform parse trees, similar to @cite_24 . Using a grammar, @cite_6 proposed to learn scene graphs from ground-truth graphs of synthetic data, then parsed a pre-defined segmentation of a synthetic scene to create a graph matching the learned structure. None of these works objectively assesses the quality of scene graph hypotheses against ground-truth graphs. However, reasonable measures for this problem are important, especially after the publication of the dataset @cite_17 .
{ "cite_N": [ "@cite_8", "@cite_6", "@cite_24", "@cite_5", "@cite_12", "@cite_17" ], "mid": [ "2086842362", "1992971572", "2250378130", "1936361992", "2077069816", "2949474740" ], "abstract": [ "In this paper, we tackle the problem of retrieving videos using complex natural language queries. Towards this goal, we first parse the sentential descriptions into a semantic graph, which is then matched to visual concepts using a generalized bipartite matching algorithm. Our approach exploits object appearance, motion and spatial relations, and learns the importance of each term using structure prediction. We demonstrate the effectiveness of our approach on a new dataset designed for semantic search in the context of autonomous driving, which exhibits complex and highly dynamic scenes with many objects. We show that our approach is able to locate a major portion of the objects described in the query with high accuracy, and improve the relevance in video retrieval.", "Growing numbers of 3D scenes in online repositories provide new opportunities for data-driven scene understanding, editing, and synthesis. Despite the plethora of data now available online, most of it cannot be effectively used for data-driven applications because it lacks consistent segmentations, category labels, and or functional groupings required for co-analysis. In this paper, we develop algorithms that infer such information via parsing with a probabilistic grammar learned from examples. First, given a collection of scene graphs with consistent hierarchies and labels, we train a probabilistic hierarchical grammar to represent the distributions of shapes, cardinalities, and spatial relationships of semantic objects within the collection. Then, we use the learned grammar to parse new scenes to assign them segmentations, labels, and hierarchies consistent with the collection. 
During experiments with these algorithms, we find that: they work effectively for scene graphs for indoor scenes commonly found online (bedrooms, classrooms, and libraries); they outperform alternative approaches that consider only shape similarities and or spatial relationships without hierarchy; they require relatively small sets of training data; they are robust to moderate over-segmentation in the inputs; and, they can robustly transfer labels from one data set to another. As a result, the proposed algorithms can be used to provide consistent hierarchies for large collections of scenes within the same semantic class.", "Semantically complex queries which include attributes of objects and relations between objects still pose a major challenge to image retrieval systems. Recent work in computer vision has shown that a graph-based semantic representation called a scene graph is an effective representation for very detailed image descriptions and for complex queries for retrieval. In this paper, we show that scene graphs can be effectively created automatically from a natural language scene description. We present a rule-based and a classifierbased scene graph parser whose output can be used for image retrieval. We show that including relations and attributes in the query graph outperforms a model that only considers objects and that using the output of our parsers is almost as effective as using human-constructed scene graphs (Recall@10 of 27.1 vs. 33.4 ). Additionally, we demonstrate the general usefulness of parsing to scene graphs by showing that the output can also be used to generate 3D scenes.", "We propose a novel image representation, termed Attribute-Graph, to rank images by their semantic similarity to a given query image. An Attribute-Graph is an undirected fully connected graph, incorporating both local and global image characteristics. 
The graph nodes characterise objects as well as the overall scene context using mid-level semantic attributes, while the edges capture the object topology. We demonstrate the effectiveness of Attribute-Graphs by applying them to the problem of image ranking. We benchmark the performance of our algorithm on the 'rPascal' and 'rImageNet' datasets, which we have created in order to evaluate the ranking performance on complex queries containing multiple objects. Our experimental evaluation shows that modelling images as Attribute-Graphs results in improved ranking performance over existing techniques.", "This paper develops a novel framework for semantic image retrieval based on the notion of a scene graph. Our scene graphs represent objects (“man”, “boat”), attributes of objects (“boat is white”) and relationships between objects (“man standing on boat”). We use these scene graphs as queries to retrieve semantically related images. To this end, we design a conditional random field model that reasons about possible groundings of scene graphs to test images. The likelihoods of these groundings are used as ranking scores for retrieval. We introduce a novel dataset of 5,000 human-generated scene graphs grounded to images and use this dataset to evaluate our method for image retrieval. In particular, we evaluate retrieval using full scene graphs and small scene subgraphs, and show that our method outperforms retrieval methods that use only objects or low-level image features. In addition, we show that our full model can be used to improve object localization compared to baseline methods.", "Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. 
However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image. When asked \"What vehicle is the person riding?\", computers will need to identify the objects in an image as well as the relationships riding(man, carriage) and pulling(horse, carriage) in order to answer correctly that \"the person is riding a horse-drawn carriage\". In this paper, we present the Visual Genome dataset to enable the modeling of such relationships. We collect dense annotations of objects, attributes, and relationships within each image to learn these models. Specifically, our dataset contains over 100K images where each image has an average of 21 objects, 18 attributes, and 18 pairwise relationships between objects. We canonicalize the objects, attributes, relationships, and noun phrases in region descriptions and questions answer pairs to WordNet synsets. Together, these annotations represent the densest and largest dataset of image descriptions, objects, attributes, relationships, and question answers." ] }
1609.05834
2522924858
Scene understanding is one of the essential and challenging topics in computer vision and photogrammetry. Scene graphs provide valuable information for such scene understanding. This paper proposes a novel framework for the automatic generation of semantic scene graphs that interpret indoor environments. First, a Convolutional Neural Network is used to detect objects of interest in the given image. Then, the precise support relations between objects are inferred using two important sources of auxiliary information in indoor environments: physical stability and prior support knowledge between object categories. Finally, a semantic scene graph describing the contextual relations within a cluttered indoor scene is constructed. In contrast to previous methods for extracting support relations, our approach provides more accurate results. Furthermore, we do not use pixel-wise segmentation to obtain objects, which is computationally costly. We also propose different methods to evaluate the generated scene graphs, which this community has lacked. Our experiments are carried out on the NYUv2 dataset. The experimental results demonstrate that our approach outperforms state-of-the-art methods in inferring support relations, and the estimated scene graphs closely match the ground truth.
Physical relations between objects as an aid to image or scene understanding have been investigated in @cite_3 @cite_2 @cite_9 @cite_23 , and @cite_15 . Pixel-wise segmentation and 3D volumetric estimation are the two major methods for this task. @cite_3 @cite_2 used pixel-wise segmentations to analyze support relations in challenging cluttered indoor scenes. Both ignored the contextual knowledge provided by the scene; @cite_3 additionally ignored small objects and physical constraints, while @cite_2 set up only simple physical constraints. A typical example of a 3D-cuboid-based method is @cite_9 , which estimated 3D cuboids to capture the spatial information of each object using RGBD data and then reasoned about their stability. However, stability and support relations were inferred on tiny images with few objects. The support-relation inference part of this paper is most closely related to @cite_3 @cite_2 . However, we integrate physical constraints and prior support knowledge between object classes into our approach to extract more accurate support relations. Furthermore, we do not perform pixel-wise segmentation for object extraction. Finally, our framework generates a semantic graph to interpret the given image, and objective measures for assessing the quality of the constructed graphs are proposed.
{ "cite_N": [ "@cite_9", "@cite_3", "@cite_23", "@cite_2", "@cite_15" ], "mid": [ "2065476635", "125693051", "2032541004", "2048685479", "1921440304" ], "abstract": [ "3D volumetric reasoning is important for truly understanding a scene. Humans are able to both segment each object in an image, and perceive a rich 3D interpretation of the scene, e.g., the space an object occupies, which objects support other objects, and which objects would, if moved, cause other objects to fall. We propose a new approach for parsing RGB-D images using 3D block units for volumetric reasoning. The algorithm fits image segments with 3D blocks, and iteratively evaluates the scene based on block interaction properties. We produce a 3D representation of the scene based on jointly optimizing over segmentations, block fitting, supporting relations, and object stability. Our algorithm incorporates the intuition that a good 3D representation of the scene is the one that fits the data well, and is a stable, self-supporting (i.e., one that does not topple) arrangement of objects. We experiment on several datasets including controlled and real indoor scenarios. Results show that our stability-reasoning framework improves RGB-D segmentation and scene volumetric representation.", "We present an approach to interpret the major surfaces, objects, and support relations of an indoor scene from an RGBD image. Most existing work ignores physical interactions or is applied only to tidy rooms and hallways. Our goal is to parse typical, often messy, indoor scenes into floor, walls, supporting surfaces, and object regions, and to recover support relationships. One of our main interests is to better understand how 3D cues can best inform a structured 3D interpretation. We also contribute a novel integer programming formulation to infer physical support relations. We offer a new dataset of 1449 RGBD images, capturing 464 diverse indoor scenes, with detailed annotations. 
Our experiments demonstrate our ability to infer support relations in complex scenes and verify that our 3D scene cues and inferred support lead to better object segmentation.", "This paper presents a new perspective for 3D scene understanding by reasoning object stability and safety using intuitive mechanics. Our approach utilizes a simple observation that, by human design, objects in static scenes should be stable in the gravity field and be safe with respect to various physical disturbances such as human activities. This assumption is applicable to all scene categories and poses useful constraints for the plausible interpretations (parses) in scene understanding. Given a 3D point cloud captured for a static scene by depth cameras, our method consists of three steps: (i) recovering solid 3D volumetric primitives from voxels; (ii) reasoning stability by grouping the unstable primitives to physically stable objects by optimizing the stability and the scene prior; and (iii) reasoning safety by evaluating the physical risks for objects under physical disturbances, such as human activity, wind or earthquakes. We adopt a novel intuitive physics model and represent the energy landscape of each primitive and object in the scene by a disconnectivity graph (DG). We construct a contact graph with nodes being 3D volumetric primitives and edges representing the supporting relations. Then we adopt a Swendson---Wang Cuts algorithm to partition the contact graph into groups, each of which is a stable object. In order to detect unsafe objects in a static scene, our method further infers hidden and situated causes (disturbances) in the scene, and then introduces intuitive physical mechanics to predict possible effects (e.g., falls) as consequences of the disturbances. 
In experiments, we demonstrate that the algorithm achieves a substantially better performance for (i) object segmentation, (ii) 3D volumetric recovery, and (iii) scene understanding with respect to other state-of-the-art methods. We also compare the safety prediction from the intuitive mechanics model with human judgement.", "To extract reasonable support relations from \"RGB+depth\" (RGBD) images, it is very important to achieve good scene understanding. This paper proposes a novel approach to extracting accurate support relationships by analyzing the RGBD images of indoor scenes. Noting that the support relations and structure classes of indoor images are inherently related to physical stability, we construct an improved energy function that embodies this stability. We then infer the support relations and structure classes from indoor RGBD images by minimizing this energy function. Moreover, the authors succeed in improving the segmentation quality of RGBD images using the inferred results as input. Compared with previous methods, our approach produces more reasonable support relations and structure classes, where physical stability function is taken into account for resolving the optimization problem. We use the NYU-Depth2 dataset as the training data, and experimental results show that the proposed RGBD image segmentation method based on support relation abstraction produces more accurate results than segmentation methods based on ground-truth support relations.", "RGBD images with high quality annotations, both in the form of geometric i.e., segmentation and structural i.e., how do the segments mutually relate in 3D information, provide valuable priors for a diverse range of applications in scene understanding and image manipulation. While it is now simple to acquire RGBD images, annotating them, automatically or manually, remains challenging. We present SmartAnnotator, an interactive system to facilitate annotating raw RGBD images. 
The system performs the tedious tasks of grouping pixels, creating potential abstracted cuboids, inferring object interactions in 3D, and generates an ordered list of hypotheses. The user simply has to flip through the suggestions for segment labels, finalize a selection, and the system updates the remaining hypotheses. As annotations are finalized, the process becomes simpler with fewer ambiguities to resolve. Moreover, as more scenes are annotated, the system makes better suggestions based on the structural and geometric priors learned from previous annotation sessions. We test the system on a large number of indoor scenes across different users and experimental settings, validate the results on existing benchmark datasets, and report significant improvements over low-level annotation alternatives. Code and benchmark datasets are publicly available on the project page." ] }
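The contact-graph idea running through the abstracts above — objects as nodes, directed edges for "supports" relations — can be illustrated with a deliberately simplified sketch. Here objects are hypothetical 2D axis-aligned boxes rather than the papers' RGB-D volumetric primitives, and `supports`, `support_graph`, and the `eps` tolerance are all illustrative names and choices, not part of any cited method.

```python
def supports(lower, upper, eps=0.05):
    """Toy support test between two axis-aligned boxes.

    Each box is (x_min, x_max, z_min, z_max): horizontal extent and
    vertical extent. `lower` supports `upper` when the top of `lower`
    touches the bottom of `upper` (within eps) and their horizontal
    extents overlap.
    """
    x0a, x1a, z0a, z1a = lower
    x0b, x1b, z0b, z1b = upper
    touching = abs(z1a - z0b) < eps              # top of lower meets bottom of upper
    overlap = min(x1a, x1b) - max(x0a, x0b) > 0  # horizontal extents intersect
    return touching and overlap


def support_graph(boxes):
    """Directed edges (i, j) meaning: box i supports box j."""
    return [(i, j)
            for i, a in enumerate(boxes)
            for j, b in enumerate(boxes)
            if i != j and supports(a, b)]
```

For a floor, a table on the floor, and a cup on the table, the sketch recovers the chain floor → table → cup; the cited methods additionally reason about stability and priors, which this toy omits entirely.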
1609.05396
2950997711
Multimodal registration is a challenging problem in medical imaging due to the high variability of tissue appearance under different imaging modalities. The crucial component here is the choice of the right similarity measure. We make a step towards a general learning-based solution that can be adapted to specific situations and present a metric based on a convolutional neural network. Our network can be trained from scratch even from a few aligned image pairs. The metric is validated on intersubject deformable registration on a dataset different from the one used for training, demonstrating good generalization. In this task, we outperform mutual information by a significant margin.
The idea of using supervised learning to build a similarity metric for multimodal images has been explored in a number of works. On one side, there are probabilistic approaches which rely on modelling the joint-image distribution. For instance, Guetter et al. propose a generative method based on Kullback-Leibler Divergence @cite_3 . Our work is closer to the discriminative concept proposed by Lee et al. @cite_8 and Michel et al. @cite_6 , where the problem of learning a similarity metric is posed as binary classification. Here the goal is to discriminate between aligned and misaligned patches, given pairs of aligned images. Lee et al. propose the use of a Structured Support Vector Machine, while Michel et al. use a method based on AdaBoost. In contrast to these approaches, we rely on a CNN as our learning method of choice, since the suitable set of characteristics for each combination of modalities can be learned directly from the training data.
{ "cite_N": [ "@cite_6", "@cite_3", "@cite_8" ], "mid": [ "2098920110", "1508195517", "2125822693" ], "abstract": [ "Defining a suitable metric is one of the biggest challenges in deformable image fusion from different modalities. In this paper, we propose a novel approach for multi-modal metric learning in the deformable registration framework that consists of embedding data from both modalities into a common metric space whose metric is used to parametrize the similarity. Specifically, we use image representation in the Fourier Gabor space which introduces invariance to the local pose parameters, and the Hamming metric as the target embedding space, which allows constructing the embedding using boosted learning algorithms. The resulting metric is incorporated into a discrete optimization framework. Very promising results demonstrate the potential of the proposed method.", "The need for non-rigid multi-modal registration is becoming increasingly common for many clinical applications. To date, however, existing proposed techniques remain as largely academic research effort with very few methods being validated for clinical product use. It has been suggested by [1] that the context-free nature of these methods is one of the main limitations and that moving towards context-specific methods by incorporating prior knowledge of the underlying registration problem is necessary to achieve registration results that are accurate and robust enough for clinical applications. In this paper, we propose a novel non-rigid multi-modal registration method using a variational formulation that incorporates a prior learned joint intensity distribution. The registration is achieved by simultaneously minimizing the Kullback-Leibler divergence between an observed and a learned joint intensity distribution and maximizing the mutual information between reference and alignment images. 
We have applied our proposed method on both synthetic and real images with encouraging results.", "Multi-modal image registration is a challenging problem in medical imaging. The goal is to align anatomically identical structures; however, their appearance in images acquired with different imaging devices, such as CT or MR, may be very different. Registration algorithms generally deform one image, the floating image, such that it matches with a second, the reference image, by maximizing some similarity score between the deformed and the reference image. Instead of using a universal, but a priori fixed similarity criterion such as mutual information, we propose learning a similarity measure in a discriminative manner such that the reference and correctly deformed floating images receive high similarity scores. To this end, we develop an algorithm derived from max-margin structured output learning, and employ the learned similarity measure within a standard rigid registration algorithm. Compared to other approaches, our method adapts to the specific registration problem at hand and exploits correlations between neighboring pixels in the reference and the floating image. Empirical evaluation on CT-MR PET-MR rigid registration tasks demonstrates that our approach yields robust performance and outperforms the state of the art methods for multi-modal medical image registration." ] }
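The abstract in this record benchmarks the learned metric against mutual information, the classical multimodal similarity measure. As background, a minimal histogram-based MI estimator — the standard textbook formulation MI(A, B) = H(A) + H(B) − H(A, B), not any cited paper's code — can be sketched as:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two images via their joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()          # joint distribution over intensity bins
    px = pxy.sum(axis=1)             # marginal of image a
    py = pxy.sum(axis=0)             # marginal of image b
    nz = pxy > 0                     # avoid log(0)
    h_xy = -np.sum(pxy[nz] * np.log(pxy[nz]))
    h_x = -np.sum(px[px > 0] * np.log(px[px > 0]))
    h_y = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return h_x + h_y - h_xy
```

An aligned pair scores much higher than an unrelated pair, which is exactly the property a registration optimizer exploits; the learned CNN metrics above aim to beat this baseline on hard modality combinations.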
1609.05396
2950997711
Multimodal registration is a challenging problem in medical imaging due to the high variability of tissue appearance under different imaging modalities. The crucial component here is the choice of the right similarity measure. We make a step towards a general learning-based solution that can be adapted to specific situations and present a metric based on a convolutional neural network. Our network can be trained from scratch even from a few aligned image pairs. The metric is validated on intersubject deformable registration on a dataset different from the one used for training, demonstrating good generalization. In this task, we outperform mutual information by a significant margin.
The power of CNNs to capture complex relationships between multimodal medical images has been shown in the problem of modality synthesis @cite_7 , where CNNs are used to map MRI-T2 images to MRI-T1 images using jointly the appearance of a small patch together with its localization. Our work is arguably most similar to the approach of Cheng et al. @cite_9 , who train a multilayer fully-connected network, pre-trained with an autoencoder, to estimate the similarity of 2D CT-MR patch pairs. Our network is a CNN, which enables us to scale to 3D thanks to weight sharing and to train from scratch. Moreover, we evaluate our metric on the actual task of registration, unlike Cheng et al.
{ "cite_N": [ "@cite_9", "@cite_7" ], "mid": [ "2334328509", "2397287600" ], "abstract": [ "The present embodiments relate to machine learning for multimodal image data. By way of introduction, the present embodiments described below include apparatuses and methods for learning a similarity metric using deep learning based techniques for multimodal medical images. A novel similarity metric for multi-modal images is provided using the corresponding states of pairs of image patches to generate a classification setting for each pair. The classification settings are used to train a deep neural network via supervised learning. A multi-modal stacked denoising auto encoder (SDAE) is used to pre-train the neural network. A continuous and smooth similarity metric is constructed based on the output of the neural network before activation in the last layer. The trained similarity metric may be used to improve the results of image fusion.", "Cross-modality image synthesis has recently gained significant interest in the medical imaging community. In this paper, we propose a novel architecture called location-sensitive deep network LSDN for synthesizing images across domains. Our network integrates intensity feature from image voxels and spatial information in a principled manner. Specifically, LSDN models hidden nodes as products of features and spatial responses. We then propose a novel method, called ShrinkConnect, for reducing the computations of LSDN without sacrificing synthesis accuracy. ShrinkConnect enforces simultaneous sparsity to find a compact set of functions that accurately approximates the responses of all hidden nodes. Experimental results demonstrate that LSDN+ShrinkConnect outperforms the state of the art in cross-domain synthesis of MRI brain scans by a significant margin. Our approach is also computationally efficient, e.g. 26× faster than other sparse representation based methods." ] }
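The discriminative framing discussed in this record — training a network to separate aligned from misaligned patch pairs — rests on a simple data-generation step. Below is a sketch of that step only, assuming a pair of pre-registered 2D images; `make_patch_pairs`, the patch size, and the shift offset are illustrative choices, not details of Cheng et al.'s or the authors' pipelines.

```python
import numpy as np

def make_patch_pairs(img_a, img_b, n, size=8, shift=6, seed=0):
    """Sample labelled training pairs for a discriminative similarity metric.

    Positive pairs (label 1) take the patch at the same location in both
    (already aligned) modalities; negative pairs (label 0) offset the
    second patch, so a classifier on (pair, label) learns aligned vs.
    misaligned.
    """
    rng = np.random.default_rng(seed)
    h, w = img_a.shape
    pairs, labels = [], []
    for _ in range(n):
        y = rng.integers(0, h - size - shift)
        x = rng.integers(0, w - size - shift)
        pa = img_a[y:y + size, x:x + size]
        if rng.random() < 0.5:                 # aligned pair
            pb = img_b[y:y + size, x:x + size]
            labels.append(1)
        else:                                  # misaligned pair: shifted location
            pb = img_b[y + shift:y + shift + size, x + shift:x + shift + size]
            labels.append(0)
        pairs.append(np.stack([pa, pb]))
    return np.stack(pairs), np.array(labels)
```

The network (SVM, AdaBoost, or CNN in the works above) then only differs in how it maps each `(2, size, size)` pair to an alignment score.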
1609.05294
2521957343
We are interested in exploring the possibility and benefits of structure learning for deep models. As the first step, this paper investigates the matter for Restricted Boltzmann Machines (RBMs). We conduct the study with Replicated Softmax, a variant of RBMs for unsupervised text analysis. We present a method for learning what we call Sparse Boltzmann Machines, where each hidden unit is connected to a subset of the visible units instead of all of them. Empirical results show that the method yields models with significantly improved model fit and interpretability as compared with RBMs where each hidden unit is connected to all visible units.
Network pruning is also a potential way to optimize the structure of a neural network. Biased weight decay was an early approach to pruning. Later, Optimal Brain Damage and Optimal Brain Surgeon suggested that magnitude-based pruning may not be the best strategy and proposed pruning methods based on the Hessian of the loss function. With respect to deep neural networks, @cite_2 proposed to compress a network through a three-step process: train, prune connections, and retrain; we call this redundancy pruning. In contrast, @cite_1 proposed to prune redundant neurons directly. Both reduce the number of parameters vastly with slight or even no performance loss. The drawback of network pruning is that the original network must be large enough, and hence some computation is wasted on the unnecessary parameters during pre-training.
{ "cite_N": [ "@cite_1", "@cite_2" ], "mid": [ "992687842", "2963674932" ], "abstract": [ "Deep Neural nets (NNs) with millions of parameters are at the heart of many state-of-the-art computer vision systems today. However, recent works have shown that much smaller models can achieve similar levels of performance. In this work, we address the problem of pruning parameters in a trained NN model. Instead of removing individual weights one at a time as done in previous works, we remove one neuron at a time. We show how similar neurons are redundant, and propose a systematic way to remove them. Our experiments in pruning the densely connected layers show that we can remove upto 85 of the total parameters in an MNIST-trained network, and about 35 for AlexNet without significantly affecting performance. Our method can be applied on top of most networks with a fully connected layer to give a smaller network.", "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy." ] }
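The train → prune → retrain pipeline in this record centers on removing the smallest-magnitude connections. A minimal sketch of the pruning step alone (the `magnitude_prune` helper and its threshold rule are illustrative, not the cited implementation):

```python
import numpy as np

def magnitude_prune(w, fraction):
    """Zero out the smallest-magnitude `fraction` of weights.

    This is the 'prune connections' step of the train -> prune -> retrain
    pipeline; the returned mask is kept so that retraining only updates
    the surviving connections.
    """
    k = int(round(fraction * w.size))
    if k == 0:
        return w.copy(), np.ones_like(w, dtype=bool)
    # k-th smallest absolute value becomes the pruning threshold
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    mask = np.abs(w) > thresh
    return w * mask, mask
```

During the retrain step, gradients would be multiplied by `mask` so pruned connections stay at zero; repeating the prune/retrain cycle yields the large compression ratios reported above.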
1609.05268
2949411531
Parallel coordinate plots (PCPs) are among the most useful techniques for the visualization and exploration of high-dimensional data spaces. They are especially useful for the representation of correlations among the dimensions, which identify relationships and interdependencies between variables. However, within these high-dimensional spaces, PCPs face difficulties in displaying the correlation between combinations of dimensions and generally require additional display space as the number of dimensions increases. In this paper, we present a new technique for high-dimensional data visualization in which a set of low-dimensional PCPs are interactively constructed by sampling user-selected subsets of the high-dimensional data space. In our technique, we first construct a graph visualization of sets of well-correlated dimensions. Users observe this graph and are able to interactively select the dimensions by sampling from its cliques, thereby dynamically specifying the most relevant lower dimensional data to be used for the construction of focused PCPs. Our interactive sampling overcomes the shortcomings of the PCPs by enabling the visualization of the most meaningful dimensions (i.e., the most relevant information) from high-dimensional spaces. We demonstrate the effectiveness of our technique through two case studies, where we show that the proposed interactive low-dimensional space constructions were pivotal for visualizing the high-dimensional data and discovering new patterns.
As mentioned previously, PCPs @cite_31 display high-dimensional datasets as polylines intersecting with parallel axes. The improvement of PCPs is a very active research topic, and one of the well-known challenges in this domain is polyline clutter, i.e., the need to reduce line crossings and overlaps for visual comprehensibility. Several techniques have attempted to improve the comprehensibility of the results obtained by PCPs by applying clustering or sampling to the polylines @cite_23 @cite_25 @cite_0 @cite_5 . In addition, the effectiveness of PCPs is highly dependent on the order of the dimensions, and various dimension-ordering techniques have recently been proposed to address this issue @cite_14 @cite_35 @cite_42 @cite_10 . The last major challenge is the difficulty of representing all correlations in one display space, especially when a particular dimension is strongly correlated with many other dimensions. In these circumstances, PCPs can represent only a subset of all possible relationships between the dimensions.
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_42", "@cite_0", "@cite_23", "@cite_5", "@cite_31", "@cite_10", "@cite_25" ], "mid": [ "2291534001", "2093219811", "2163925089", "", "2145646037", "", "2034694694", "2066560342", "2166947480" ], "abstract": [ "We introduce the parallel coordinates matrix (PCM) as the counterpart to the scatterplot matrix (SPLOM). Using a graph-theoretic approach, we determine a list of axis orderings such that all pairwise relations can be displayed without redundancy while each parallel-coordinates plot can be used independently to visualize all variables of the dataset. Therefore, existing axis-ordering algorithms, rendering techniques, and interaction methods can easily be applied to the individual parallel-coordinates plots. We demonstrate the value of the PCM in two case studies and show how it can serve as an overview visualization for parallel coordinates. Finally, we apply existing focus-and-context techniques in an interactive setup to support a detailed analysis of multivariate data.", "High-dimensional data visualization is receiving increasing interest because of the growing abundance of highdimensional datasets. To understand such datasets, visualization of the structures present in the data, such as clusters, can be an invaluable tool. Structures may be present in the full high-dimensional space, as well as in its subspaces. Two widely used methods to visualize high-dimensional data are the scatter plot matrix (SPM) and the parallel coordinate plot (PCP). SPM allows a quick overview of the structures present in pairwise combinations of dimensions. On the other hand, PCP has the potential to visualize not only bi-dimensional structures but also higher dimensional ones. A problem with SPM is that it suffers from crowding and clutter which makes interpretation hard. Approaches to reduce clutter are available in the literature, based on changing the order of the dimensions. 
However, usually this reordering has a high computational complexity. For effective visualization of high-dimensional structures, also PCP requires a proper ordering of the dimensions. In this paper, we propose methods for reordering dimensions in PCP in such a way that high-dimensional structures (if present) become easier to perceive. We also present a method for dimension reordering in SPM which yields results that are comparable to those of existing approaches, but at a much lower computational cost. Our approach is based on finding relevant subspaces for clustering using a quality criterion and cluster information. The quality computation and cluster detection are done in image space, using connected morphological operators. We demonstrate the potential of our approach for synthetic and astronomical datasets, and show that our method compares favorably with a number of existing approaches.", "Visual clutter denotes a disordered collection of graphical entities in information visualization. Clutter can obscure the structure present in the data. Even in a small dataset, clutter can make it hard for the viewer to find patterns, relationships and structure. In this paper, we define visual clutter as any aspect of the visualization that interferes with the viewer's understanding of the data, and present the concept of clutter-based dimension reordering. Dimension order is an attribute that can significantly affect a visualization's expressiveness. By varying the dimension order in a display, it is possible to reduce clutter without reducing information content or modifying the data in any way. Clutter reduction is a display-dependent task. In this paper, we follow a three-step procedure for four different visualization techniques. 
For each display technique, first, we determine what constitutes clutter in terms of display properties; then we design a metric to measure visual clutter in this display; finally we search for an order that minimizes the clutter in a display", "", "Our ability to accumulate large, complex (multivariate) data sets has far exceeded our ability to effectively process them in searching for patterns, anomalies and other interesting features. Conventional multivariate visualization techniques generally do not scale well with respect to the size of the data set. The focus of this paper is on the interactive visualization of large multivariate data sets based on a number of novel extensions to the parallel coordinates display technique. We develop a multi-resolution view of the data via hierarchical clustering, and use a variation of parallel coordinates to convey aggregation information for the resulting clusters. Users can then navigate the resulting structure until the desired focus region and level of detail is reached, using our suite of navigational and filtering tools. We describe the design and implementation of our hierarchical parallel coordinates system which is based on extending the XmdvTool system. Lastly, we show examples of the tools and techniques applied to large (hundreds of thousands of records) multivariate data sets.", "", "A methodology for visualizing analytic and synthetic geometry in RN is presented. It is based on a system of parallel coordinates which induces a non-projective mapping between N-Dimensional and 2-Dimensional sets. Hypersurfaces are represented by their planar images which have some geometrical properties analogous to the properties of the hypersurface that they represent. A point ← → line duality when N = 2 generalizes to lines and hyperplanes enabling the representation of polyhedra in RN. 
The representation of a class of convex and non-convex hypersurfaces is discussed together with an algorithm for constructing and displaying any interior point. The display shows some local properties of the hypersurface and provides information on the point's proximity to the boundary. Applications to Air Traffic Control, Robotics, Computer Vision, Computational Geometry, Statistics, Instrumentation and other areas are discussed.", "The navigation of high-dimensional data spaces remains challenging, making multivariate data exploration difficult. To be effective and appealing for mainstream application, navigation should use paradigms and metaphors that users are already familiar with. One such intuitive navigation paradigm is interactive route planning on a connected network. We have employed such an interface and have paired it with a prominent high-dimensional visualization paradigm showing the N-D data in undistorted raw form: parallel coordinates. In our network interface, the dimensions form nodes that are connected by a network of edges representing the strength of association between dimensions. A user then interactively specifies nodes edges to visit, and the system computes an optimal route, which can be further edited and manipulated. In our interface, this route is captured by a parallel coordinate data display in which the dimension ordering is configured by the specified route. Our framework serves both as a data exploration environment and as an interactive presentation platform to demonstrate, explain, and justify any identified relationships to others. We demonstrate our interface within a business scenario and other applications.", "In order to gain insight into multivariate data, complex structures must be analysed and understood. Parallel coordinates is an excellent tool for visualizing this type of data but has its limitations. 
This paper deals with one of its main limitations - how to visualize a large number of data items without hiding the inherent structure they constitute. We solve this problem by constructing clusters and using high precision textures to represent them. We also use transfer functions that operate on the high precision textures in order to highlight different aspects of the cluster characteristics. Providing predefined transfer functions as well as the support to draw customized transfer functions makes it possible to extract different aspects of the data. We also show how feature animation can be used as guidance when simultaneously analysing several clusters. This technique makes it possible to visually represent statistical information about clusters and thus guides the user, making the analysis process more efficient." ] }
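The first step of the paper summarized in this record — a graph over well-correlated dimensions from which users sample cliques for focused PCPs — can be approximated by a plain thresholded correlation graph. This is a sketch under the assumption of Pearson correlation; the function name and threshold are illustrative, not the paper's exact construction.

```python
import numpy as np

def correlation_graph(data, threshold=0.6):
    """Undirected graph over dimensions of a (n_samples, n_dims) array.

    An edge (i, j) links two dimensions whose absolute Pearson
    correlation exceeds `threshold`; users would then pick cliques of
    this graph as the axes of a focused low-dimensional PCP.
    """
    corr = np.corrcoef(data, rowvar=False)
    n = corr.shape[0]
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if abs(corr[i, j]) > threshold]
```

Strongly linearly related dimensions end up connected while independent ones stay isolated, mirroring how the interactive graph view filters the combinatorial space of axis subsets.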
1609.05268
2949411531
Parallel coordinate plots (PCPs) are among the most useful techniques for the visualization and exploration of high-dimensional data spaces. They are especially useful for the representation of correlations among the dimensions, which identify relationships and interdependencies between variables. However, within these high-dimensional spaces, PCPs face difficulties in displaying the correlation between combinations of dimensions and generally require additional display space as the number of dimensions increases. In this paper, we present a new technique for high-dimensional data visualization in which a set of low-dimensional PCPs are interactively constructed by sampling user-selected subsets of the high-dimensional data space. In our technique, we first construct a graph visualization of sets of well-correlated dimensions. Users observe this graph and are able to interactively select the dimensions by sampling from its cliques, thereby dynamically specifying the most relevant lower dimensional data to be used for the construction of focused PCPs. Our interactive sampling overcomes the shortcomings of the PCPs by enabling the visualization of the most meaningful dimensions (i.e., the most relevant information) from high-dimensional spaces. We demonstrate the effectiveness of our technique through two case studies, where we show that the proposed interactive low-dimensional space constructions were pivotal for visualizing the high-dimensional data and discovering new patterns.
Several recent studies have applied SPs for the representation of dimensions, in which each dot in the SP represents a single dimension in the space. @cite_21 @cite_41 presented a dual SP model to visualize both the item and dimension spaces. Similarly, @cite_29 presented an interactive mechanism to select low-dimensional subspaces on the SP display in which each dot corresponds to a different dimension. We also represent the relationships among the dimensions in a 2D space; however, our technique applies a graph rather than a SP.
{ "cite_N": [ "@cite_41", "@cite_29", "@cite_21" ], "mid": [ "2147512934", "2031281270", "2153312812" ], "abstract": [ "Datasets with a large number of dimensions per data item (hundreds or more) are challenging both for computational and visual analysis. Moreover, these dimensions have different characteristics and relations that result in sub-groups and or hierarchies over the set of dimensions. Such structures lead to heterogeneity within the dimensions. Although the consideration of these structures is crucial for the analysis, most of the available analysis methods discard the heterogeneous relations among the dimensions. In this paper, we introduce the construction and utilization of representative factors for the interactive visual analysis of structures in high-dimensional datasets. First, we present a selection of methods to investigate the sub-groups in the dimension set and associate representative factors with those groups of dimensions. Second, we introduce how these factors are included in the interactive visual analysis cycle together with the original dimensions. We then provide the steps of an analytical procedure that iteratively analyzes the datasets through the use of representative factors. We discuss how our methods improve the reliability and interpretability of the analysis process by enabling more informed selections of computational tools. Finally, we demonstrate our techniques on the analysis of brain imaging study results that are performed over a large group of subjects.", "For high-dimensional data, this work proposes two novel visual exploration methods to gain insights into the data aspect and the dimension aspect of the data. The first is a Dimension Projection Matrix, as an extension of a scatterplot matrix. In the matrix, each row or column represents a group of dimensions, and each cell shows a dimension projection (such as MDS) of the data with the corresponding dimensions. 
The second is a Dimension Projection Tree, where every node is either a dimension projection plot or a Dimension Projection Matrix. Nodes are connected with links and each child node in the tree covers a subset of the parent node's dimensions or a subset of the parent node's data items. While the tree nodes visualize the subspaces of dimensions or subsets of the data items under exploration, the matrix nodes enable cross-comparison between different combinations of subspaces. Both Dimension Projection Matrix and Dimension Project Tree can be constructed algorithmically through automation, or manually through user interaction. Our implementation enables interactions such as drilling down to explore different levels of the data, merging or splitting the subspaces to adjust the matrix, and applying brushing to select data clusters. Our method enables simultaneously exploring data correlation and dimension correlation for data with high dimensions.", "In many application fields, data analysts have to deal with datasets that contain many expressions per item. The effective analysis of such multivariate datasets is dependent on the user's ability to understand both the intrinsic dimensionality of the dataset as well as the distribution of the dependent values with respect to the dimensions. In this paper, we propose a visualization model that enables the joint interactive visual analysis of multivariate datasets with respect to their dimensions as well as with respect to the actual data values. We describe a dual setting of visualization and interaction in items space and in dimensions space. The visualization of items is linked to the visualization of dimensions with brushing and focus+context visualization. With this approach, the user is able to jointly study the structure of the dimensions space as well as the distribution of data items with respect to the dimensions. 
Even though the proposed visualization model is general, we demonstrate its application in the context of a DNA microarray data analysis." ] }
1609.05268
2949411531
Parallel coordinate plots (PCPs) are among the most useful techniques for the visualization and exploration of high-dimensional data spaces. They are especially useful for the representation of correlations among the dimensions, which identify relationships and interdependencies between variables. However, within these high-dimensional spaces, PCPs face difficulties in displaying the correlation between combinations of dimensions and generally require additional display space as the number of dimensions increases. In this paper, we present a new technique for high-dimensional data visualization in which a set of low-dimensional PCPs are interactively constructed by sampling user-selected subsets of the high-dimensional data space. In our technique, we first construct a graph visualization of sets of well-correlated dimensions. Users observe this graph and are able to interactively select the dimensions by sampling from its cliques, thereby dynamically specifying the most relevant lower dimensional data to be used for the construction of focused PCPs. Our interactive sampling overcomes the shortcomings of the PCPs by enabling the visualization of the most meaningful dimensions (i.e., the most relevant information) from high-dimensional spaces. We demonstrate the effectiveness of our technique through two case studies, where we show that the proposed interactive low-dimensional space constructions were pivotal for visualizing the high-dimensional data and discovering new patterns.
The technique recently proposed by @cite_12 uses a similar representation to that applied by our technique. They construct a "correlation map" in which the dataset dimensions are represented by dots, and the connections between the dots are derived from pairwise correlations. Our technique includes two characteristics that differ fundamentally from the method of @cite_12 . Firstly, the dimension graph in our technique is used as an interactive mechanism to simultaneously control the dimensionality of the set of PCPs, thereby allowing the PCPs to act as a visual representation of a set of low-dimensional subspaces; in contrast, users need to find interesting dimensions and select them individually while using the method of @cite_12 . In addition, our technique uses association rule mining to extract low-dimensional subspaces, in contrast with the correlation-based technique used by @cite_12 . Our technique can therefore extract complex multi-variate relationships, as opposed to pairwise correlations.
{ "cite_N": [ "@cite_12" ], "mid": [ "1980699943" ], "abstract": [ "Correlation analysis can reveal the complex relationships that often exist among the variables in multivariate data. However, as the number of variables grows, it can be difficult to gain a good understanding of the correlation landscape and important intricate relationships might be missed. We previously introduced a technique that arranged the variables into a 2D layout, encoding their pairwise correlations. We then used this layout as a network for the interactive ordering of axes in parallel coordinate displays. Our current work expresses the layout as a correlation map and employs it for visual correlation analysis. In contrast to matrix displays where correlations are indicated at intersections of rows and columns, our map conveys correlations by spatial proximity which is more direct and more focused on the variables in play. We make the following new contributions, some unique to our map: (1) we devise mechanisms that handle both categorical and numerical variables within a unified framework, (2) we achieve scalability for large numbers of variables via a multi-scale semantic zooming approach, (3) we provide interactive techniques for exploring the impact of value bracketing on correlations, and (4) we visualize data relations within the sub-spaces spanned by correlated variables by projecting the data into a corresponding tessellation of the map." ] }
1609.05307
2521872104
Finding the Time-Optimal Parameterization of a Path (TOPP) subject to second-order constraints (e.g. acceleration, torque, contact stability, etc.) is an important and well-studied problem in robotics. In comparison, TOPP subject to third-order constraints (e.g. jerk, torque rate, etc.) has received far less attention and remains largely open. In this paper, we investigate the structure of the TOPP problem with third-order constraints. In particular, we identify two major difficulties: (i) how to smoothly connect optimal profiles, and (ii) how to address singularities, which stop profile integration prematurely. We propose a new algorithm, TOPP3, which addresses these two difficulties and thereby constitutes an important milestone towards an efficient computational solution to TOPP with third-order constraints.
While TOPP with second-order constraints can essentially be considered as solved @cite_19 , the structure of TOPP with third-order constraints is much less well understood. In the sequel, we survey some of the attempts to address TOPP with third-order constraints.
{ "cite_N": [ "@cite_19" ], "mid": [ "2026733661" ], "abstract": [ "Finding the time-optimal parameterization of a given path subject to kinodynamic constraints is an essential component in many robotic theories and applications. The objective of this paper is to provide a general, fast, and robust implementation of this component. For this, we give a complete solution to the issue of dynamic singularities, which are the main cause of failure in existing implementations. We then present an open-source implementation of the algorithm in C++ Python and demonstrate its robustness and speed in various robotics settings." ] }
1609.05317
2951206572
Learning articulated object pose is inherently difficult because the pose is high dimensional but has many structural constraints. Most existing works do not model such constraints and do not guarantee the geometric validity of their pose estimation, therefore requiring a post-processing step to recover the correct geometry if desired, which is cumbersome and sub-optimal. In this work, we propose to directly embed a kinematic object model into the deep neural network learning for general articulated object pose estimation. The kinematic function is defined on the appropriately parameterized object motion variables. It is differentiable and can be used in the gradient descent based optimization in network training. The prior knowledge on the object geometric model is fully exploited and the structure is guaranteed to be valid. We show convincing experiment results on a toy example and the 3D human pose estimation problem. For the latter we achieve state-of-the-art results on the Human3.6M dataset.
A widely-used method is to denote the structural points as a linear combination of templates or bases @cite_2 @cite_25 @cite_33 @cite_7 . @cite_7 represent 3D face landmarks by a linear combination of shape bases @cite_28 and expression bases @cite_38 , learning the shape coefficients, expression coefficients, and camera view parameters alternately. @cite_2 express 3D human pose by an over-complete dictionary with a sparse prior, and solve the sparse coding problem with the alternating direction method. @cite_25 assign individual camera view parameters to each pose template. The sparse representation is then relaxed to a convex problem that can be solved efficiently.
{ "cite_N": [ "@cite_38", "@cite_33", "@cite_7", "@cite_28", "@cite_2", "@cite_25" ], "mid": [ "2017107803", "2285449971", "", "", "2039262381", "2951673496" ], "abstract": [ "We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.", "This paper addresses the challenge of 3D full-body human pose estimation from a monocular image sequence. Here, two cases are considered: (i) the image locations of the human joints are provided and (ii) the image locations of joints are unknown. 
In the former case, a novel approach is introduced that integrates a sparsity-driven 3D geometric prior and temporal smoothness. In the latter case, the former case is extended by treating the image locations of the joints as latent variables. A deep fully convolutional network is trained to predict the uncertainty maps of the 2D joint locations. The 3D pose estimates are realized via an Expectation-Maximization algorithm over the entire sequence, where it is shown that the 2D joint location uncertainties can be conveniently marginalized out during inference. Empirical evaluation on the Human3.6M dataset shows that the proposed approaches achieve greater 3D pose estimation accuracy over state-of-the-art baselines. Further, the proposed approach outperforms a publicly available 2D pose estimation baseline on the challenging PennAction dataset.", "", "", "Human pose estimation is a key step to action recognition. We propose a method of estimating 3D human poses from a single image, which works in conjunction with an existing 2D pose joint detector. 3D pose estimation is challenging because multiple 3D poses may correspond to the same 2D pose after projection due to the lack of depth information. Moreover, current 2D pose estimators are usually inaccurate which may cause errors in the 3D estimation. We address the challenges in three ways: (i) We represent a 3D pose as a linear combination of a sparse set of bases learned from 3D human skeletons. (ii) We enforce limb length constraints to eliminate anthropomorphically implausible skeletons. (iii) We estimate a 3D pose by minimizing the 1-norm error between the projection of the 3D pose and the corresponding 2D detection. The 1-norm loss term is robust to inaccurate 2D joint estimations. We use the alternating direction method (ADM) to solve the optimization problem efficiently. 
Our approach outperforms the state-of-the-arts on three benchmark datasets.", "We investigate the problem of estimating the 3D shape of an object, given a set of 2D landmarks in a single image. To alleviate the reconstruction ambiguity, a widely-used approach is to confine the unknown 3D shape within a shape space built upon existing shapes. While this approach has proven to be successful in various applications, a challenging issue remains, i.e., the joint estimation of shape parameters and camera-pose parameters requires to solve a nonconvex optimization problem. The existing methods often adopt an alternating minimization scheme to locally update the parameters, and consequently the solution is sensitive to initialization. In this paper, we propose a convex formulation to address this problem and develop an efficient algorithm to solve the proposed convex program. We demonstrate the exact recovery property of the proposed method, its merits compared to alternative methods, and the applicability in human pose and car shape estimation." ] }
1609.05317
2951206572
Learning articulated object pose is inherently difficult because the pose is high dimensional but has many structural constraints. Most existing works do not model such constraints and do not guarantee the geometric validity of their pose estimation, therefore requiring a post-processing step to recover the correct geometry if desired, which is cumbersome and sub-optimal. In this work, we propose to directly embed a kinematic object model into the deep neural network learning for general articulated object pose estimation. The kinematic function is defined on the appropriately parameterized object motion variables. It is differentiable and can be used in the gradient descent based optimization in network training. The prior knowledge on the object geometric model is fully exploited and the structure is guaranteed to be valid. We show convincing experiment results on a toy example and the 3D human pose estimation problem. For the latter we achieve state-of-the-art results on the Human3.6M dataset.
Many approaches @cite_11 @cite_1 @cite_29 store a massive set of examples in a database and perform pose estimation as retrieval, thereby avoiding the difficult pose representation problem. @cite_11 uses a nearest neighbor search over local shape descriptors. @cite_1 proposes a max-margin structured learning framework to jointly embed the image and pose into the same space, and then estimates the pose of a new image by nearest neighbor search in this space. @cite_29 builds an image database with 3D and 2D annotations, and uses a KD-tree to retrieve the 3D pose whose 2D projection is most similar to the input image. The performance of these approaches depends heavily on the quality of the database, and the efficiency of nearest neighbor search can be an issue when the database is large.
{ "cite_N": [ "@cite_29", "@cite_1", "@cite_11" ], "mid": [ "2963013806", "2949812103", "" ], "abstract": [ "One major challenge for 3D pose estimation from a single RGB image is the acquisition of sufficient training data. In particular, collecting large amounts of training data that contain unconstrained images and are annotated with accurate 3D poses is infeasible. We therefore propose to use two independent training sources. The first source consists of images with annotated 2D poses and the second source consists of accurate 3D motion capture data. To integrate both sources, we propose a dual-source approach that combines 2D pose estimation with efficient and robust 3D pose retrieval. In our experiments, we show that our approach achieves state-of-the-art results and is even competitive when the skeleton structure of the two sources differ substantially.", "This paper focuses on structured-output learning using deep neural networks for 3D human pose estimation from monocular images. Our network takes an image and 3D pose as inputs and outputs a score value, which is high when the image-pose pair matches and low otherwise. The network structure consists of a convolutional neural network for image feature extraction, followed by two sub-networks for transforming the image features and pose into a joint embedding. The score function is then the dot-product between the image and pose embeddings. The image-pose embedding and score function are jointly trained using a maximum-margin cost function. Our proposed framework can be interpreted as a special form of structured support vector machines where the joint feature space is discriminatively learned using deep neural networks. We test our framework on the Human3.6m dataset and obtain state-of-the-art results compared to other recent methods. 
Finally, we present visualizations of the image-pose embedding space, demonstrating the network has learned a high-level embedding of body-orientation and pose-configuration.", "" ] }
1609.05317
2951206572
Learning articulated object pose is inherently difficult because the pose is high dimensional but has many structural constraints. Most existing works do not model such constraints and do not guarantee the geometric validity of their pose estimation, therefore requiring a post-processing step to recover the correct geometry if desired, which is cumbersome and sub-optimal. In this work, we propose to directly embed a kinematic object model into the deep neural network learning for general articulated object pose estimation. The kinematic function is defined on the appropriately parameterized object motion variables. It is differentiable and can be used in the gradient descent based optimization in network training. The prior knowledge on the object geometric model is fully exploited and the structure is guaranteed to be valid. We show convincing experiment results on a toy example and the 3D human pose estimation problem. For the latter we achieve state-of-the-art results on the Human3.6M dataset.
The human pose estimation problem has been significantly advanced by deep learning since the pioneering DeepPose work @cite_26 . All current leading methods are based on deep neural networks. @cite_16 shows that using 2D heat maps as intermediate supervision can dramatically improve the 2D human part detection results. @cite_5 use an hourglass-shaped network to capture both bottom-up and top-down cues for accurate pose detection. @cite_31 shows that directly using a deep residual network (152 layers) @cite_6 is sufficient for high-performance part detection. To adapt these fully-convolutional heat-map regression methods for 3D pose estimation, an additional model-fitting step is used as post-processing @cite_33 . Other approaches directly regress the 2D human pose @cite_26 @cite_30 or the 3D human pose @cite_8 @cite_20 @cite_40 . These detection- or regression-based approaches ignore the prior knowledge of the human model and do not guarantee preservation of the object structure; they sometimes output geometrically invalid poses.
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_33", "@cite_8", "@cite_6", "@cite_40", "@cite_5", "@cite_31", "@cite_16", "@cite_20" ], "mid": [ "1537698211", "2113325037", "2285449971", "2293220651", "2949650786", "2270288817", "2950762923", "", "2255781698", "" ], "abstract": [ "Hierarchical feature extractors such as Convolutional Networks (ConvNets) have achieved impressive performance on a variety of classification tasks using purely feedforward processing. Feedforward architectures can learn rich representations of the input space but do not explicitly model dependencies in the output spaces, that are quite structured for tasks such as articulated human pose estimation or object segmentation. Here we propose a framework that expands the expressive power of hierarchical feature extractors to encompass both input and output spaces, by introducing top-down feedback. Instead of directly predicting the outputs in one go, we use a self-correcting model that progressively changes an initial solution by feeding back error predictions, in a process we call Iterative Error Feedback (IEF). IEF shows excellent performance on the task of articulated pose estimation in the challenging MPII and LSP benchmarks, matching the state-of-the-art without requiring ground truth scale annotation.", "We propose a method for human pose estimation based on Deep Neural Networks (DNNs). The pose estimation is formulated as a DNN-based regression problem towards body joints. We present a cascade of such DNN regres- sors which results in high precision pose estimates. The approach has the advantage of reasoning about pose in a holistic fashion and has a simple but yet powerful formula- tion which capitalizes on recent advances in Deep Learn- ing. We present a detailed empirical analysis with state-of- art or better performance on four academic benchmarks of diverse real-world images.", "This paper addresses the challenge of 3D full-body human pose estimation from a monocular image sequence. 
Here, two cases are considered: (i) the image locations of the human joints are provided and (ii) the image locations of joints are unknown. In the former case, a novel approach is introduced that integrates a sparsity-driven 3D geometric prior and temporal smoothness. In the latter case, the former case is extended by treating the image locations of the joints as latent variables. A deep fully convolutional network is trained to predict the uncertainty maps of the 2D joint locations. The 3D pose estimates are realized via an Expectation-Maximization algorithm over the entire sequence, where it is shown that the 2D joint location uncertainties can be conveniently marginalized out during inference. Empirical evaluation on the Human3.6M dataset shows that the proposed approaches achieve greater 3D pose estimation accuracy over state-of-the-art baselines. Further, the proposed approach outperforms a publicly available 2D pose estimation baseline on the challenging PennAction dataset.", "In this paper, we propose a deep convolutional neural network for 3D human pose estimation from monocular images. We train the network using two strategies: (1) a multi-task framework that jointly trains pose regression and body part detectors; (2) a pre-training strategy where the pose regressor is initialized using a network trained for body part detection. We compare our network on a large data set and achieve significant improvement over baseline methods. Human pose estimation is a structured prediction problem, i.e., the locations of each body part are highly correlated. Although we do not add constraints about the correlations between body parts to the network, we empirically show that the network has disentangled the dependencies among different body parts, and learned their correlations.", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. 
We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "We propose an efficient approach to exploiting motion information from consecutive frames of a video sequence to recover the 3D pose of people. Previous approaches typically compute candidate poses in individual frames and then link them in a post-processing step to resolve ambiguities. By contrast, we directly regress from a spatio-temporal volume of bounding boxes to a 3D pose in the central frame. We further show that, for this approach to achieve its full potential, it is essential to compensate for the motion in consecutive frames so that the subject remains centered. This then allows us to effectively overcome ambiguities and improve upon the state-of-the-art by a large margin on the Human3.6m, HumanEva, and KTH Multiview Football 3D human pose estimation benchmarks.", "This work introduces a novel convolutional network architecture for the task of human pose estimation. 
Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a \"stacked hourglass\" network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.", "", "Pose Machines provide a sequential prediction framework for learning rich implicit spatial models. In this work we show a systematic design for how convolutional networks can be incorporated into the pose machine framework for learning image features and image-dependent spatial models for the task of pose estimation. The contribution of this paper is to implicitly model long-range dependencies between variables in structured prediction tasks such as articulated pose estimation. We achieve this by designing a sequential architecture composed of convolutional networks that directly operate on belief maps from previous stages, producing increasingly refined estimates for part locations, without the need for explicit graphical model-style inference. Our approach addresses the characteristic difficulty of vanishing gradients during training by providing a natural learning objective function that enforces intermediate supervision, thereby replenishing back-propagated gradients and conditioning the learning procedure. We demonstrate state-of-the-art performance and outperform competing methods on standard benchmarks including the MPII, LSP, and FLIC datasets.", "" ] }
1609.05314
2522806453
This paper analyzes the connection between the protocol and physical interference models in the setting of Poisson wireless networks. A transmission is successful under the protocol model if there are no interferers within a parameterized guard zone around the receiver, while a transmission is successful under the physical model if the signal to interference plus noise ratio (SINR) at the receiver is above a threshold. The parameterized protocol model forms a family of decision rules for predicting the success or failure of the same transmission attempt under the physical model. For Poisson wireless networks, we employ stochastic geometry to determine the prior, evidence, and posterior distributions associated with this estimation problem. With this in hand, we proceed to develop five sets of results: i) the maximum correlation of protocol and physical model success indicators, ii) the minimum Bayes risk in estimating physical success from a protocol observation, iii) the receiver operating characteristic (ROC) of false rejection (Type I) and false acceptance (Type II) probabilities, iv) the impact of Rayleigh fading vs. no fading on the correlation and ROC, and v) the impact of multiple prior protocol model observations in the setting of a wireless network with a fixed set of nodes in which the nodes employ the slotted Aloha protocol in each time slot.
Several works have explored how to employ the protocol model within the context of scheduling @cite_3 @cite_2 @cite_0 . Hasan and Andrews @cite_3 study the protocol model as a scheduling algorithm in CDMA-based wireless ad hoc networks. They observe that a guard zone around each receiver induces a natural tradeoff between interference and spatial reuse, affecting higher-layer performance metrics such as transmission capacity, and they employ stochastic geometry to derive a guard zone that maximizes transmission capacity. Shi et al. @cite_5 examine the use of the protocol model within a cross-layer optimization framework and provide a strategy for correcting infeasible schedules generated under the protocol model by allowing transmission-rate adaptation to the physical model SINR. Zhang et al. @cite_1 analyze the effectiveness of protocol model scheduling using a variety of analytical, simulation, and testbed measurements. This body of work on the protocol model as a scheduling paradigm is distinct from our focus on the protocol model as an interference model of the success or failure of attempted transmissions. Iyer et al. @cite_6 compare several interference models via simulation and qualitatively discuss the sacrifices in accuracy associated with abstracted interference models, including the protocol model.
{ "cite_N": [ "@cite_1", "@cite_3", "@cite_6", "@cite_0", "@cite_2", "@cite_5" ], "mid": [ "2155243002", "2164097314", "2158190269", "", "", "2103827152" ], "abstract": [ "Interference model is the basis of MAC protocol design in wireless networked sensing and control, and it directly affects the efficiency and predictability of wireless messaging. To exploit the strengths of both the physical and the protocol interference models, we analyze how network traffic, link length, and wireless signal attenuation affect the optimal instantiation of the protocol model. We also identify the inherent trade-off between reliability and throughput in the model instantiation. Our analysis sheds light on the open problem of efficiently optimizing the protocol model instantiation. Based on the analytical results, we propose the physical-ratio-K (PRK) interference model as a reliability-oriented instantiation of the protocol model. Via analysis, simulation, and testbed-based measurement, we show that PRK-based scheduling achieves a network throughput very close to (e.g., 95p) what is enabled by physical-model-based scheduling while ensuring the required packet delivery reliability. The PRK model inherits both the high fidelity of the physical model and the locality of the protocol model, thus it is expected to be suitable for distributed protocol design. These findings shed new light on wireless interference models; they also suggest new approaches to MAC protocol design in the presence of uncertainties in network and environmental conditions as well as application QoS requirements.", "In ad hoc networks, it may be helpful to suppress transmissions by nodes around the desired receiver in order to increase the likelihood of successful communication. This paper introduces the concept of a guard zone, defined as the region around each receiver where interfering transmissions are inhibited. 
Using stochastic geometry, the guard zone size that maximizes the transmission capacity for spread spectrum ad hoc networks is derived - narrowband transmission (spreading gain of unity) is a special case. A large guard zone naturally decreases the interference, but at the cost of inefficient spatial reuse. The derived results provide insight into the design of contention resolution algorithms by quantifying the optimal tradeoff between interference and spatial reuse in terms of the system parameters. A capacity increase relative to random access (ALOHA) in the range of 2 - 100 fold is demonstrated through an optimal guard zone; the capacity increase depending primarily on the required outage probability, as higher required QoS increasingly rewards scheduling. Compared to the ubiquitous carrier sense multiple access (CSMA) which essentially implements a guard zone around the transmitter rather than the receiver - we observe a capacity increase on the order of 30 - 100", "In wireless communications, the desired wireless signal is typically decoded by treating the sum of all the other ongoing signal transmissions as noise. In the networking literature, this phenomenon is typically abstracted using a wireless channel interference model. The level of detail in the interference model, evidently determines the accuracy of the results based upon the model. Several works in the networking literature have made use of simplistic interference models, e.g., fixed ranges for communication and interference, the capture threshold model (used in the ns2 network simulator), the protocol model, and so on. At the same time, fairly complex interference models such as those based on the SINR (signal-to-interference-and-noise ratio) have also been proposed and used. We investigate the impact of the choice of the interference model, on the conclusions that can be drawn regarding the performance of wireless networks, by comparing different wireless interference models. 
We find that both in the case of random access networks, as well as in the case of scheduled networks (where node transmissions are scheduled to be completely conflict-free), different interference models can produce significantly different results. Therefore, a lot of caution should be exercised before accepting or interpreting results based on simplified interference models. Further, we feel that an SINR-based model is the minimum level of detail that should be employed to model wireless channel interference in a networking context.", "", "", "This paper tries to reconcile the tension between the physical model and the protocol model that have been used to characterize interference relationship in a multihop wireless network. The physical model (a.k.a. signal-to-interference-and-noise ratio model) is widely considered as a reference model for physical layer behavior but its application in multihop wireless networks is limited by its complexity. On the other hand, the protocol model (a.k.a. disk graph model) is simple but there have been doubts on its validity. This paper explores the following fundamental question: How to correctly use the protocol interference model? We show that, in general, solutions obtained under the protocol model may be infeasible and, thus, results based on blind use of protocol model can be misleading. We propose a new concept called \"reality check” and present a method of using a protocol model with reality check for wireless networks. Subsequently, we show that by appropriate setting of the interference range in the protocol model, it is possible to narrow the solution gap between the two models. Our simulation results confirm that this gap is indeed small (or even negligible). Thus, our methodology of joint reality check and interference range setting retains the protocol model as a viable approach to analyze multihop wireless networks." ] }
1609.05610
2522640479
Learning to rank is a machine learning technique broadly used in many areas such as document retrieval, collaborative filtering or question answering. We present experimental results which suggest that the performance of the current state-of-the-art learning to rank algorithm LambdaMART, when used for document retrieval for search engines, can be improved if standard regression trees are replaced by oblivious trees. This paper provides a comparison of both variants and our results demonstrate that the use of oblivious trees can improve the performance by more than @math . Additional experimental analysis of the influence of the number of features and of the size of the training set is also provided and confirms the desirability of properties of oblivious decision trees.
For instance, the LTR problem can be reformulated as a regression task of relevance label prediction, which approaches the problem in a pointwise manner. Each query-document pair is then considered a single data sample, and the relations between documents belonging to a particular query are not taken into account. Mean squared error (MSE) is then usually used as the objective function @cite_0 . Random Forest @cite_0 or Multiple Additive Regression Trees (MART) @cite_20 can be utilised to solve the task in the aforementioned manner. Similarly, the PRank algorithm proposed in @cite_30 uses a neural network to predict the relevance label. However, the authors of @cite_30 extend the task to ordinal regression, where the relevance score is converted to a relevance class (resp. label) in the end. Besides, there are also pointwise algorithms that treat the problem as a classification problem. For instance, the McRank algorithm @cite_7 uses a gradient boosting tree algorithm and reformulates the task as multiple ordinal classification.
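The pointwise, MSE-driven view can be sketched as a tiny MART-style booster over depth-1 regression stumps: fit the residuals round by round, then rank a query's documents by predicted relevance. The toy query, features, and hyper-parameters below are invented for illustration.

```python
def fit_stump(X, residuals):
    """Best depth-1 regression tree (feature, threshold, leaf means) by SSE."""
    best = None
    for j in range(len(X[0])):
        values = sorted({row[j] for row in X})
        for lo, hi in zip(values, values[1:]):
            t = (lo + hi) / 2
            left = [r for row, r in zip(X, residuals) if row[j] <= t]
            right = [r for row, r in zip(X, residuals) if row[j] > t]
            lm, rm = sum(left) / len(left), sum(right) / len(right)
            sse = (sum((r - lm) ** 2 for r in left)
                   + sum((r - rm) ** 2 for r in right))
            if best is None or sse < best[0]:
                best = (sse, j, t, lm, rm)
    return best[1:]

def fit_mart(X, y, rounds=100, lr=0.1):
    """Pointwise boosting: each round fits a stump to the MSE residuals."""
    base = sum(y) / len(y)
    F = [base] * len(y)
    stumps = []
    for _ in range(rounds):
        residuals = [yi - fi for yi, fi in zip(y, F)]
        j, t, lm, rm = fit_stump(X, residuals)
        stumps.append((j, t, lm, rm))
        F = [fi + lr * (lm if row[j] <= t else rm) for row, fi in zip(X, F)]
    return base, lr, stumps

def score(model, row):
    base, lr, stumps = model
    return base + sum(lr * (lm if row[j] <= t else rm)
                      for j, t, lm, rm in stumps)

# Toy query: six documents with two features and graded relevance labels 0-2.
X = [[0.1, 0.9], [0.4, 0.2], [0.7, 0.5], [0.9, 0.1], [0.2, 0.6], [0.8, 0.3]]
y = [0, 1, 1, 2, 0, 2]
model = fit_mart(X, y)
ranking = sorted(range(len(X)), key=lambda i: -score(model, X[i]))
```

Note that the objective never sees which documents share a query, which is exactly the limitation of the pointwise approach mentioned above.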
{ "cite_N": [ "@cite_0", "@cite_7", "@cite_30", "@cite_20" ], "mid": [ "", "2120391124", "2171541062", "1678356000" ], "abstract": [ "", "We cast the ranking problem as (1) multiple classification (\"Mc\") (2) multiple ordinal classification, which lead to computationally tractable learning algorithms for relevance ranking in Web search. We consider the DCG criterion (discounted cumulative gain), a standard quality measure in information retrieval. Our approach is motivated by the fact that perfect classifications result in perfect DCG scores and the DCG errors are bounded by classification errors. We propose using the Expected Relevance to convert class probabilities into ranking scores. The class probabilities are learned using a gradient boosting tree algorithm. Evaluations on large-scale datasets show that our approach can improve LambdaRank [5] and the regressions-based ranker [6], in terms of the (normalized) DCG scores. An efficient implementation of the boosting tree algorithm is also presented.", "We discuss the problem of ranking instances. In our framework each instance is associated with a rank or a rating, which is an integer from 1 to k. Our goal is to find a rank-predict ion rule that assigns each instance a rank which is as close as possible to the instance's true rank. We describe a simple and efficient online algorithm, analyze its performance in the mistake bound model, and prove its correctness. We describe two sets of experiments, with synthetic data and with the EachMovie dataset for collaborative filtering. In the experiments we performed, our algorithm outperforms online algorithms for regression and classification applied to ranking.", "Function estimation approximation is viewed from the perspective of numerical optimization in function space, rather than parameter space. A connection is made between stagewise additive expansions and steepest-descent minimization. 
A general gradient descent boosting paradigm is developed for additive expansions based on any fitting criterion. Specific algorithms are presented for least-squares, least absolute deviation, and Huber-M loss functions for regression, and multiclass logistic likelihood for classification. Special enhancements are derived for the particular case where the individual additive components are regression trees, and tools for interpreting such TreeBoost models are presented. Gradient boosting of regression trees produces competitive, highly robust, interpretable procedures for both regression and classification, especially appropriate for mining less than clean data. Connections between this approach and the boosting methods of Freund and Shapire and Friedman, Hastie and Tibshirani are discussed." ] }
1609.05610
2522640479
Learning to rank is a machine learning technique broadly used in many areas such as document retrieval, collaborative filtering or question answering. We present experimental results which suggest that the performance of the current state-of-the-art learning to rank algorithm LambdaMART, when used for document retrieval for search engines, can be improved if standard regression trees are replaced by oblivious trees. This paper provides a comparison of both variants and our results demonstrate that the use of oblivious trees can improve the performance by more than @math . Additional experimental analysis of the influence of the number of features and of the size of the training set is also provided and confirms the desirability of properties of oblivious decision trees.
Algorithms applying a pairwise approach formalise the problem as classification or regression on pairs of query-document samples. As was pointed out in @cite_3 , even though pairwise formalisations benefit from the possibility of using existing classification or regression methods, the results can be suboptimal because the models optimise surrogate loss functions, the computational efficiency can be a problem, and the results can be biased towards queries with more documents. In @cite_19 , the RankingSVM algorithm employs ordinal regression to determine the relative relevance of document pairs. RankBoost @cite_22 is a boosting algorithm based on the idea of AdaBoost and uses a sequence of weak learners in order to minimise the number of incorrectly ordered pairs. The authors of @cite_16 proposed the RankNet algorithm, which learns a neural network to predict the relevance score of a single query-document sample in such a way that the score can be used to correctly order any pair of query-document samples. The network is optimised using gradient descent on a probabilistic cost function defined on pairs of documents.
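The RankNet-style pairwise cost can be sketched with a linear scorer instead of a neural network: for each preference pair, the probability that the more relevant document wins is a sigmoid of the score difference, and SGD on the cross-entropy of that probability pushes pairs into the right order. The toy data and learning-rate choices are invented for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_ranknet_linear(X, y, epochs=200, lr=0.1):
    """SGD on the RankNet pairwise cross-entropy with a linear scorer.
    For each pair where document i is more relevant than j, the update
    pushes P(i beats j) = sigmoid(s_i - s_j) toward 1."""
    w = [0.0] * len(X[0])
    pairs = [(i, j) for i in range(len(y)) for j in range(len(y)) if y[i] > y[j]]
    for _ in range(epochs):
        for i, j in pairs:
            s_i = sum(wk * xk for wk, xk in zip(w, X[i]))
            s_j = sum(wk * xk for wk, xk in zip(w, X[j]))
            grad_scale = 1.0 - sigmoid(s_i - s_j)  # gradient of the pair's loss
            w = [wk + lr * grad_scale * (xi - xj)
                 for wk, xi, xj in zip(w, X[i], X[j])]
    return w

def pairwise_accuracy(w, X, y):
    """Fraction of preference pairs ordered correctly by the scores."""
    s = [sum(wk * xk for wk, xk in zip(w, row)) for row in X]
    pairs = [(i, j) for i in range(len(y)) for j in range(len(y)) if y[i] > y[j]]
    return sum(1 for i, j in pairs if s[i] > s[j]) / len(pairs)

# Toy query: six documents with two features and graded relevance labels 0-2.
X = [[0.1, 0.9], [0.4, 0.2], [0.7, 0.5], [0.9, 0.1], [0.2, 0.6], [0.8, 0.3]]
y = [0, 1, 1, 2, 0, 2]
w = train_ranknet_linear(X, y)
```

Only the relative order of scores matters here; the absolute relevance values are never predicted, which is the defining contrast with the pointwise formulation.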
{ "cite_N": [ "@cite_19", "@cite_16", "@cite_22", "@cite_3" ], "mid": [ "1508409909", "2143331230", "2107890099", "2108862644" ], "abstract": [ "", "We investigate using gradient descent methods for learning ranking functions; we propose a simple probabilistic cost function, and we introduce RankNet, an implementation of these ideas using a neural network to model the underlying ranking function. We present test results on toy data and on data from a commercial internet search engine.", "We study the problem of learning to accurately rank a set of objects by combining a given collection of ranking or preference functions. This problem of combining preferences arises in several applications, such as that of combining the results of different search engines, or the \"collaborative-filtering\" problem of ranking movies for a user based on the movie rankings provided by other users. In this work, we begin by presenting a formal framework for this general problem. We then describe and analyze an efficient algorithm called RankBoost for combining preferences based on the boosting approach to machine learning. We give theoretical results describing the algorithm's behavior both on the training data, and on new test data not seen during training. We also describe an efficient implementation of the algorithm for a particular restricted but common case. We next discuss two experiments we carried out to assess the performance of RankBoost. In the first experiment, we used the algorithm to combine different web search strategies, each of which is a query expansion for a given domain. The second experiment is a collaborative-filtering task for making movie recommendations.", "The paper is concerned with learning to rank, which is to construct a model or a function for ranking objects. Learning to rank is useful for document retrieval, collaborative filtering, and many other applications. 
Several methods for learning to rank have been proposed, which take object pairs as 'instances' in learning. We refer to them as the pairwise approach in this paper. Although the pairwise approach offers advantages, it ignores the fact that ranking is a prediction task on list of objects. The paper postulates that learning to rank should adopt the listwise approach in which lists of objects are used as 'instances' in learning. The paper proposes a new probabilistic method for the approach. Specifically it introduces two probability models, respectively referred to as permutation probability and top k probability, to define a listwise loss function for learning. Neural Network and Gradient Descent are then employed as model and algorithm in the learning method. Experimental results on information retrieval show that the proposed listwise approach performs better than the pairwise approach." ] }
1609.05610
2522640479
Learning to rank is a machine learning technique broadly used in many areas such as document retrieval, collaborative filtering or question answering. We present experimental results which suggest that the performance of the current state-of-the-art learning to rank algorithm LambdaMART, when used for document retrieval for search engines, can be improved if standard regression trees are replaced by oblivious trees. This paper provides a comparison of both variants and our results demonstrate that the use of oblivious trees can improve the performance by more than @math . Additional experimental analysis of the influence of the number of features and of the size of the training set is also provided and confirms the desirability of properties of oblivious decision trees.
Algorithms taking the whole ranking list into account belong to the group of listwise algorithms. The approach is straightforward and uses all information about the ranked list to further improve the model. On the other hand, direct optimisation is very challenging. The authors of PermuRank @cite_2 use an SVM technique to minimise a hinge loss function on permutations of documents. Similarly, AdaRank @cite_6 repeatedly constructs weak rankers in order to minimise an exponential loss which is derived from the original performance measure. Examples of other algorithms that employ the listwise approach are ListMLE @cite_1 , ListNet @cite_3 , RankCosine @cite_27 , LambdaRank @cite_26 and LambdaMART @cite_9 . Note that LambdaRank was the first algorithm to propose using lambdas to define gradients. The LambdaMART algorithm is the main focus of this paper.
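The listwise objective is typically a measure defined on the whole ranked list, such as NDCG; LambdaRank-style lambdas weight each pairwise gradient by the change in NDCG that swapping the pair would cause. A minimal sketch, assuming the standard exponential-gain, log-discount form of DCG:

```python
import math

def dcg(labels):
    """Discounted cumulative gain of relevance labels in ranked order."""
    return sum((2 ** rel - 1) / math.log2(pos + 2)
               for pos, rel in enumerate(labels))

def ndcg(labels):
    """DCG normalised by the DCG of the ideal (descending-relevance) order."""
    ideal = dcg(sorted(labels, reverse=True))
    return dcg(labels) / ideal if ideal > 0 else 0.0

def swap_delta_ndcg(labels, i, j):
    """|Delta NDCG| from swapping positions i and j: the factor that
    lambda-based methods use to weight the gradient for that pair."""
    swapped = list(labels)
    swapped[i], swapped[j] = swapped[j], swapped[i]
    return abs(ndcg(swapped) - ndcg(labels))
```

Because the discount decays with position, a swap involving the top position moves NDCG more than the same swap lower in the list, which is how the lambdas concentrate learning effort on the head of the ranking.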
{ "cite_N": [ "@cite_26", "@cite_9", "@cite_1", "@cite_3", "@cite_6", "@cite_27", "@cite_2" ], "mid": [ "2128877075", "2115584760", "2091158010", "2108862644", "2142537246", "2103179193", "2171749496" ], "abstract": [ "The quality measures used in information retrieval are particularly difficult to optimize directly, since they depend on the model scores only through the sorted order of the documents returned for a given query. Thus, the derivatives of the cost with respect to the model parameters are either zero, or are undefined. In this paper, we propose a class of simple, flexible algorithms, called LambdaRank, which avoids these difficulties by working with implicit cost functions. We describe LambdaRank using neural network models, although the idea applies to any differentiable function class. We give necessary and sufficient conditions for the resulting implicit cost function to be convex, and we show that the general method has a simple mechanical interpretation. We demonstrate significantly improved accuracy, over a state-of-the-art ranking algorithm, on several datasets. We also show that LambdaRank provides a method for significantly speeding up the training phase of that ranking algorithm. Although this paper is directed towards ranking, the proposed method can be extended to any non-smooth and multivariate cost functions.", "LambdaMART is the boosted tree version of LambdaRank, which is based on RankNet. RankNet, LambdaRank, and LambdaMART have proven to be very successful algorithms for solving real world ranking problems: for example an ensemble of LambdaMART rankers won Track 1 of the 2010 Yahoo! Learning To Rank Challenge. The details of these algorithms are spread across several papers and reports, and so here we give a self-contained, detailed and complete description of them.", "This paper aims to conduct a study on the listwise approach to learning to rank. 
The listwise approach learns a ranking function by taking individual lists as instances and minimizing a loss function defined on the predicted list and the ground-truth list. Existing work on the approach mainly focused on the development of new algorithms; methods such as RankCosine and ListNet have been proposed and good performances by them have been observed. Unfortunately, the underlying theory was not sufficiently studied so far. To amend the problem, this paper proposes conducting theoretical analysis of learning to rank algorithms through investigations on the properties of the loss functions, including consistency, soundness, continuity, differentiability, convexity, and efficiency. A sufficient condition on consistency for ranking is given, which seems to be the first such result obtained in related research. The paper then conducts analysis on three loss functions: likelihood loss, cosine loss, and cross entropy loss. The latter two were used in RankCosine and ListNet. The use of the likelihood loss leads to the development of a new listwise method called ListMLE, whose loss function offers better properties, and also leads to better experimental results.", "The paper is concerned with learning to rank, which is to construct a model or a function for ranking objects. Learning to rank is useful for document retrieval, collaborative filtering, and many other applications. Several methods for learning to rank have been proposed, which take object pairs as 'instances' in learning. We refer to them as the pairwise approach in this paper. Although the pairwise approach offers advantages, it ignores the fact that ranking is a prediction task on list of objects. The paper postulates that learning to rank should adopt the listwise approach in which lists of objects are used as 'instances' in learning. The paper proposes a new probabilistic method for the approach. 
Specifically it introduces two probability models, respectively referred to as permutation probability and top k probability, to define a listwise loss function for learning. Neural Network and Gradient Descent are then employed as model and algorithm in the learning method. Experimental results on information retrieval show that the proposed listwise approach performs better than the pairwise approach.", "In this paper we address the issue of learning to rank for document retrieval. In the task, a model is automatically created with some training data and then is utilized for ranking of documents. The goodness of a model is usually evaluated with performance measures such as MAP (Mean Average Precision) and NDCG (Normalized Discounted Cumulative Gain). Ideally a learning algorithm would train a ranking model that could directly optimize the performance measures with respect to the training data. Existing methods, however, are only able to train ranking models by minimizing loss functions loosely related to the performance measures. For example, Ranking SVM and RankBoost train ranking models by minimizing classification errors on instance pairs. To deal with the problem, we propose a novel learning algorithm within the framework of boosting, which can minimize a loss function directly defined on the performance measures. Our algorithm, referred to as AdaRank, repeatedly constructs 'weak rankers' on the basis of reweighted training data and finally linearly combines the weak rankers for making ranking predictions. We prove that the training process of AdaRank is exactly that of enhancing the performance measure used. Experimental results on four benchmark datasets show that AdaRank significantly outperforms the baseline methods of BM25, Ranking SVM, and RankBoost.", "Many machine learning technologies such as support vector machines, boosting, and neural networks have been applied to the ranking problem in information retrieval. 
However, since originally the methods were not developed for this task, their loss functions do not directly link to the criteria used in the evaluation of ranking. Specifically, the loss functions are defined on the level of documents or document pairs, in contrast to the fact that the evaluation criteria are defined on the level of queries. Therefore, minimizing the loss functions does not necessarily imply enhancing ranking performances. To solve this problem, we propose using query-level loss functions in learning of ranking functions. We discuss the basic properties that a query-level loss function should have and propose a query-level loss function based on the cosine similarity between a ranking list and the corresponding ground truth. We further design a coordinate descent algorithm, referred to as RankCosine, which utilizes the proposed loss function to create a generalized additive ranking model. We also discuss whether the loss functions of existing ranking algorithms can be extended to query-level. Experimental results on the datasets of TREC web track, OHSUMED, and a commercial web search engine show that with the use of the proposed query-level loss function we can significantly improve ranking accuracies. Furthermore, we found that it is difficult to extend the document-level loss functions to query-level loss functions.", "One of the central issues in learning to rank for information retrieval is to develop algorithms that construct ranking models by directly optimizing evaluation measures used in information retrieval such as Mean Average Precision (MAP) and Normalized Discounted Cumulative Gain (NDCG). Several such algorithms including SVMmap and AdaRank have been proposed and their effectiveness has been verified. However, the relationships between the algorithms are not clear, and furthermore no comparisons have been conducted between them. 
In this paper, we conduct a study on the approach of directly optimizing evaluation measures in learning to rank for Information Retrieval (IR). We focus on the methods that minimize loss functions upper bounding the basic loss function defined on the IR measures. We first provide a general framework for the study and analyze the existing algorithms of SVMmap and AdaRank within the framework. The framework is based on upper bound analysis and two types of upper bounds are discussed. Moreover, we show that we can derive new algorithms on the basis of this analysis and create one example algorithm called PermuRank. We have also conducted comparisons between SVMmap, AdaRank, PermuRank, and conventional methods of Ranking SVM and RankBoost, using benchmark datasets. Experimental results show that the methods based on direct optimization of evaluation measures can always outperform conventional methods of Ranking SVM and RankBoost. However, no significant difference exists among the performances of the direct optimization methods themselves." ] }
1609.05610
2522640479
Learning to rank is a machine learning technique broadly used in many areas such as document retrieval, collaborative filtering or question answering. We present experimental results which suggest that the performance of the current state-of-the-art learning to rank algorithm LambdaMART, when used for document retrieval for search engines, can be improved if standard regression trees are replaced by oblivious trees. This paper provides a comparison of both variants and our results demonstrate that the use of oblivious trees can improve the performance by more than @math . Additional experimental analysis of the influence of the number of features and of the size of the training set is also provided and confirms the desirability of properties of oblivious decision trees.
An oblivious decision tree is a special kind of decision tree with constraints on the selection of a decision rule. Experimental results of Almuallim and Dietterich @cite_18 demonstrated that standard decision trees, e.g. those built using the ID3 algorithm, can perform poorly on datasets with many irrelevant features. This problem is addressed by Langley and Sage in @cite_10 , where they proposed tackling the problem of irrelevant features by using oblivious decision trees. Constraints on decision rule selection were also introduced by Schlimmer in @cite_11 . Although our modification uses a basic greedy top-down induction of oblivious trees, several other methods of oblivious tree construction have been proposed (see @cite_17 @cite_12 @cite_24 ). The authors of the YetiRank algorithm @cite_13 introduced oblivious trees into the LTR task. However, YetiRank works in a different way and utilises oblivious trees differently than LambdaMART.
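The defining constraint is that every node at a given level tests the same (feature, threshold) pair, so a depth-d tree reduces to d shared tests and a leaf table indexed by their d-bit outcome. The greedy level-wise induction below is a minimal sketch of that idea (not the paper's implementation); the XOR-style toy data is invented for illustration.

```python
def fit_oblivious_tree(X, y, depth=2):
    """Greedy level-wise induction of an oblivious regression tree: at each
    level, pick the one (feature, threshold) test, shared by all nodes of
    that level, that minimises the total squared error of the partition."""
    splits = []

    def leaf_index(row, tests):
        idx = 0
        for j, t in tests:
            idx = (idx << 1) | (1 if row[j] > t else 0)
        return idx

    def total_sse(tests):
        groups = {}
        for row, yi in zip(X, y):
            groups.setdefault(leaf_index(row, tests), []).append(yi)
        sse = 0.0
        for vals in groups.values():
            m = sum(vals) / len(vals)
            sse += sum((v - m) ** 2 for v in vals)
        return sse

    for _ in range(depth):
        best = None
        for j in range(len(X[0])):
            values = sorted({row[j] for row in X})
            for lo, hi in zip(values, values[1:]):
                t = (lo + hi) / 2
                sse = total_sse(splits + [(j, t)])
                if best is None or sse < best[0]:
                    best = (sse, (j, t))
        splits.append(best[1])

    global_mean = sum(y) / len(y)
    leaves = {}
    for row, yi in zip(X, y):
        leaves.setdefault(leaf_index(row, splits), []).append(yi)
    leaf_values = {k: sum(v) / len(v) for k, v in leaves.items()}

    def predict(row):
        return leaf_values.get(leaf_index(row, splits), global_mean)

    return splits, predict

# XOR-style toy data: neither single test helps on its own, but the pair of
# shared axis-aligned tests separates the four points exactly.
X = [[0.2, 0.2], [0.2, 0.8], [0.8, 0.2], [0.8, 0.8]]
y = [1.0, 0.0, 0.0, 1.0]
splits, predict = fit_oblivious_tree(X, y, depth=2)
```

The shared-test constraint is what makes evaluation a simple table lookup and, as the cited work argues, what limits the damage that irrelevant features can do.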
{ "cite_N": [ "@cite_13", "@cite_18", "@cite_11", "@cite_24", "@cite_10", "@cite_12", "@cite_17" ], "mid": [ "2785013514", "23418094", "1539166981", "2103970002", "1557564713", "43005446", "" ], "abstract": [ "The problem of ranking the documents according to their relevance to a given query is a hot topic in information retrieval. Most learning-to-rank methods are supervised and use human editor judgements for learning. In this paper, we introduce novel pairwise method called YetiRank that modifies Friedman's gradient boosting method in part of gradient computation for optimization and takes uncertainty in human judgements into account. Proposed enhancements allowed YetiRank to outperform many state-of-the-art learning to rank methods in offline experiments as well as take the first place in the second track of the Yahoo! learning-to-rank contest. Even more remarkably, the first result in the learning to rank competition that consisted of a transfer learning task was achieved without ever relying on the bigger data from the \"transfer-from\" domain.", "In many domains, an appropriate inductive bias is the MIN-FEATURES bias, which prefers consistent hypotheses definable over as few features as possible. This paper defines and studies this bias. First, it is shown that any learning algorithm implementing the MIN-FEATURES bias requires Θ(1 e ln 1 δ+ 1 e[2p + p ln n]) training examples to guarantee PAC-learning a concept having p relevant features out of n available features. This bound is only logarithmic in the number of irrelevant features. The paper also presents a quasi-polynomial time algorithm, FOCUS, which implements MIN-FEATURES. Experimental studies are presented that compare FOCUS to the ID3 and FRINGE algorithms. These experiments show that-- contrary to expectations--these algorithms do not implement good approximations of MIN-FEATURES. 
The coverage, sample complexity, and generalization performance of FOCUS is substantially better than either ID3 or FRINGE on learning problems where the MIN-FEATURES bias is appropriate. This suggests that, in practical applications, training data should be preprocessed to remove irrelevant features before being given to ID3 or FRINGE.", "Determinations are a useful type of functional knowledge representation. Applications include knowledge-based systems, analogical reasoning, database design, and robotic sensing systems. This paper presents an efficient, batch algorithm for inducing all minimal determinations from observed data. The algorithm is based on breadth-first search and runs in polynomial time and space given a user-supplied parameter limiting the maximum size of a determination. The algorithm uses probabilistic measures to induce determinations despite noisy data. One key contribution is the identification of an enumeration order in the space of possible determinations that affords a complete and systematic search. Another contribution lists axioms that relate neighboring states and allow the construction of pruning rules. A third contribution formulates a perfect hash function for states in this space and facilitates optimal use of the pruning rules. This paper also sketches an algorithm that can incrementally revise a set of determinations given additional data.", "Decision-tree algorithms are known to be unstable: small variations in the training set can result in different trees and different predictions for the same validation examples. Both accuracy and stability can be improved by learning multiple models from bootstrap samples of training data, but the \"meta-learner\" approach makes the extracted knowledge hardly interpretable. In the following paper, we present the Info-Fuzzy Network (IFN), a novel information-theoretic method for building stable and comprehensible decision-tree models. 
The stability of the IFN algorithm is ensured by restricting the tree structure to using the same feature for all nodes of the same tree level and by the built-in statistical significance tests. The IFN method is shown empirically to produce more compact and stable models than the \"meta-learner\" techniques, while preserving a reasonable level of predictive accuracy.", "Abstract : In this paper, we address the problem of case-based learning in the presence of irrelevant features. We review previous work on attribute selection and present a new algorithm, OBLIVION, that carries out greedy pruning of oblivious decision trees, which effectively store a set of abstract cases in memory. We hypothesize that this approach will efficiently identify relevant features even when they interact, as in parity concepts. We report experimental results on artificial domains that support this hypothesis, and experiments with natural domains that show improvement in some cases but not others. In closing, we discuss the implications of our experiments, consider additional work on irrelevant features, and outline some directions for future research.", "We describe a supervised learning algorithm, EODG that uses mutual information to build an oblivious decision tree. The tree is then converted to an Oblivious read-Once Decision Graph (OODG) by merging nodes at the same level of the tree. For domains that art appropriate for both decision trees and OODGs, performance is approximately the same as that of C4.5), but the number of nodes in the OODG is much smaller. The merging phase that converts the oblivious decision tree to an OODG provides a new way of dealing with the replication problem and a new pruning mechanism that works top down starting from the root. The pruning mechanism is well suited for finding symmetries and aids in recovering from splits on irrelevant features that may happen during the tree construction.", "" ] }
1609.05365
2951219499
Historically, true context-sensitive parsing has seldom been applied to programming languages, due to its inherent complexity. However, many mainstream programming and markup languages (C, Haskell, Python, XML, and more) possess context-sensitive features. These features are traditionally handled with ad-hoc code (e.g., custom lexers), outside of the scope of parsing theory. Current grammar formalisms struggle to express context-sensitive features. Most solutions lack context transparency: they make grammars hard to write, maintain and compose by hardwiring context through the entire grammar. Instead, we approach context-sensitive parsing through the idea that parsers may recall previously matched input (or data derived therefrom) in order to make parsing decisions. We make use of mutable parse state to enable this form of recall. We introduce principled stateful parsing as a new transactional discipline that makes state changes transparent to parsing mechanisms such as backtracking and memoization. To enforce this discipline, users specify parsers using formally specified primitive state manipulation operations. Our solution is available as a parsing library named Autumn. We illustrate our solution by implementing some practical context-sensitive grammar features such as significant whitespace handling and namespace classification.
Parsing with backtracking semantic actions @cite_6 is an approach that extends a (general) backtracking LR parser with reversible semantic actions. Upon backtracking, state changes are reversed. Two important restrictions apply: state changes can only occur during term reduction, and the state can only affect the parse through semantic conditions that trigger backtracking.
{ "cite_N": [ "@cite_6" ], "mid": [ "2011321377" ], "abstract": [ "Parsing context-dependent computer languages requires an ability to maintain and query data structures while parsing for the purpose of influencing the parse. Parsing ambiguous computer languages requires an ability to generate a parser for arbitrary context-free grammars. In both cases we have tools for generating parsers from a grammar. However, languages that have both of these properties simultaneously are much more difficult to parse. Consequently, we have fewer techniques. One approach to parsing such languages is to endow traditional LR systems with backtracking. This is a step towards a working solution; however, there are a number of problems. In this work we present two enhancements to a basic backtracking LR approach which enable the parsing of computer languages that are both context-dependent and ambiguous. Using our system we have produced a fast parser for C++ that is composed strictly of a scanner, a name lookup stage, and a parser generated from a grammar augmented with semantic actions and semantic 'undo' actions. Language ambiguities are resolved by prioritizing grammar declarations." ] }
1609.05365
2951219499
Historically, true context-sensitive parsing has seldom been applied to programming languages, due to its inherent complexity. However, many mainstream programming and markup languages (C, Haskell, Python, XML, and more) possess context-sensitive features. These features are traditionally handled with ad-hoc code (e.g., custom lexers), outside of the scope of parsing theory. Current grammar formalisms struggle to express context-sensitive features. Most solutions lack context transparency: they make grammars hard to write, maintain and compose by hardwiring context through the entire grammar. Instead, we approach context-sensitive parsing through the idea that parsers may recall previously matched input (or data derived therefrom) in order to make parsing decisions. We make use of mutable parse state to enable this form of recall. We introduce principled stateful parsing as a new transactional discipline that makes state changes transparent to parsing mechanisms such as backtracking and memoization. To enforce this discipline, users specify parsers using formally specified primitive state manipulation operations. Our solution is available as a parsing library named Autumn. We illustrate our solution by implementing some practical context-sensitive grammar features such as significant whitespace handling and namespace classification.
Despite these caveats, we consider parsing with backtracking semantic actions @cite_6 to be the safest and most convenient system for context-sensitive parsing among those presented in this section.
{ "cite_N": [ "@cite_6" ], "mid": [ "2011321377" ], "abstract": [ "Parsing context-dependent computer languages requires an ability to maintain and query data structures while parsing for the purpose of influencing the parse. Parsing ambiguous computer languages requires an ability to generate a parser for arbitrary context-free grammars. In both cases we have tools for generating parsers from a grammar. However, languages that have both of these properties simultaneously are much more difficult to parse. Consequently, we have fewer techniques. One approach to parsing such languages is to endow traditional LR systems with backtracking. This is a step towards a working solution; however, there are a number of problems. In this work we present two enhancements to a basic backtracking LR approach which enable the parsing of computer languages that are both context-dependent and ambiguous. Using our system we have produced a fast parser for C++ that is composed strictly of a scanner, a name lookup stage, and a parser generated from a grammar augmented with semantic actions and semantic 'undo' actions. Language ambiguities are resolved by prioritizing grammar declarations." ] }
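The mechanism described in the related-work passages above (mutable parse state whose changes are reversed upon backtracking, via semantic 'undo' actions) can be sketched as follows. This is a minimal illustrative Python sketch, not the actual API of the backtracking LR system or of the Autumn library; all names here (`ParseState`, `declare`, `try_parse`) are hypothetical.

```python
# Sketch: transactional parse state with reversible "semantic actions".
# Each state change records an undo closure; backtracking replays the
# undo log back to a saved mark, making state changes transparent to
# the backtracking mechanism (cf. semantic 'undo' actions).

class ParseState:
    def __init__(self):
        self.symbols = set()   # e.g. names declared so far during the parse
        self._log = []         # undo log: closures that reverse one change

    def declare(self, name):
        """A semantic action: declare a name, recording how to undo it."""
        if name not in self.symbols:
            self.symbols.add(name)
            self._log.append(lambda: self.symbols.discard(name))

    def snapshot(self):
        """Mark the current position in the undo log."""
        return len(self._log)

    def rollback(self, mark):
        """Reverse all state changes made since the mark."""
        while len(self._log) > mark:
            self._log.pop()()


def try_parse(state, branch):
    """Attempt a parsing branch; roll back its state changes on failure."""
    mark = state.snapshot()
    ok = branch(state)
    if not ok:
        state.rollback(mark)
    return ok


state = ParseState()

# A branch that declares a name but then fails (e.g. a semantic
# condition rejects the parse): its declaration must be undone.
failed = try_parse(state, lambda s: (s.declare("x"), False)[1])

# A branch that declares a name and succeeds: its declaration persists.
ok = try_parse(state, lambda s: (s.declare("y"), True)[1])
```

The same discipline generalizes from a single set of symbols to arbitrary mutable structures, as long as every mutation pushes a corresponding undo closure before it is applied.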