aid: string (9 to 15 chars)
mid: string (7 to 10 chars)
abstract: string (78 to 2.56k chars)
related_work: string (92 to 1.77k chars)
ref_abstract: dict
1602.07860
2279780532
Submodular function maximization finds application in a variety of real-world decision-making problems. However, most existing methods, based on greedy maximization, assume it is computationally feasible to evaluate F, the function being maximized. Unfortunately, in many realistic settings F is too expensive to evaluate exactly even once. We present probably approximately correct greedy maximization, which requires access only to cheap anytime confidence bounds on F and uses them to prune elements. We show that, with high probability, our method returns an approximately optimal set. We propose novel, cheap confidence bounds for conditional entropy, which appears in many common choices of F and for which it is difficult to find unbiased or bounded estimates. Finally, results on a real-world dataset from a multi-camera tracking system in a shopping mall demonstrate that our approach performs comparably to existing methods, but at a fraction of the computational cost.
Most work on submodular function maximization focuses on algorithms for approximate greedy maximization that minimize the number of evaluations of @math @cite_9 @cite_4 @cite_1 @cite_0 . In particular, @cite_0 randomly sample a subset from @math on each iteration and select the element from this subset that maximizes the marginal gain. Badanidiyuru and Vondrák @cite_1 select, on each iteration, an element whose marginal gain exceeds a certain threshold. Other proposed methods, which maximize surrogate submodular functions @cite_4 @cite_16 or address streaming @cite_13 or distributed settings @cite_19 , also assume access to exact @math . In contrast, our approach assumes that @math is too expensive to compute even once and works instead with confidence bounds on @math . Prior work has also proposed approximating conditional entropy for submodular function maximization, while still assuming that the exact posterior entropies can be computed. In our case, computing exact posterior entropy is prohibitively expensive.
{ "cite_N": [ "@cite_4", "@cite_9", "@cite_1", "@cite_0", "@cite_19", "@cite_16", "@cite_13" ], "mid": [ "52746146", "1898824936", "645876618", "2950865888", "2170735271", "2139238311", "2158504911" ], "abstract": [ "Motivated by extremely large-scale machine learning problems, we introduce a new multi-stage algorithmic framework for submodular maximization (called MULTGREED), where at each stage we apply an approximate greedy procedure to maximize surrogate submodular functions. The surrogates serve as proxies for a target submodular function but require less memory and are easy to evaluate. We theoretically analyze the performance guarantee of the multi-stage framework and give examples on how to design instances of MULTGREED for a broad range of natural submodular functions. We show that MULTGREED performs very closely to the standard greedy algorithm given appropriate surrogate functions and argue how our framework can easily be integrated with distributive algorithms for further optimization. We complement our theory by empirically evaluating on several real-world problems, including data subset selection on millions of speech samples where MULTGREED yields at least a thousand times speedup and superior results over the state-of-the-art selection methods.", "Given a finite set E and a real valued function f on P(E) (the power set of E) the optimal subset problem (P) is to find S ⊂ E maximizing f over P(E). Many combinatorial optimization problems can be formulated in these terms. Here, a family of approximate solution methods is studied: the greedy algorithms.", "From a leading computer scientist, a unifying theory that will revolutionize our understanding of how life evolves and learns. How does life prosper in a complex and erratic world? While we know that nature follows patterns, such as the law of gravity, our everyday lives are beyond what known science can predict. We nevertheless muddle through even in the absence of theories of how to act. 
But how do we do it? In Probably Approximately Correct, computer scientist Leslie Valiant presents a masterful synthesis of learning and evolution to show how both individually and collectively we not only survive, but prosper in a world as complex as our own. The key is probably approximately correct algorithms, a concept Valiant developed to explain how effective behavior can be learned. The model shows that pragmatically coping with a problem can provide a satisfactory solution in the absence of any theory of the problem. After all, finding a mate does not require a theory of mating. Valiant's theory reveals the shared computational nature of evolution and learning, and sheds light on perennial questions such as nature versus nurture and the limits of artificial intelligence. Offering a powerful and elegant model that encompasses life's complexity, Probably Approximately Correct has profound implications for how we think about behavior, cognition, biological evolution, and the possibilities and limits of human and machine intelligence.", "Is it possible to maximize a monotone submodular function faster than the widely used lazy greedy algorithm (also known as accelerated greedy), both in theory and practice? In this paper, we develop the first linear-time algorithm for maximizing a general monotone submodular function subject to a cardinality constraint. We show that our randomized algorithm, STOCHASTIC-GREEDY, can achieve a @math approximation guarantee, in expectation, to the optimum solution in time linear in the size of the data and independent of the cardinality constraint. We empirically demonstrate the effectiveness of our algorithm on submodular functions arising in data summarization, including training large-scale kernel methods, exemplar-based clustering, and sensor placement. We observe that STOCHASTIC-GREEDY practically achieves the same utility value as lazy greedy but runs much faster. 
More surprisingly, we observe that in many practical scenarios STOCHASTIC-GREEDY does not evaluate the whole fraction of data points even once and still achieves indistinguishable results compared to lazy greedy.", "Many large-scale machine learning problems (such as clustering, non-parametric learning, kernel machines, etc.) require selecting, out of a massive data set, a manageable yet representative subset. Such problems can often be reduced to maximizing a submodular set function subject to cardinality constraints. Classical approaches require centralized access to the full data set; but for truly large-scale problems, rendering the data centrally is often impractical. In this paper, we consider the problem of submodular function maximization in a distributed fashion. We develop a simple, two-stage protocol GREEDI, that is easily implemented using MapReduce style computations. We theoretically analyze our approach, and show, that under certain natural conditions, performance close to the (impractical) centralized approach can be achieved. In our extensive experiments, we demonstrate the effectiveness of our approach on several applications, including sparse Gaussian process inference and exemplar-based clustering, on tens of millions of data points using Hadoop.", "Active learning can lead to a dramatic reduction in labeling effort. However, in many practical implementations (such as crowdsourcing, surveys, high-throughput experimental design), it is preferable to query labels for batches of examples to be labelled in parallel. While several heuristics have been proposed for batch-mode active learning, little is known about their theoretical performance. We consider batch mode active learning and more general information-parallel stochastic optimization problems that exhibit adaptive submodularity, a natural diminishing returns condition. We prove that for such problems, a simple greedy strategy is competitive with the optimal batch-mode policy. 
In some cases, surprisingly, the use of batches incurs competitively low cost, even when compared to a fully sequential strategy. We demonstrate the effectiveness of our approach on batch-mode active learning tasks, where it outperforms the state of the art, as well as the novel problem of multi-stage influence maximization in social networks.", "We consider the problem of extracting informative exemplars from a data stream. Examples of this problem include exemplar-based clustering and nonparametric inference such as Gaussian process regression on massive data sets. We show that these problems require maximization of a submodular function that captures the informativeness of a set of exemplars, over a data stream. We develop an efficient algorithm, Stream-Greedy, which is guaranteed to obtain a constant fraction of the value achieved by the optimal solution to this NP-hard optimization problem. We extensively evaluate our algorithm on large real-world data sets." ] }
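As a concrete illustration of the sampling idea behind STOCHASTIC-GREEDY summarized in the abstract above, here is a minimal Python sketch on a toy coverage function. The sample-size formula follows the description of the algorithm; the toy sets and parameters are invented for illustration and are not from the cited work.

```python
import math
import random

def stochastic_greedy(ground_set, f, k, eps=0.1, seed=0):
    """STOCHASTIC-GREEDY sketch: each iteration draws a random subset of
    size about (n/k)*log(1/eps) and adds the element with the largest
    marginal gain, giving a (1 - 1/e - eps) guarantee in expectation for
    monotone submodular f."""
    rng = random.Random(seed)
    n = len(ground_set)
    sample_size = min(n, max(1, math.ceil((n / k) * math.log(1 / eps))))
    selected = set()
    for _ in range(k):
        candidates = [e for e in ground_set if e not in selected]
        if not candidates:
            break
        sample = rng.sample(candidates, min(sample_size, len(candidates)))
        # pick the sampled element with the largest marginal gain
        best = max(sample, key=lambda e: f(selected | {e}) - f(selected))
        selected.add(best)
    return selected

# Toy monotone submodular function: coverage of a small universe.
coverage_sets = {
    "a": {1, 2, 3}, "b": {3, 4}, "c": {5, 6, 7}, "d": {1, 7}, "e": {8},
}
def f(S):
    covered = set()
    for name in S:
        covered |= coverage_sets[name]
    return len(covered)

picked = stochastic_greedy(list(coverage_sets), f, k=3)
print(picked, f(picked))
```

On this instance the sampled subsets are large enough that the sketch recovers the same value as full greedy while evaluating fewer marginal gains per iteration.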
1602.07860
2279780532
Submodular function maximization finds application in a variety of real-world decision-making problems. However, most existing methods, based on greedy maximization, assume it is computationally feasible to evaluate F, the function being maximized. Unfortunately, in many realistic settings F is too expensive to evaluate exactly even once. We present probably approximately correct greedy maximization, which requires access only to cheap anytime confidence bounds on F and uses them to prune elements. We show that, with high probability, our method returns an approximately optimal set. We propose novel, cheap confidence bounds for conditional entropy, which appears in many common choices of F and for which it is difficult to find unbiased or bounded estimates. Finally, results on a real-world dataset from a multi-camera tracking system in a shopping mall demonstrate that our approach performs comparably to existing methods, but at a fraction of the computational cost.
Finally, greedy maximization is known to be robust to approximate selection @cite_24 @cite_20 : if, instead of selecting @math , we select @math such that @math , the total error is bounded by @math . We exploit this property in our method but use confidence bounds to introduce a probabilistic element, such that with high probability @math .
{ "cite_N": [ "@cite_24", "@cite_20" ], "mid": [ "2121671791", "2500139799" ], "abstract": [ "We present an algorithm for solving a broad class of online resource allocation problems. Our online algorithm can be applied in environments where abstract jobs arrive one at a time, and one can complete the jobs by investing time in a number of abstract activities, according to some schedule. We assume that the fraction of jobs completed by a schedule is a monotone, submodular function of a set of pairs (v, τ), where τ is the time invested in activity v. Under this assumption, our online algorithm performs near-optimally according to two natural metrics: (i) the fraction of jobs completed within time T, for some fixed deadline T > 0, and (ii) the average time required to complete each job. We evaluate our algorithm experimentally by using it to learn, online, a schedule for allocating CPU time among solvers entered in the 2007 SAT solver competition.", "Submodularity is a property of set functions with deep theoretical consequences and far– reaching applications. At first glance it appears very similar to concavity, in other ways it resembles convexity. It appears in a wide variety of applications: in Computer Science it has recently been identified and utilized in domains such as viral marketing (, 2003), information gathering (Krause and Guestrin, 2007), image segmentation (Boykov and Jolly, 2001; , 2009; Jegelka and Bilmes, 2011a), document summarization (Lin and Bilmes, 2011), and speeding up satisfiability solvers (Streeter and Golovin, 2008). In this survey we will introduce submodularity and some of its generalizations, illustrate how it arises in various applications, and discuss algorithms for optimizing submodular functions. Our emphasis here is on maximization; there are many important results and applications related to minimizing submodular functions that we do not cover." ] }
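The robustness property described above can be checked numerically. Below is a sketch of an adversarially epsilon-approximate greedy on a toy coverage instance, compared against the brute-force optimum; the instance and the adversarial tie-breaking are illustrative assumptions, not the cited papers' constructions.

```python
import itertools
import math

# Coverage instance: monotone submodular f(S) = |union of chosen sets|.
sets = {"a": {1, 2, 3, 4}, "b": {3, 4, 5}, "c": {5, 6}, "d": {7}, "e": {1, 6, 7}}
def f(S):
    u = set()
    for name in S:
        u |= sets[name]
    return len(u)

def eps_greedy(k, eps):
    """Greedy that may pick any element whose marginal gain is within eps
    of the best one; here it adversarially picks the worst such element."""
    S = set()
    for _ in range(k):
        gains = {e: f(S | {e}) - f(S) for e in sets if e not in S}
        best = max(gains.values())
        choice = min((e for e, g in gains.items() if g >= best - eps),
                     key=lambda e: gains[e])
        S.add(choice)
    return S

k = 3
opt = max(f(set(c)) for c in itertools.combinations(sets, k))
exact = f(eps_greedy(k, eps=0))    # exact greedy: >= (1 - 1/e) * OPT
approx = f(eps_greedy(k, eps=1))   # eps-greedy: >= (1 - 1/e) * OPT - k*eps
print(opt, exact, approx)
```

Even with adversarial epsilon-approximate choices, the total shortfall stays within the additive k*eps term of the exact-greedy guarantee.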
1602.07873
2951527970
In this work we explore the previously proposed approach of direct blind deconvolution and denoising with convolutional neural networks in a situation where the blur kernels are partially constrained. We focus on blurred images from a real-life traffic surveillance system, on which we, for the first time, demonstrate that neural networks trained on artificial data provide superior reconstruction quality on real images compared to traditional blind deconvolution methods. The training data is easy to obtain by blurring sharp photos from a target system with a very rough approximation of the expected blur kernels, thereby allowing custom CNNs to be trained for a specific application (image content and blur range). Additionally, we evaluate the behavior and limits of the CNNs with respect to blur direction range and length.
Modern blind deconvolution methods select a suitable data prior, which is transformed into a simple regularizer in an optimization problem that is solved by alternately estimating the blur kernel @math and the latent image @math @cite_0 . Some existing methods rely on rather ad hoc sharpening steps that appear crucial to their success @cite_6 @cite_17 .
{ "cite_N": [ "@cite_0", "@cite_6", "@cite_17" ], "mid": [ "2047123483", "2161804069", "" ], "abstract": [ "We propose a simple yet effective L0-regularized prior based on intensity and gradient for text image deblurring. The proposed image prior is motivated by observing distinct properties of text images. Based on this prior, we develop an efficient optimization method to generate reliable intermediate results for kernel estimation. The proposed method does not require any complex filtering strategies to select salient edges which are critical to the state-of-the-art deblurring algorithms. We discuss the relationship with other deblurring algorithms based on edge selection and provide insight on how to select salient edges in a more principled way. In the final latent image restoration step, we develop a simple method to remove artifacts and render better deblurred images. Experimental results demonstrate that the proposed algorithm performs favorably against the state-of-the-art text image deblurring methods. In addition, we show that the proposed method can be effectively applied to deblur low-illumination images.", "Blind deconvolution is the recovery of a sharp version of a blurred image when the blur kernel is unknown. Recent algorithms have afforded dramatic progress, yet many aspects of the problem remain challenging and hard to understand. The goal of this paper is to analyze and evaluate recent blind deconvolution algorithms both theoretically and experimentally. We explain the previously reported failure of the naive MAP approach by demonstrating that it mostly favors no-blur explanations. We show that, using reasonable image priors, a naive MAP estimation of both latent image and blur kernel is guaranteed to fail even with infinitely large images sampled from the prior. 
On the other hand, we show that since the kernel size is often smaller than the image size, a MAP estimation of the kernel alone is well constrained and is guaranteed to succeed to recover the true blur. The plethora of recent deconvolution techniques makes an experimental evaluation on ground-truth data important. As a first step toward this experimental evaluation, we have collected blur data with ground truth and compared recent algorithms under equal settings. Additionally, our data demonstrate that the shift-invariant blur assumption made by most algorithms is often violated.", "" ] }
1602.07873
2951527970
In this work we explore the previously proposed approach of direct blind deconvolution and denoising with convolutional neural networks in a situation where the blur kernels are partially constrained. We focus on blurred images from a real-life traffic surveillance system, on which we, for the first time, demonstrate that neural networks trained on artificial data provide superior reconstruction quality on real images compared to traditional blind deconvolution methods. The training data is easy to obtain by blurring sharp photos from a target system with a very rough approximation of the expected blur kernels, thereby allowing custom CNNs to be trained for a specific application (image content and blur range). Additionally, we evaluate the behavior and limits of the CNNs with respect to blur direction range and length.
Several authors have aimed to adapt general blind deconvolution methods to license plates. Fang @cite_2 proposes a new regularization term based on intensities and gradients. Song @cite_13 fuses L0-regularized deblurring with character recognition. Hsieh @cite_16 skips deconvolution and instead focuses on recognizing blurred characters.
{ "cite_N": [ "@cite_16", "@cite_13", "@cite_2" ], "mid": [ "", "1510445798", "1614312083" ], "abstract": [ "", "Nowadays, drive recorders are becoming a popular form of evidence used by drivers and accepted by court. One common investigation task is to identify vehicles of interest and recognize their license plates (LPs). In this paper, we focus on License Plate Recognition (LPR) based on single snapshot from a drive recorder. As drive recorders are installed on moving vehicles, snapshots by drive recorders usually suffer from serious blur, and the key issue is recognizing the Blurred License Plate (BLP) from single image. A straightforward method is first deblurring the BLP and then recognizing it. However, the first problem with this method is that general image deblurring methods are designed to get a good overall visual effect and the deblurred results may be not good for LPR. The second problem is that general image deblurring methods don't use the features of the LPs, which could be important priors for the deblurring process. To overcome these issues, this paper proposes a novel method that integrates deblurring and recognizing in a closed-loop. The proposed method utilizes characters and patterns of LPs as priors, and the deblurring and recognizing process will stop when a reliable recognition result is obtained from the deblurred image. Furthermore, by analyzing the features of BLPs, this paper proposes an l0-norm-based deblurring method. Experiments show that, compared to other LPR methods, the proposed method can achieve higher recognition rate on the BLPs.", "The principal purpose of this paper is to develop a new method to deblur licence plate images. The statistical analyses of plate images we have performed enable us to believe that the binarization threshold is a reasonable parameter to distinguish blurred plate images from clean ones. 
Our approach defines a new regularization term which includes both intensity and gradient priors, and gives an effective and convergent solution. A large number of experiments on a real-world plate image dataset have been conducted by using different algorithms. Compared with other representative deblurring algorithms, the method we propose yields higher-quality results. Moreover, further experiments are carried out to apply our algorithm to non-plate blurred images, such as composite images, saturated images and other common images. The results demonstrate that our method has a state-of-the-art performance on both document and non-document images." ] }
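The intensity-and-gradient prior attributed to Fang @cite_2 exploits the fact that sharp, plate-like images are nearly piecewise constant and so have few nonzero gradients. A toy 1D sketch (the threshold, signals, and blur are illustrative assumptions) shows why an L0 gradient count discriminates sharp from blurred signals:

```python
def l0_gradient(signal, tol=1e-6):
    """Count nonzero first differences: the L0 gradient 'norm' used as a
    sparsity prior in text/plate deblurring."""
    return sum(1 for a, b in zip(signal, signal[1:]) if abs(b - a) > tol)

def box_blur(signal, radius=2):
    """Simple moving-average blur with edge clamping."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

# Piecewise-constant "license plate" stripe pattern.
sharp = [0] * 5 + [1] * 5 + [0] * 5 + [1] * 5
blurred = box_blur(sharp)
print(l0_gradient(sharp), l0_gradient(blurred))
```

The sharp stripe pattern has only three nonzero gradients (one per transition), while blurring smears each transition into a ramp of many small nonzero gradients, so minimizing an L0 gradient term pushes the estimate toward the sharp signal.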
1602.07731
2278865433
The massive amounts of bandwidth available at millimeter-wave frequencies (above 10 GHz) have the potential to greatly increase the capacity of fifth generation cellular wireless systems. However, to overcome the high isotropic propagation loss experienced at these frequencies, highly directional antennas will be required at both the base station and the mobile terminal to achieve sufficient link budget in wide area networks. This reliance on directionality has important implications for control layer procedures. In particular, initial access can be significantly delayed due to the need for the base station and the user to find the proper alignment for directional transmission and reception. This article provides a survey of several recently proposed techniques for this purpose. A coverage and delay analysis is performed to compare various techniques including exhaustive and iterative search, and context-information-based algorithms. We show that the best strategy depends on the target SNR regime, and provide guidelines to characterize the optimal choice as a function of the system parameters.
The initial access problem in mm-Wave cellular networks has been considered, for example, in @cite_9 , where the authors propose an exhaustive method to sequentially scan the @math angular space. In @cite_3 , a directional cell discovery procedure is proposed, in which base stations periodically transmit synchronization signals, potentially in time-varying random directions, to scan the angular space. Initial access design options are also compared in @cite_7 , considering different scanning and signaling procedures, to evaluate access delay and system overhead; the analysis demonstrates significant benefits of low-resolution fully digital architectures in comparison to single-stream analog beamforming. Additionally, in order to alleviate the exhaustive search delay issue, @cite_12 presents a two-phase hierarchical procedure in which a faster user discovery technique is implemented.
{ "cite_N": [ "@cite_12", "@cite_9", "@cite_7", "@cite_3" ], "mid": [ "1987804395", "2035330915", "2964289208", "779733492" ], "abstract": [ "Cellular systems were designed for carrier frequencies in the microwave band (below 3 GHz) but will soon be operating in frequency bands up to 6 GHz. To meet the ever increasing demands for data, deployments in bands above 6 GHz, and as high as 75 GHz, are envisioned. However, as these systems migrate beyond the microwave band, certain channel characteristics can impact their deployment, especially the coverage range. To increase coverage, beamforming can be used but this role of beamforming is different than in current cellular systems, where its primary role is to improve data throughput. Because cellular procedures enable beamforming after a user establishes access with the system, new procedures are needed to enable beamforming during cell discovery and acquisition. This paper discusses several issues that must be resolved in order to use beamforming for access at millimeter wave (mmWave) frequencies, and presents solutions for initial access. Several approaches are verified by computer simulations, and it is shown that reliable network access and satisfactory coverage can be achieved in mmWave frequencies.", "With the formidable growth of various booming wireless communication services that require ever increasing data throughputs, the conventional microwave band below 10 GHz, which is currently used by almost all mobile communication systems, is going to reach its saturation point within just a few years. Therefore, the attention of radio system designers has been pushed toward ever higher segments of the frequency spectrum in a quest for increased capacity. In this article we investigate the feasibility, advantages, and challenges of future wireless communications over the E-band frequencies. We start with a brief review of the history of the E-band spectrum and its light licensing policy as well as its benefits and challenges. 
Then we introduce the propagation characteristics of E-band signals, based on which some potential fixed and mobile applications at the E-band are investigated. In particular, we analyze the achievability of a nontrivial multiplexing gain in fixed point-to-point E-band links, and propose an E-band mobile broadband (EMB) system as a candidate for the next generation mobile communication networks. The channelization and frame structure of the EMB system are discussed in detail.", "Communication in millimeter (mmWave) bands seems an evermore promising prospect for new generation cellular systems. However, due to high isotropic pathloss at these frequencies the use of directional antennas becomes mandatory. Directivity complicates many system design issues that are trivial in current cellular implementations. One such issue is initial access, i.e., the establishment of a link-layer connection between a UE and a base station. Based on different combinations of beamforming architectures and transmission modes, we present a series of design options for initial access in mmWave and compare them in terms of delay performance. We show that the use of digital beamforming for initial access will expedite the whole process significantly. Also, we argue that low quantization digital beamforming can more than compensate for high power consumption.", "The acute disparity between increasing bandwidth demand and available spectrum has brought millimeter wave (mmWave) bands to the forefront of candidate solutions for the next-generation cellular networks. Highly directional transmissions are essential for cellular communication in these frequencies to compensate for higher isotropic path loss. This reliance on directional beamforming, however, complicates initial cell search since mobiles and base stations must jointly search over a potentially large angular directional space to locate a suitable path to initiate communication. 
To address this problem, this paper proposes a directional cell discovery procedure where base stations periodically transmit synchronization signals, potentially in time-varying random directions, to scan the angular space. Detectors for these signals are derived based on a Generalized Likelihood Ratio Test (GLRT) under various signal and receiver assumptions. The detectors are then simulated under realistic design parameters and channels based on actual experimental measurements at 28 GHz in New York City. The study reveals two key findings: 1) digital beamforming can significantly outperform analog beamforming even when digital beamforming uses very low quantization to compensate for the additional power requirements and 2) omnidirectional transmissions of the synchronization signals from the base station generally outperform random directional scanning." ] }
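To see why exhaustive scanning motivates the two-phase hierarchical procedure of @cite_12, a back-of-the-envelope slot count helps. The beam counts and widening factor below are hypothetical, and the hierarchical model is a deliberate simplification of the cited scheme:

```python
import math

def exhaustive_slots(n_bs_beams, n_ue_beams):
    """Exhaustive search: every (BS beam, UE beam) pair gets one slot."""
    return n_bs_beams * n_ue_beams

def two_phase_slots(n_bs_beams, n_ue_beams, widen=4):
    """Hierarchical sketch: phase 1 scans wide beams (widen x fewer
    directions per side), phase 2 refines only within the winning
    wide-beam sector."""
    phase1 = math.ceil(n_bs_beams / widen) * math.ceil(n_ue_beams / widen)
    phase2 = widen * widen
    return phase1 + phase2

n_bs, n_ue = 24, 8  # hypothetical beam counts
print(exhaustive_slots(n_bs, n_ue), two_phase_slots(n_bs, n_ue))
```

With these illustrative numbers the exhaustive sweep needs 192 slots while the two-phase sketch needs 28, which is the kind of delay reduction that motivates hierarchical search; the trade-off is that wide phase-1 beams have lower gain and thus shorter range.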
1602.07731
2278865433
The massive amounts of bandwidth available at millimeter-wave frequencies (above 10 GHz) have the potential to greatly increase the capacity of fifth generation cellular wireless systems. However, to overcome the high isotropic propagation loss experienced at these frequencies, highly directional antennas will be required at both the base station and the mobile terminal to achieve sufficient link budget in wide area networks. This reliance on directionality has important implications for control layer procedures. In particular, initial access can be significantly delayed due to the need for the base station and the user to find the proper alignment for directional transmission and reception. This article provides a survey of several recently proposed techniques for this purpose. A coverage and delay analysis is performed to compare various techniques including exhaustive and iterative search, and context-information-based algorithms. We show that the best strategy depends on the target SNR regime, and provide guidelines to characterize the optimal choice as a function of the system parameters.
On the other hand, context-information-based procedures aim to exploit knowledge about user and/or BS positions, provided by a separate control plane, in order to improve the cell discovery procedure and minimize delay @cite_0 . In @cite_13 , booster cells (operating at mm-Waves) are deployed under the coverage of an anchor cell (operating at microwaves). The anchor BS controls initial access and informs the booster BS about user locations, enabling mm-Wave cells to steer directly toward the user position. Furthermore, @cite_6 presents an evolution of @cite_0 , showing how to capture the effects of position inaccuracy and obstacles. Finally, in @cite_10 , the authors study how the performance of analog beamforming degrades under angular errors in the available context information during initial cell search.
{ "cite_N": [ "@cite_0", "@cite_10", "@cite_13", "@cite_6" ], "mid": [ "2964273971", "2962877103", "2049501010", "2183911578" ], "abstract": [ "The exploitation of the mm-wave bands is one of the most promising solutions for 5G mobile radio networks. However, the use of mm-wave technologies in cellular networks is not straightforward due to mm-wave severe propagation conditions that limit access availability. In order to overcome this obstacle, hybrid network architectures are being considered where mm-wave small cells can exploit an overlay coverage layer based on legacy technology. The additional mm-wave layer can also take advantage of a functional split between control and user plane, that allows to delegate most of the signaling functions to legacy base stations and to gather context information from users for resource optimization. However, mm-wave technology requires multiple antennas and highly directional transmissions to compensate for high path loss and limited power. Directional transmissions must be also used for the cell discovery and synchronization process, and this can lead to a non negligible delay due to need to scan the cell area with multiple transmissions in different angles. In this paper, we propose to exploit the context information related to user position, provided by the separated control plane, to improve the cell discovery procedure and minimize delay. We investigate the fundamental trade-offs of the cell discovery process with directional antennas and the effects of the context information accuracy on its performance. Numerical results are provided to validate our observations.", "Millimeter wave (mmWave) communication is envisioned as a cornerstone to fulfill the data rate requirements for fifth generation (5G) cellular networks. 
In mmWave communication, beamforming is considered as a key technology to combat the high path-loss, and unlike in conventional microwave communication, beamforming may be necessary even during initial access cell search. Among the proposed beamforming schemes for initial cell search, analog beamforming is a power efficient approach but suffers from its inherent search delay during initial access. In this work, we argue that analog beamforming can still be a viable choice when context information about mmWave base stations (BS) is available at the mobile station (MS). We then study how the performance of analog beamforming degrades in case of angular errors in the available context information. Finally, we present an analog beamforming receiver architecture that uses multiple arrays of Phase Shifters and a single RF chain to combat the effect of angular errors, showing that it can achieve the same performance as hybrid beamforming.", "Communication in millimeter wave (mmWave) spectrum has gained an increasing interests for tackling the spectrum crunch problem and meeting the high network capacity demand in 4G and beyond. Considering the channel characteristics of mmWave bands, it can be fit into heterogeneous networks (HetNet) for boosting local-area data rate. In this paper, we investigate the challenges in deploying an anchor-booster based HetNet with mmWave capable booster cells. We show that due to the channel characteristics of mmWave bands, there could be a mismatch between the discoverable coverage area of booster cell at mmWave band and the actual supportable coverage area. Numerical results are provided in validating the observation. We suggest possible ways in addressing the coverage mismatch problem. 
This work provides insights on the deployment and implementation challenges in mmWave capable HetNets.", "With the advent of next-generation mobile devices, wireless networks must be upgraded to fill the gap between huge user data demands and scarce channel capacity. Mm-waves technologies appear as the key-enabler for the future 5G networks design, exhibiting large bandwidth availability and high data rate. As counterpart, the small wave-length incurs in a harsh signal propagation that limits the transmission range. To overcome this limitation, array of antennas with a relatively high number of small elements are used to exploit beamforming techniques that greatly increase antenna directionality both at base station and user terminal. These very narrow beams are used during data transfer and tracking techniques dynamically adapt the direction according to terminal mobility. During cell discovery when initial synchronization must be acquired, however, directionality can delay the process since the best direction to point the beam is unknown. All space must be scanned using the tradeoff between beam width and transmission range. Some support to speed up the cell search process can come from the new architectures for 5G currently being investigated, where conventional wireless network and mm-waves technologies coexist. In these architecture a functional split between C-plane and U-plane allows to guarantee the continuous availability of a signaling channel through conventional wireless technologies with the opportunity to convey context information from users to network. In this paper, we investigate the use of position information provided by user terminals in order to improve the performance of the cell search process. We analyze mm-wave propagation environment and show how it is possible to take into account of position inaccuracy and reflected rays in presence of obstacles." ] }
1602.07731
2278865433
The massive amounts of bandwidth available at millimeter-wave frequencies (above 10 GHz) have the potential to greatly increase the capacity of fifth generation cellular wireless systems. However, to overcome the high isotropic propagation loss experienced at these frequencies, highly directional antennas will be required at both the base station and the mobile terminal to achieve sufficient link budget in wide area networks. This reliance on directionality has important implications for control layer procedures. In particular, initial access can be significantly delayed due to the need for the base station and the user to find the proper alignment for directional transmission and reception. This article provides a survey of several recently proposed techniques for this purpose. A coverage and delay analysis is performed to compare various techniques including exhaustive and iterative search, and context-information-based algorithms. We show that the best strategy depends on the target SNR regime, and provide guidelines to characterize the optimal choice as a function of the system parameters.
In @cite_2 , we presented a comparison between the exhaustive and the iterative techniques. In this work, we expand the analysis to a CI-based algorithm and describe a proposed enhancement. Our goal is to compare multiple IA procedures under an overhead constraint and to derive the best trade-offs, in terms of both misdetection probability and discovery delay, when considering a realistic dense, urban, multi-path scenario.
{ "cite_N": [ "@cite_2" ], "mid": [ "2345176225" ], "abstract": [ "The millimeter wave frequencies (roughly above 10 GHz) offer the availability of massive bandwidth to greatly increase the capacity of fifth generation (5G) cellular wireless systems. However, to overcome the high isotropic pathloss at these frequencies, highly directional transmissions will be required at both the base station (BS) and the mobile user equipment (UE) to establish sufficient link budget in wide area networks. This reliance on directionality has important implications for control layer procedures. Initial access in particular can be significantly delayed due to the need for the BS and the UE to find the initial directions of transmission. This paper provides a survey of several recently proposed techniques. Detection probability and delay analysis is performed to compare various techniques including exhaustive and iterative search. We show that the optimal strategy depends on the target SNR regime." ] }
1602.07715
2274438912
This paper addresses the problem of determining the distance between two regular languages. It will show how to expand Jaccard distance, which works on finite sets, to potentially-infinite regular languages. The entropy of a regular language plays a large role in the extension. Much of the paper is spent investigating the entropy of a regular language. This includes addressing issues that have required previous authors to rely on the upper limit of Shannon's traditional formulation of entropy, because its limit does not always exist. The paper also includes proposing a new limit based formulation for the entropy of a regular language and proves that formulation to both exist and be equivalent to Shannon's original formulation (when it exists). Additionally, the proposed formulation is shown to equal an analogous but formally quite different notion of topological entropy from Symbolic Dynamics -- consequently also showing Shannon's original formulation to be equivalent to topological entropy. Surprisingly, the natural Jaccard-like entropy distance is trivial in most cases. Instead, the entropy sum distance metric is suggested, and shown to be granular in certain situations.
Chomsky and Miller's seminal paper on regular languages @cite_21 does not address distances between regular languages. It uses Shannon's notion of channel capacity (equation 7 from @cite_21 ) for the entropy of a regular language: @math
{ "cite_N": [ "@cite_21" ], "mid": [ "2017473312" ], "abstract": [ "A finite state language is a finite or infinite set of strings (sentences) of symbols (words) generated by a finite set of rules (the grammar), where each rule specifies the state of the system in which it can be applied, the symbol which is generated, and the state of the system after the rule is applied. A number of equivalent descriptions of finite state languages are explored. A simple structural characterization theorem for finite state languages is established, based on the cyclical structure of the grammar. It is shown that the complement of any finite state language formed on a given vocabulary of symbols is also a finite state language, and that the union of any two finite state languages formed on a given vocabulary is a finite state language; i.e., the set of all finite state languages that can be formed on a given vocabulary is a Boolean algebra. Procedures for calculating the number of grammatical strings of any given length are also described." ] }
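The Chomsky–Miller/Shannon formulation above defines entropy from N(n), the number of accepted strings of length exactly n. As a minimal illustrative sketch (the DFA encoding and function names are my own, not from the cited paper), N(n) can be computed by dynamic programming over a DFA's transition table:

```python
from math import log2

def count_length_n(dfa, start, accept, n):
    """Count strings of length n accepted by a DFA given as
    {state: {symbol: next_state}}."""
    counts = {start: 1}  # counts[q] = number of length-k paths from start to q
    for _ in range(n):
        nxt = {}
        for q, c in counts.items():
            for sym, r in dfa.get(q, {}).items():
                nxt[r] = nxt.get(r, 0) + c
        counts = nxt
    return sum(c for q, c in counts.items() if q in accept)

# all strings over {a, b}: N(n) = 2^n, so log2(N(n)) / n -> 1 bit per symbol
dfa = {0: {"a": 0, "b": 0}}
estimate = log2(count_length_n(dfa, 0, {0}, 12)) / 12
```

The ratio log2(N(n))/n plays the role of Shannon's channel-capacity quantity; the entropy is its limiting value as n grows.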
1602.07715
2274438912
This paper addresses the problem of determining the distance between two regular languages. It will show how to expand Jaccard distance, which works on finite sets, to potentially-infinite regular languages. The entropy of a regular language plays a large role in the extension. Much of the paper is spent investigating the entropy of a regular language. This includes addressing issues that have required previous authors to rely on the upper limit of Shannon's traditional formulation of entropy, because its limit does not always exist. The paper also includes proposing a new limit based formulation for the entropy of a regular language and proves that formulation to both exist and be equivalent to Shannon's original formulation (when it exists). Additionally, the proposed formulation is shown to equal an analogous but formally quite different notion of topological entropy from Symbolic Dynamics -- consequently also showing Shannon's original formulation to be equivalent to topological entropy. Surprisingly, the natural Jaccard-like entropy distance is trivial in most cases. Instead, the entropy sum distance metric is suggested, and shown to be granular in certain situations.
While Shannon says ``the limit in question will exist as a finite number in most cases of interest'' @cite_0 , that limit does not always exist for regular languages (consider @math ). That fact motivates some of the analysis in this paper. Chomsky and Miller also examine the number of sentences up to a given length, foreshadowing some other results in this paper.
{ "cite_N": [ "@cite_0" ], "mid": [ "2041404167" ], "abstract": [ "Scientific knowledge grows at a phenomenal pace--but few books have had as lasting an impact or played as important a role in our modern world as The Mathematical Theory of Communication, published originally as a paper on communication theory more than fifty years ago. Republished in book form shortly thereafter, it has since gone through four hardcover and sixteen paperback printings. It is a revolutionary work, astounding in its foresight and contemporaneity. The University of Illinois Press is pleased and honored to issue this commemorative reprinting of a classic." ] }
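One standard way to see that the plain limit can fail (an illustration of my own, not necessarily the elided example in the text) is a language containing only even-length strings, such as (aa)*:

```python
from math import log2

def n_strings(n):
    # (aa)* contains exactly one string of each even length, none of odd length
    return 1 if n % 2 == 0 else 0

# log2(N(n)) / n is 0 for even n but undefined (log of 0) for odd n,
# so the plain limit does not exist; the upper limit over defined terms is 0
even_terms = [log2(n_strings(n)) / n for n in range(2, 11, 2)]
```

The sequence of defined terms is constantly 0, while every odd index contributes no term at all, which is exactly why the upper-limit formulation is needed.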
1602.07715
2274438912
This paper addresses the problem of determining the distance between two regular languages. It will show how to expand Jaccard distance, which works on finite sets, to potentially-infinite regular languages. The entropy of a regular language plays a large role in the extension. Much of the paper is spent investigating the entropy of a regular language. This includes addressing issues that have required previous authors to rely on the upper limit of Shannon's traditional formulation of entropy, because its limit does not always exist. The paper also includes proposing a new limit based formulation for the entropy of a regular language and proves that formulation to both exist and be equivalent to Shannon's original formulation (when it exists). Additionally, the proposed formulation is shown to equal an analogous but formally quite different notion of topological entropy from Symbolic Dynamics -- consequently also showing Shannon's original formulation to be equivalent to topological entropy. Surprisingly, the natural Jaccard-like entropy distance is trivial in most cases. Instead, the entropy sum distance metric is suggested, and shown to be granular in certain situations.
Several works since Chomsky and Miller have used this same ``of length exactly @math '' formula to define the entropy of a regular language @cite_14 @cite_13 @cite_11 . These works define entropy as Chomsky and Miller do, but add the caveat that they use the upper limit when the limit does not exist. There is even a paper on non-regular languages that uses the same entropy definition @cite_13 . Here we provide a foundation for those works by showing the upper limit to be correct (Theorem ). Further, this paper suggests an equivalent expression for entropy that may be considered more elegant: it is a limit that exists as a finite number for all regular languages.
{ "cite_N": [ "@cite_14", "@cite_13", "@cite_11" ], "mid": [ "1998314653", "", "2035875679" ], "abstract": [ "We use an information-theoretic notion, namely, (Shannon) information rate, to generalize common syntactic similarity metrics (like Hamming distance and longest common subsequences) between strings to ones between languages. We show that the similarity metrics between two regular languages are computable. We further study self-similarity of a regular language under various similarity metrics. As far as semantic similarity is concerned, we study the amplitude of an automaton, which intuitively characterizes how much a typical execution of the automaton fluctuates. Finally, we investigate, through experiments, how to measure similarity between two real-world programs using Lempel-Ziv compression on the runs at the assembly level.", "", "Let L be an irreducible regular language. Let W be a non-empty set of words (or sub-words) of L and denote by LW = v ∈ L:w ⊏ v, ∀w ∈ W the language obtained from L by forbidding all the words w in W. Then the entropy decreases strictly: ent(LW) < ent(L). In this note we present a new proof of this fact, based on a method of Gromov, which avoids the Perron-Frobenius theory. This result applies to the regular languages of finitely generated free groups and an additional application is presented." ] }
1602.07715
2274438912
This paper addresses the problem of determining the distance between two regular languages. It will show how to expand Jaccard distance, which works on finite sets, to potentially-infinite regular languages. The entropy of a regular language plays a large role in the extension. Much of the paper is spent investigating the entropy of a regular language. This includes addressing issues that have required previous authors to rely on the upper limit of Shannon's traditional formulation of entropy, because its limit does not always exist. The paper also includes proposing a new limit based formulation for the entropy of a regular language and proves that formulation to both exist and be equivalent to Shannon's original formulation (when it exists). Additionally, the proposed formulation is shown to equal an analogous but formally quite different notion of topological entropy from Symbolic Dynamics -- consequently also showing Shannon's original formulation to be equivalent to topological entropy. Surprisingly, the natural Jaccard-like entropy distance is trivial in most cases. Instead, the entropy sum distance metric is suggested, and shown to be granular in certain situations.
More recently, Cui et al. directly address distances between regular languages using a generalization of Jaccard distance @cite_14 . That paper usefully expands the concept of Jaccard distance to regular languages by (1) using entropy to handle infinite-sized regular languages (they use the upper-limit notion of entropy described above), and (2) allowing operations other than intersection to be used in the numerator. Further, Cui et al. suggest and prove properties of several specific distance functions between regular languages. The distance functions in this paper do not generalize the Jaccard distance in the same way, but are proven to be metrics or pseudo-metrics.
{ "cite_N": [ "@cite_14" ], "mid": [ "1998314653" ], "abstract": [ "We use an information-theoretic notion, namely, (Shannon) information rate, to generalize common syntactic similarity metrics (like Hamming distance and longest common subsequences) between strings to ones between languages. We show that the similarity metrics between two regular languages are computable. We further study self-similarity of a regular language under various similarity metrics. As far as semantic similarity is concerned, we study the amplitude of an automaton, which intuitively characterizes how much a typical execution of the automaton fluctuates. Finally, we investigate, through experiments, how to measure similarity between two real-world programs using Lempel-Ziv compression on the runs at the assembly level." ] }
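For finite languages, the Jaccard distance that these works generalize reduces to the classic finite-set formula; a minimal sketch of that baseline (function name and the empty-set convention are mine):

```python
def jaccard_distance(a, b):
    """Classic Jaccard distance on finite sets: 1 - |A ∩ B| / |A ∪ B|."""
    if not a and not b:
        return 0.0  # convention: two empty languages are at distance 0
    return 1.0 - len(a & b) / len(a | b)

# finite fragments of a* and (aa)* restricted to strings of length <= 4
d = jaccard_distance({"", "a", "aa", "aaa", "aaaa"}, {"", "aa", "aaaa"})
```

The difficulty the paper addresses is precisely that |A|, |B|, and their intersection are infinite for most regular languages, which is where the entropy-based replacement for cardinality comes in.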
1602.07715
2274438912
This paper addresses the problem of determining the distance between two regular languages. It will show how to expand Jaccard distance, which works on finite sets, to potentially-infinite regular languages. The entropy of a regular language plays a large role in the extension. Much of the paper is spent investigating the entropy of a regular language. This includes addressing issues that have required previous authors to rely on the upper limit of Shannon's traditional formulation of entropy, because its limit does not always exist. The paper also includes proposing a new limit based formulation for the entropy of a regular language and proves that formulation to both exist and be equivalent to Shannon's original formulation (when it exists). Additionally, the proposed formulation is shown to equal an analogous but formally quite different notion of topological entropy from Symbolic Dynamics -- consequently also showing Shannon's original formulation to be equivalent to topological entropy. Surprisingly, the natural Jaccard-like entropy distance is trivial in most cases. Instead, the entropy sum distance metric is suggested, and shown to be granular in certain situations.
There is another proposal for the topological entropy of formal (including regular) languages that does not agree with the notions provided in this paper @cite_9 . That paper develops interesting formal results surrounding their definition of entropy, including showing it to be zero for all regular languages. Because their entropy is zero for all regular languages, it will not be helpful as a distance function for regular languages.
{ "cite_N": [ "@cite_9" ], "mid": [ "1900346975" ], "abstract": [ "We introduce the notion of topological entropy of a formal language as the topological entropy of the minimal topological automaton accepting it. Using a characterization of this notion in terms of approximations of the Myhill–Nerode congruence relation, we are able to compute the topological entropies of certain example languages. Those examples suggest that the notion of a “simple” formal language coincides with the language having zero entropy." ] }
1602.07715
2274438912
This paper addresses the problem of determining the distance between two regular languages. It will show how to expand Jaccard distance, which works on finite sets, to potentially-infinite regular languages. The entropy of a regular language plays a large role in the extension. Much of the paper is spent investigating the entropy of a regular language. This includes addressing issues that have required previous authors to rely on the upper limit of Shannon's traditional formulation of entropy, because its limit does not always exist. The paper also includes proposing a new limit based formulation for the entropy of a regular language and proves that formulation to both exist and be equivalent to Shannon's original formulation (when it exists). Additionally, the proposed formulation is shown to equal an analogous but formally quite different notion of topological entropy from Symbolic Dynamics -- consequently also showing Shannon's original formulation to be equivalent to topological entropy. Surprisingly, the natural Jaccard-like entropy distance is trivial in most cases. Instead, the entropy sum distance metric is suggested, and shown to be granular in certain situations.
Several regular language distance and similarity functions are suggested in @cite_10 . That paper constructs a natural R-tree-like index of regular expressions. The index allows for faster matching of a string against a large number of regular expressions. To construct the index, several distance functions are considered. These include a max-count measure, which takes the number of strings of length less than some constant that belong to both languages as the languages' similarity; a rate-of-growth measure, which divides the number of strings of sizes @math to @math in one language by the number of strings of sizes @math to @math in the other language; and a minimum description length measure, which computes the number of bits needed to encode a path through an NFA.
{ "cite_N": [ "@cite_10" ], "mid": [ "2126632912" ], "abstract": [ "Abstract.Due to their expressive power, regular expressions (REs) are quickly becoming an integral part of language specifications for several important application scenarios. Many of these applications have to manage huge databases of RE specifications and need to provide an effective matching mechanism that, given an input string, quickly identifies the REs in the database that match it. In this paper, we propose the RE-tree, a novel index structure for large databases of RE specifications. Given an input query string, the RE-tree speeds up the retrieval of matching REs by focusing the search and comparing the input string with only a small fraction of REs in the database. Even though the RE-tree is similar in spirit to other tree-based structures that have been proposed for indexing multidimensional data, RE indexing is significantly more challenging since REs typically represent infinite sets of strings with no well-defined notion of spatial locality. To address these new challenges, our RE-tree index structure relies on novel measures for comparing the relative sizes of infinite regular languages. We also propose innovative solutions for the various RE-tree operations including the effective splitting of RE-tree nodes and computing a \"tight\" bounding RE for a collection of REs. Finally, we demonstrate how sampling-based approximation algorithms can be used to significantly speed up the performance of RE-tree operations. Preliminary experimental results with moderately large synthetic data sets indicate that the RE-tree is effective in pruning the search space and easily outperforms naive sequential search approaches." ] }
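The max-count measure can be sketched by brute-force enumeration up to a length bound, using regex membership as a stand-in for automaton membership (a hypothetical toy of mine, not the RE-tree implementation):

```python
import itertools
import re

def strings_up_to(alphabet, max_len):
    # enumerate every string over the alphabet with length <= max_len
    for n in range(max_len + 1):
        for tup in itertools.product(alphabet, repeat=n):
            yield "".join(tup)

def max_count_similarity(pattern_a, pattern_b, alphabet, max_len):
    """Number of strings of length <= max_len accepted by both languages."""
    ra, rb = re.compile(pattern_a), re.compile(pattern_b)
    return sum(1 for s in strings_up_to(alphabet, max_len)
               if ra.fullmatch(s) and rb.fullmatch(s))

# a* and (aa)* share "", "aa", and "aaaa" among strings of length <= 4
sim = max_count_similarity(r"a*", r"(aa)*", "a", 4)
```

Enumeration is exponential in the length bound, which is why the indexing paper works with automaton-based counting instead; this sketch only shows what the measure computes.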
1602.07360
2279098554
Recent research on deep neural networks has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple DNN architectures that achieve that accuracy level. With equivalent accuracy, smaller DNN architectures offer at least three advantages: (1) Smaller DNNs require less communication across servers during distributed training. (2) Smaller DNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques we are able to compress SqueezeNet to less than 0.5MB (510x smaller than AlexNet).
The overarching goal of our work is to identify a model that has very few parameters while preserving accuracy. To address this problem, a sensible approach is to take an existing CNN model and compress it in a lossy fashion. In fact, a research community has emerged around the topic of model compression, and several approaches have been reported. A fairly straightforward approach by Denton et al. is to apply singular value decomposition (SVD) to a pretrained CNN model @cite_16 . Han et al. developed Network Pruning, which begins with a pretrained model, then replaces parameters that are below a certain threshold with zeros to form a sparse matrix, and finally performs a few iterations of training on the sparse CNN @cite_22 . Recently, Han et al. extended their work by combining Network Pruning with quantization (to 8 bits or less) and Huffman encoding to create an approach called Deep Compression @cite_21 , and further designed a hardware accelerator called EIE @cite_46 that operates directly on the compressed model, achieving substantial speedups and energy savings.
{ "cite_N": [ "@cite_46", "@cite_16", "@cite_21", "@cite_22" ], "mid": [ "2285660444", "2167215970", "2119144962", "2963674932" ], "abstract": [ "State-of-the-art deep neural networks (DNNs) have hundreds of millions of connections and are both computationally and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources and power budgets. While custom hardware helps the computation, fetching weights from DRAM is two orders of magnitude more expensive than ALU operations, and dominates the required power. Previously proposed 'Deep Compression' makes it possible to fit large DNNs (AlexNet and VGGNet) fully in on-chip SRAM. This compression is achieved by pruning the redundant connections and having multiple connections share the same weight. We propose an energy efficient inference engine (EIE) that performs inference on this compressed network model and accelerates the resulting sparse matrix-vector multiplication with weight sharing. Going from DRAM to SRAM gives EIE 120× energy saving; Exploiting sparsity saves 10×; Weight sharing gives 8×; Skipping zero activations from ReLU saves another 3×. Evaluated on nine DNN benchmarks, EIE is 189× and 13× faster when compared to CPU and GPU implementations of the same DNN without compression. EIE has a processing power of 102 GOPS working directly on a compressed network, corresponding to 3 TOPS on an uncompressed network, and processes FC layers of AlexNet at 1.88×104 frames sec with a power dissipation of only 600mW. It is 24,000× and 3,400× more energy efficient than a CPU and GPU respectively. Compared with DaDianNao, EIE has 2.9×, 19× and 3× better throughput, energy efficiency and area efficiency.", "We present techniques for speeding up the test-time evaluation of large convolutional networks, designed for object recognition tasks. 
These models deliver impressive accuracy, but each image evaluation requires millions of floating point operations, making their deployment on smartphones and Internet-scale clusters problematic. The computation is dominated by the convolution operations in the lower layers of the model. We exploit the redundancy present within the convolutional filters to derive approximations that significantly reduce the required computation. Using large state-of-the-art models, we demonstrate speedups of convolutional layers on both CPU and GPU by a factor of 2 x, while keeping the accuracy within 1 of the original model.", "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce \"deep compression\", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. 
Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.", "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy." ] }
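The pruning-plus-quantization pipeline described in these works can be sketched in miniature. This is an illustrative toy, not the papers' implementation: it uses evenly spaced centroids where Deep Compression fits centroids with k-means, and it omits the retraining and Huffman-coding stages entirely:

```python
def prune(weights, threshold):
    # Network Pruning step: zero out weights whose magnitude is below threshold
    return [0.0 if abs(w) < threshold else w for w in weights]

def quantize(weights, levels):
    # weight-sharing step: snap each surviving weight to the nearest of
    # `levels` centroids (evenly spaced here; k-means in the paper)
    nonzero = [w for w in weights if w != 0.0]
    lo, hi = min(nonzero), max(nonzero)
    step = (hi - lo) / (levels - 1)
    centroids = [lo + i * step for i in range(levels)]
    return [0.0 if w == 0.0 else min(centroids, key=lambda c: abs(c - w))
            for w in weights]

weights = [0.05, -0.9, 0.4, -0.02, 0.8]
sparse = prune(weights, 0.1)    # small-magnitude weights become exact zeros
shared = quantize(sparse, 2)    # each surviving weight snaps to one of 2 values
```

After these two steps, storage needs only the sparse-matrix indices plus log2(levels) bits per surviving weight, which is the source of the compression ratios the papers report.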
1602.07360
2279098554
Recent research on deep neural networks has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple DNN architectures that achieve that accuracy level. With equivalent accuracy, smaller DNN architectures offer at least three advantages: (1) Smaller DNNs require less communication across servers during distributed training. (2) Smaller DNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques we are able to compress SqueezeNet to less than 0.5MB (510x smaller than AlexNet).
Convolutions have been used in artificial neural networks for at least 25 years; LeCun et al. helped to popularize CNNs for digit recognition applications in the late 1980s @cite_49 . In neural networks, convolution filters are typically 3D, with height, width, and channels as the key dimensions. When applied to images, CNN filters typically have 3 channels in their first layer (i.e. RGB), and in each subsequent layer @math the filters have the same number of channels as @math has filters. The early work by LeCun et al. @cite_49 uses 5x5xChannels filters (from now on, we will simply abbreviate HxWxChannels to HxW), and the recent VGG @cite_19 architectures extensively use 3x3 filters. Models such as Network-in-Network @cite_47 and the GoogLeNet family of architectures @cite_43 @cite_37 @cite_23 @cite_4 use 1x1 filters in some layers.
{ "cite_N": [ "@cite_37", "@cite_4", "@cite_19", "@cite_43", "@cite_49", "@cite_23", "@cite_47" ], "mid": [ "2949117887", "2274287116", "1686810756", "2950179405", "2147800946", "2949605076", "" ], "abstract": [ "Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9 top-5 validation error (and 4.8 test error), exceeding the accuracy of human raters.", "Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture that has been shown to achieve very good performance at relatively low computational cost. Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest generation Inception-v3 network. 
This raises the question of whether there are any benefit in combining the Inception architecture with residual connections. Here we give clear empirical evidence that training with residual connections accelerates the training of Inception networks significantly. There is also some evidence of residual Inception networks outperforming similarly expensive Inception networks without residual connections by a thin margin. We also present several new streamlined architectures for both residual and non-residual Inception networks. These variations improve the single-frame recognition performance on the ILSVRC 2012 classification task significantly. We further demonstrate how proper activation scaling stabilizes the training of very wide residual Inception networks. With an ensemble of three residual and one Inception-v4, we achieve 3.08 percent top-5 error on the test set of the ImageNet classification (CLS) challenge", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. 
We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.", "The ability of learning networks to generalize can be greatly enhanced by providing constraints from the task domain. This paper demonstrates how such constraints can be integrated into a backpropagation network through the architecture of the network. This approach has been successfully applied to the recognition of handwritten zip code digits provided by the U.S. Postal Service. A single network learns the entire recognition operation, going from the normalized image of the character to the final classification.", "Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. 
Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we explore ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6% top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and with using less than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5% top-5 error on the validation set (3.6% error on the test set) and 17.3% top-1 error on the validation set.", "" ] }
1602.07360
2279098554
Recent research on deep neural networks has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple DNN architectures that achieve that accuracy level. With equivalent accuracy, smaller DNN architectures offer at least three advantages: (1) Smaller DNNs require less communication across servers during distributed training. (2) Smaller DNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques we are able to compress SqueezeNet to less than 0.5MB (510x smaller than AlexNet).
With the trend of designing very deep CNNs, it becomes cumbersome to manually select filter dimensions for each layer. To address this, various higher-level building blocks, or modules, comprised of multiple convolution layers with a specific fixed organization have been proposed. For example, the GoogLeNet papers propose Inception modules, which are comprised of a number of different dimensionalities of filters, usually including 1x1 and 3x3, plus sometimes 5x5 @cite_43 and sometimes 1x3 and 3x1 @cite_23. Many such modules are then combined, perhaps with additional ad-hoc layers, to form a complete network. We use the term CNN microarchitecture to refer to the particular organization and dimensions of the individual modules.
{ "cite_N": [ "@cite_43", "@cite_23" ], "mid": [ "2950179405", "2949605076" ], "abstract": [ "We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.", "Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we explore ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably factorized convolutions and aggressive regularization. 
We benchmark our methods on the ILSVRC 2012 classification challenge validation set demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6% top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and with using less than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5% top-5 error on the validation set (3.6% error on the test set) and 17.3% top-1 error on the validation set." ] }
1602.07360
2279098554
Recent research on deep neural networks has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple DNN architectures that achieve that accuracy level. With equivalent accuracy, smaller DNN architectures offer at least three advantages: (1) Smaller DNNs require less communication across servers during distributed training. (2) Smaller DNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques we are able to compress SqueezeNet to less than 0.5MB (510x smaller than AlexNet).
Perhaps the most widely studied CNN macroarchitecture topic in the recent literature is the impact of depth (i.e., the number of layers) in networks. Simonyan and Zisserman proposed the VGG @cite_19 family of CNNs with 12 to 19 layers and reported that deeper networks produce higher accuracy on the ImageNet-1k dataset @cite_24. K. He et al. proposed deeper CNNs with up to 30 layers that deliver even higher ImageNet accuracy @cite_30.
{ "cite_N": [ "@cite_24", "@cite_19", "@cite_30" ], "mid": [ "2108598243", "1686810756", "1677182931" ], "abstract": [ "The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. 
These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "Rectified activation units (rectifiers) are essential for state-of-the-art neural networks. In this work, we study rectifier neural networks for image classification from two aspects. First, we propose a Parametric Rectified Linear Unit (PReLU) that generalizes the traditional rectified unit. PReLU improves model fitting with nearly zero extra computational cost and little overfitting risk. Second, we derive a robust initialization method that particularly considers the rectifier nonlinearities. This method enables us to train extremely deep rectified models directly from scratch and to investigate deeper or wider network architectures. Based on the learnable activation and advanced initialization, we achieve 4.94% top-5 test error on the ImageNet 2012 classification dataset. This is a 26% relative improvement over the ILSVRC 2014 winner (GoogLeNet, 6.66% [33]). To our knowledge, our result is the first to surpass the reported human-level performance (5.1%, [26]) on this dataset." ] }
1602.07428
2137916851
Link prediction is a fundamental task in statistical network analysis. Recent advances have been made on learning flexible nonparametric Bayesian latent feature models for link prediction. In this paper, we present a max-margin learning method for such nonparametric latent feature relational models. Our approach attempts to unite the ideas of max-margin learning and Bayesian nonparametrics to discover discriminative latent features for link prediction. It inherits the advances of nonparametric Bayesian methods to infer the unknown latent social dimension, while for discriminative link prediction, it adopts the max-margin learning principle by minimizing a hinge-loss using the linear expectation operator, without dealing with a highly nonlinear link likelihood function. For posterior inference, we develop an efficient stochastic variational inference algorithm under a truncated mean-field assumption. Our methods can scale up to large-scale real networks with millions of entities and tens of millions of positive links. We also provide a full Bayesian formulation, which can avoid tuning regularization hyper-parameters. Experimental results on a diverse range of real datasets demonstrate the benefits inherited from max-margin learning and Bayesian nonparametric inference.
Many scientific and engineering datasets are represented as networks, such as social networks and biological gene networks. Developing statistical models to analyze such data has attracted considerable attention, and link prediction is a fundamental task in this setting @cite_5. For static networks, link prediction is the task of predicting unobserved links using knowledge learned from the observed ones, while for dynamic networks it is the task of learning from the structures up to time @math in order to predict the network structure at time @math. Early work on link prediction focused on designing good proximity (or similarity) measures between nodes, using features related to the network topology; the measure scores are then used to produce a ranked list of candidate link pairs. Popular measures include common neighbors, Jaccard's coefficient @cite_36, and Adamic/Adar @cite_7. Such methods are unsupervised in the sense that they do not learn models from training links. Supervised learning methods have also been popular for link prediction @cite_9 @cite_39 @cite_30; these learn predictive models on labeled training data with a set of manually designed features that capture the statistics of the network.
{ "cite_N": [ "@cite_30", "@cite_7", "@cite_36", "@cite_9", "@cite_39", "@cite_5" ], "mid": [ "", "2154454189", "1956559956", "2768375068", "2003707464", "2420733993" ], "abstract": [ "", "Abstract The Internet has become a rich and large repository of information about us as individuals. Anything from the links and text on a user’s homepage to the mailing lists the user subscribes to are reflections of social interactions a user has in the real world. In this paper we devise techniques and tools to mine this information in order to extract social networks and the exogenous factors underlying the networks’ structure. In an analysis of two data sets, from Stanford University and the Massachusetts Institute of Technology (MIT), we show that some factors are better indicators of social connections than others, and that these indicators vary between user populations. Our techniques provide potential applications in automatically inferring real world connections and discovering, labeling, and characterizing communities.", "", "Social network analysis has attracted much attention in recent years. Link prediction is a key research directions within this area. In this research, we study link prediction as a supervised learning task. Along the way, we identify a set of features that are key to the superior performance under the supervised learning setup. The identified features are very easy to compute, and at the same time surprisingly effective in solving the link prediction problem. We also explain the effectiveness of the features from their class density distribution. Then we compare different classes of supervised learning algorithms in terms of their prediction performance using various performance metrics, such as accuracy, precision-recall, F-values, squared error etc. with a 5-fold cross validation. 
Our results on two practical social network datasets shows that most of the well-known classification algorithms (decision tree, k-nn, multilayer perceptron, SVM, rbf network) can predict link with surpassing performances, but SVM defeats all of them with narrow margin in all different performance measures. Again, ranking of features with popular feature ranking algorithms shows that a small subset of features always plays a significant role in the link prediction job.", "This paper examines important factors for link prediction in networks and provides a general, high-performance framework for the prediction task. Link prediction in sparse networks presents a significant challenge due to the inherent disproportion of links that can form to links that do form. Previous research has typically approached this as an unsupervised problem. While this is not the first work to explore supervised learning, many factors significant in influencing and guiding classification remain unexplored. In this paper, we consider these factors by first motivating the use of a supervised framework through a careful investigation of issues such as network observational period, generality of existing methods, variance reduction, topological causes and degrees of imbalance, and sampling approaches. We also present an effective flow-based predicting algorithm, offer formal bounds on imbalance in sparse network link prediction, and employ an evaluation method appropriate for the observed imbalance. Our careful consideration of the above issues ultimately leads to a completely general framework that outperforms unsupervised link prediction methods by more than 30% AUC.", "Given a snapshot of a social network, can we infer which new interactions among its members are likely to occur in the near future? We formalize this question as the link prediction problem, and develop approaches to link prediction based on measures for analyzing the \"proximity\" of nodes in a network. 
Experiments on large co-authorship networks suggest that information about future interactions can be extracted from network topology alone, and that fairly subtle measures for detecting node proximity can outperform more direct measures." ] }
1602.07428
2137916851
Link prediction is a fundamental task in statistical network analysis. Recent advances have been made on learning flexible nonparametric Bayesian latent feature models for link prediction. In this paper, we present a max-margin learning method for such nonparametric latent feature relational models. Our approach attempts to unite the ideas of max-margin learning and Bayesian nonparametrics to discover discriminative latent features for link prediction. It inherits the advances of nonparametric Bayesian methods to infer the unknown latent social dimension, while for discriminative link prediction, it adopts the max-margin learning principle by minimizing a hinge-loss using the linear expectation operator, without dealing with a highly nonlinear link likelihood function. For posterior inference, we develop an efficient stochastic variational inference algorithm under a truncated mean-field assumption. Our methods can scale up to large-scale real networks with millions of entities and tens of millions of positive links. We also provide a full Bayesian formulation, which can avoid tuning regularization hyper-parameters. Experimental results on a diverse range of real datasets demonstrate the benefits inherited from max-margin learning and Bayesian nonparametric inference.
For latent feature models, each entity is assumed to be associated with a feature vector, and the probability of a link is determined by the interactions among the latent features. Latent feature models are more flexible than latent class models, which may need an exponentially large number of classes to achieve the same expressiveness. Representative work in this category includes the latent distance model @cite_38, the latent eigenmodel @cite_10, and the nonparametric latent feature relational model (LFRM) @cite_34. As these methods are closely related to ours, we discuss them in detail in the next section.
{ "cite_N": [ "@cite_38", "@cite_34", "@cite_10" ], "mid": [ "2066459332", "2158535911", "2159203990" ], "abstract": [ "Network models are widely used to represent relational information among interacting units. In studies of social networks, recent emphasis has been placed on random graph models where the nodes usually represent individual social actors and the edges represent the presence of a specified relation between actors. We develop a class of models where the probability of a relation between actors depends on the positions of individuals in an unobserved “social space.” We make inference for the social space within maximum likelihood and Bayesian frameworks, and propose Markov chain Monte Carlo procedures for making inference on latent positions and the effects of observed covariates. We present analyses of three standard datasets from the social networks literature, and compare the method to an alternative stochastic blockmodeling approach. In addition to improving on model fit for these datasets, our method provides a visual and interpretable model-based spatial representation of social relationships and improv...", "As the availability and importance of relational data—such as the friendships summarized on a social networking website—increases, it becomes increasingly important to have good models for such data. The kinds of latent structure that have been considered for use in predicting links in such networks have been relatively limited. In particular, the machine learning community has focused on latent class models, adapting Bayesian nonparametric methods to jointly infer how many latent classes there are while learning which entities belong to each class. We pursue a similar approach with a richer kind of latent variable—latent features—using a Bayesian nonparametric approach to simultaneously infer the number of features at the same time we learn which entities have each feature. 
Our model combines these inferred features with known covariates in order to perform link prediction. We demonstrate that the greater expressiveness of this approach allows us to improve performance on three datasets.", "This article discusses a latent variable model for inference and prediction of symmetric relational data. The model, based on the idea of the eigenvalue decomposition, represents the relationship between two nodes as the weighted inner-product of node-specific vectors of latent characteristics. This \"eigenmodel\" generalizes other popular latent variable models, such as latent class and distance models: It is shown mathematically that any latent class or distance model has a representation as an eigenmodel, but not vice-versa. The practical implications of this are examined in the context of three real datasets, for which the eigenmodel has as good or better out-of-sample predictive performance than the other two models." ] }
1602.07416
2949354227
Memory units have been widely used to enrich the capabilities of deep networks on capturing long-term dependencies in reasoning and prediction tasks, but little investigation exists on deep generative models (DGMs) which are good at inferring high-level invariant representations from unlabeled data. This paper presents a deep generative model with a possibly large external memory and an attention mechanism to capture the local detail information that is often lost in the bottom-up abstraction process in representation learning. By adopting a smooth attention model, the whole network is trained end-to-end by optimizing a variational bound of data likelihood via auto-encoding variational Bayesian methods, where an asymmetric recognition network is learnt jointly to infer high-level invariant representations. The asymmetric architecture can reduce the competition between bottom-up invariant feature extraction and top-down generation of instance details. Our experiments on several datasets demonstrate that memory can significantly boost the performance of DGMs and even achieve state-of-the-art results on various tasks, including density estimation, image generation, and missing value imputation.
In addition to the memory-based models mentioned above, attention mechanisms have been used in other deep models for various tasks, such as image classification @cite_35 @cite_8, object tracking @cite_3, conditional caption generation @cite_22, machine translation @cite_17, and image generation @cite_37 @cite_9. Recently, DRAW @cite_9 introduced a novel 2-D attention mechanism that decides ``where to read and write'' on the image, and it does well at generating objects with a clear structure, such as handwritten digits and sequences of real digits.
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_22", "@cite_8", "@cite_9", "@cite_3", "@cite_17" ], "mid": [ "2141399712", "1810943226", "2950178297", "1484210532", "1850742715", "2951527505", "2133564696" ], "abstract": [ "We describe a model based on a Boltzmann machine with third-order connections that can learn how to accumulate information about a shape over several fixations. The model uses a retina that only has enough high resolution pixels to cover a small area of the image, so it must decide on a sequence of fixations and it must combine the \"glimpse\" at each fixation with the location of the fixation before integrating the information with information from other glimpses of the same object. We evaluate this model on a synthetic dataset and two image classification datasets, showing that it can perform at least as well as a model trained on whole images.", "This paper shows how Long Short-term Memory recurrent neural networks can be used to generate complex sequences with long-range structure, simply by predicting one data point at a time. The approach is demonstrated for text (where the data are discrete) and online handwriting (where the data are real-valued). It is then extended to handwriting synthesis by allowing the network to condition its predictions on a text sequence. The resulting system is able to generate highly realistic cursive handwriting in a wide variety of styles.", "Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. 
We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.", "We present an attention-based model for recognizing multiple objects in images. The proposed model is a deep recurrent neural network trained with reinforcement learning to attend to the most relevant regions of the input image. We show that the model learns to both localize and recognize multiple objects despite being given only class labels during training. We evaluate the model on the challenging task of transcribing house number sequences from Google Street View images and show that it is both more accurate than the state-of-the-art convolutional networks and uses fewer parameters and less computation.", "This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.", "Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. 
While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so.", "Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition." ] }
1602.07416
2949354227
Memory units have been widely used to enrich the capabilities of deep networks on capturing long-term dependencies in reasoning and prediction tasks, but little investigation exists on deep generative models (DGMs) which are good at inferring high-level invariant representations from unlabeled data. This paper presents a deep generative model with a possibly large external memory and an attention mechanism to capture the local detail information that is often lost in the bottom-up abstraction process in representation learning. By adopting a smooth attention model, the whole network is trained end-to-end by optimizing a variational bound of data likelihood via auto-encoding variational Bayesian methods, where an asymmetric recognition network is learnt jointly to infer high-level invariant representations. The asymmetric architecture can reduce the competition between bottom-up invariant feature extraction and top-down generation of instance details. Our experiments on several datasets demonstrate that memory can significantly boost the performance of DGMs and even achieve state-of-the-art results on various tasks, including density estimation, image generation, and missing value imputation.
Compared with previous memory-based networks @cite_11 @cite_10, we propose to employ an external hierarchical memory, trained in an unsupervised manner, to capture variant information at different abstraction levels. Besides, our memory cannot be written to directly as in @cite_11 @cite_10; instead, it is updated through optimization. Compared with previous DGMs with visual attention @cite_38 @cite_9, we make different assumptions about the data, i.e., that the main object (such as a face) has a large number of local features, which cannot be modeled by a limited number of latent factors. We employ an external memory to capture these features, and the associated attention mechanism is used to retrieve the memory, not to learn a ``what-where'' combination on the images. Moreover, the external memory used in our model and the memory units of the LSTMs used in DRAW @cite_9 can complement each other @cite_11. Further investigation of DRAW with an external memory is left for future work.
{ "cite_N": [ "@cite_38", "@cite_9", "@cite_10", "@cite_11" ], "mid": [ "2950181755", "1850742715", "", "2167839676" ], "abstract": [ "Attention has long been proposed by psychologists as important for effectively dealing with the enormous sensory stimulus available in the neocortex. Inspired by the visual attention models in computational neuroscience and the need of object-centric data for generative models, we describe for generative learning framework using attentional mechanisms. Attentional mechanisms can propagate signals from region of interest in a scene to an aligned canonical representation, where generative modeling takes place. By ignoring background clutter, generative models can concentrate their resources on the object of interest. Our model is a proper graphical model where the 2D Similarity transformation is a part of the top-down process. A ConvNet is employed to provide good initializations during posterior inference which is based on Hamiltonian Monte Carlo. Upon learning images of faces, our model can robustly attend to face regions of novel test subjects. More importantly, our model can learn generative models of new faces from a novel dataset of large images where the face locations are not known.", "This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.", "", "We extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with by attentional processes. 
The combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent. Preliminary results demonstrate that Neural Turing Machines can infer simple algorithms such as copying, sorting, and associative recall from input and output examples." ] }
1602.07495
1968329810
During recent years, active learning has evolved into a popular paradigm for utilizing user feedback to improve the accuracy of learning algorithms. Active learning works by selecting the most informative sample among unlabeled data and querying the label of that point from the user. Many different methods such as uncertainty sampling and minimum risk sampling have been utilized to select the most informative sample in active learning. Although many active learning algorithms have been proposed so far, most of them work with binary or multi-class classification problems and therefore cannot be applied to problems in which only samples from one class as well as a set of unlabeled data are available. Such problems arise in many real-world situations and are known as the problem of learning from positive and unlabeled data. In this paper we propose an active learning algorithm that can work when only samples of one class as well as a set of unlabeled data are available. Our method works by separately estimating the probability density of positive and unlabeled points and then computing the expected value of informativeness to get rid of a hyper-parameter and have a better measure of informativeness. Experiments and empirical analysis show promising results compared to other similar methods.
Many active learning algorithms have been proposed in the literature so far. @cite_6 is a comprehensive survey of recent work in this field. Among the earliest and most popular active learning paradigms is uncertainty sampling, which selects the least confident sample for querying. The definition of confidence depends on the base classifier in use. For example, @cite_0 proposes an active learning approach for SVMs which selects for querying the sample that is closest to the separating hyperplane. Selecting the sample with minimum margin @cite_12 and the sample with maximum entropy @cite_2 are other approaches that have been applied to active learning problems.
{ "cite_N": [ "@cite_0", "@cite_12", "@cite_6", "@cite_2" ], "mid": [ "2138079527", "2128518360", "2903158431", "2098742124" ], "abstract": [ "Relevance feedback is often a critical component when designing image databases. With these databases it is difficult to specify queries directly and explicitly. Relevance feedback interactively determines a user's desired output or query concept by asking the user whether certain proposed images are relevant or not. For a relevance feedback algorithm to be effective, it must grasp a user's query concept accurately and quickly, while also only asking the user to label a small number of images. We propose the use of a support vector machine active learning algorithm for conducting effective relevance feedback for image retrieval. The algorithm selects the most informative images to query a user and quickly learns a boundary that separates the images that satisfy the user's query concept from the rest of the dataset. Experimental results show that our algorithm achieves significantly higher search accuracy than traditional query refinement schemes after just three to four rounds of relevance feedback.", "We present a framework for margin based active learning of linear separators. We instantiate it for a few important cases, some of which have been previously considered in the literature. We analyze the effectiveness of our framework both in the realizable case and in a specific noisy setting related to the Tsybakov small noise condition.", "", "Most methods for learning object categories require large amounts of labeled training data. However, obtaining such data can be a difficult and time-consuming endeavor. We have developed a novel, entropy-based “active learning” approach which makes significant progress towards this problem. The main idea is to sequentially acquire labeled data by presenting an oracle (the user) with unlabeled images that will be particularly informative when labeled. 
Active learning adaptively prioritizes the order in which the training examples are acquired, which, as shown by our experiments, can significantly reduce the overall number of training examples required to reach near-optimal performance. At first glance this may seem counter-intuitive: how can the algorithm know whether a group of unlabeled images will be informative, when, by definition, there is no label directly associated with any of the images? Our approach is based on choosing an image to label that maximizes the expected amount of information we gain about the set of unlabeled images. The technique is demonstrated in several contexts, including improving the efficiency of Web image-search queries and open-world visual learning by an autonomous agent. Experiments on a large set of 140 visual object categories taken directly from text-based Web image searches show that our technique can provide large improvements (up to 10 x reduction in the number of training examples needed) over baseline techniques." ] }
1602.07495
1968329810
During recent years, active learning has evolved into a popular paradigm for utilizing user feedback to improve the accuracy of learning algorithms. Active learning works by selecting the most informative sample among unlabeled data and querying the label of that point from the user. Many different methods such as uncertainty sampling and minimum risk sampling have been utilized to select the most informative sample in active learning. Although many active learning algorithms have been proposed so far, most of them work with binary or multi-class classification problems and therefore cannot be applied to problems in which only samples from one class as well as a set of unlabeled data are available. Such problems arise in many real-world situations and are known as the problem of learning from positive and unlabeled data. In this paper we propose an active learning algorithm that can work when only samples of one class as well as a set of unlabeled data are available. Our method works by separately estimating the probability density of positive and unlabeled points and then computing the expected value of informativeness to get rid of a hyper-parameter and have a better measure of informativeness. Experiments and empirical analysis show promising results compared to other similar methods.
A more recent approach has been proposed in @cite_16 , which tries to apply active learning to the well-known SVDD method. @cite_16 considers the likelihood as well as the local density of data points to assess their uncertainty. First, the algorithm constructs a neighborhood graph over all data samples. Then, the most informative sample is selected using the following rule:
{ "cite_N": [ "@cite_16" ], "mid": [ "1566292651" ], "abstract": [ "Data domain description techniques aim at deriving concise descriptions of objects belonging to a category of interest. For instance, the support vector domain description (SVDD) learns a hypersphere enclosing the bulk of provided unlabeled data such that points lying outside of the ball are considered anomalous. However, relevant information such as expert and background knowledge remain unused in the unsupervised setting. In this paper, we rephrase data domain description as a semi-supervised learning task, that is, we propose a semi-supervised generalization of data domain description (SSSVDD) to process unlabeled and labeled examples. The corresponding optimization problem is non-convex. We translate it into an unconstraint, continuous problem that can be optimized accurately by gradient-based techniques. Furthermore, we devise an effective active learning strategy to query low-confidence observations. Our empirical evaluation on network intrusion detection and object recognition tasks shows that our SSSVDDs consistently outperform baseline methods in relevant learning settings." ] }
1602.07495
1968329810
During recent years, active learning has evolved into a popular paradigm for utilizing user feedback to improve the accuracy of learning algorithms. Active learning works by selecting the most informative sample among unlabeled data and querying the label of that point from the user. Many different methods such as uncertainty sampling and minimum risk sampling have been utilized to select the most informative sample in active learning. Although many active learning algorithms have been proposed so far, most of them work with binary or multi-class classification problems and therefore cannot be applied to problems in which only samples from one class as well as a set of unlabeled data are available. Such problems arise in many real-world situations and are known as the problem of learning from positive and unlabeled data. In this paper we propose an active learning algorithm that can work when only samples of one class as well as a set of unlabeled data are available. Our method works by separately estimating the probability density of positive and unlabeled points and then computing the expected value of informativeness to get rid of a hyper-parameter and have a better measure of informativeness. Experiments and empirical analysis show promising results compared to other similar methods.
The main advantage of @cite_16 is that it considers both selection based on the uncertainty of the data and exploration of unknown regions of the feature space. This fact can be easily inferred from the two terms of the equation. However, this method is biased toward exploring regions of the feature space that contain negative data, which causes the algorithm to favor selecting samples that are more likely negative. Due to the nature of one-class learning, positive samples are much more valuable than negative ones, and therefore selecting negative samples may not help much in improving classification accuracy. Moreover, constructing the neighborhood graph is a time-consuming task and makes the algorithm infeasible for real-time applications.
{ "cite_N": [ "@cite_16" ], "mid": [ "1566292651" ], "abstract": [ "Data domain description techniques aim at deriving concise descriptions of objects belonging to a category of interest. For instance, the support vector domain description (SVDD) learns a hypersphere enclosing the bulk of provided unlabeled data such that points lying outside of the ball are considered anomalous. However, relevant information such as expert and background knowledge remain unused in the unsupervised setting. In this paper, we rephrase data domain description as a semi-supervised learning task, that is, we propose a semi-supervised generalization of data domain description (SSSVDD) to process unlabeled and labeled examples. The corresponding optimization problem is non-convex. We translate it into an unconstraint, continuous problem that can be optimized accurately by gradient-based techniques. Furthermore, we devise an effective active learning strategy to query low-confidence observations. Our empirical evaluation on network intrusion detection and object recognition tasks shows that our SSSVDDs consistently outperform baseline methods in relevant learning settings." ] }
1602.07464
2950176152
We propose a simple and efficient method for ranking features in multi-label classification. The method produces a ranking of features showing their relevance in predicting labels, which in turn allows to choose a final subset of features. The procedure is based on Markov Networks and allows to model the dependencies between labels and features in a direct way. In the first step we build a simple network using only labels and then we test how much adding a single feature affects the initial network. More specifically, in the first step we use the Ising model whereas the second step is based on the score statistic, which allows to test a significance of added features very quickly. The proposed approach does not require transformation of label space, gives interpretable results and allows for attractive visualization of dependency structure. We give a theoretical justification of the procedure by discussing some theoretical properties of the Ising model and the score statistic. We also discuss feature ranking procedure based on fitting Ising model using @math regularized logistic regressions. Numerical experiments show that the proposed methods outperform the conventional approaches on the considered artificial and real datasets.
Let us first review the existing FR methods in MLC. A popular approach is to use the Binary Relevance (BR) transformation (considering classification tasks corresponding to separate labels) and to evaluate the relevance of each feature for each of the labels independently ( @cite_23 , @cite_21 ). The scores corresponding to different labels are then combined, which yields a global ranking of features. To evaluate the relevance of features in these tasks, various feature importance measures are used, among which the chi-squared statistic and information gain are the most popular ( @cite_28 ). The major drawback of this approach is that possible dependencies between labels are not utilized. The combinations of the BR transformation with the chi-squared statistic and information gain will be referred to as and , respectively.
{ "cite_N": [ "@cite_28", "@cite_21", "@cite_23" ], "mid": [ "2093238926", "2053463056", "" ], "abstract": [ "Feature selection is an important task in machine learning, which can effectively reduce the dataset dimensionality by removing irrelevant and or redundant features. Although a large body of research deals with feature selection in single-label data, in which measures have been proposed to filter out irrelevant features, this is not the case for multi-label data. This work proposes multi-label feature selection methods which use the filter approach. To this end, two standard multi-label feature selection approaches, which transform the multi-label data into single-label data, are used. Besides these two problem transformation approaches, we use ReliefF and Information Gain to measure the goodness of features. This gives rise to four multi-label feature selection methods. A thorough experimental evaluation of these methods was carried out on 10 benchmark datasets. Results show that ReliefF is able to select fewer features without diminishing the quality of the classifiers constructed using the features selected.", "This work focuses on algorithms which learn from examples to perform multiclass text and speech categorization tasks. Our approach is based on a new and improved family of boosting algorithms. We describe in detail an implementation, called BoosTexter, of the new boosting algorithms for text categorization tasks. We present results comparing the performance of BoosTexter and a number of other text-categorization algorithms on a variety of tasks. We conclude by describing the application of our system to automatic call-type identification from unconstrained spoken customer responses.", "" ] }
1602.07464
2950176152
We propose a simple and efficient method for ranking features in multi-label classification. The method produces a ranking of features showing their relevance in predicting labels, which in turn allows to choose a final subset of features. The procedure is based on Markov Networks and allows to model the dependencies between labels and features in a direct way. In the first step we build a simple network using only labels and then we test how much adding a single feature affects the initial network. More specifically, in the first step we use the Ising model whereas the second step is based on the score statistic, which allows to test a significance of added features very quickly. The proposed approach does not require transformation of label space, gives interpretable results and allows for attractive visualization of dependency structure. We give a theoretical justification of the procedure by discussing some theoretical properties of the Ising model and the score statistic. We also discuss feature ranking procedure based on fitting Ising model using @math regularized logistic regressions. Numerical experiments show that the proposed methods outperform the conventional approaches on the considered artificial and real datasets.
Finally, let us also discuss other methods used for dimensionality reduction. Wrappers assess subsets of features using some criterion function, e.g. prediction error on a validation set. To avoid fitting models on all possible subsets, some search strategy is usually used, e.g. forward selection or backward elimination. The main limitation of wrappers is their significant computational cost, due to training a large number of classifiers. Another important group of methods are the so-called embedded feature selection procedures, in which the selection of features is an integral element of the learning process. Examples from this group are the multi-label version of decision trees proposed by @cite_16 , in which useful features are chosen while building the tree, and methods based on @math regularization.
{ "cite_N": [ "@cite_16" ], "mid": [ "2002156658" ], "abstract": [ "Abstract Why and how the Partial Least Squares Regression (PLSR) was developed, is here described from the author's perspective. The paper outlines my frustrating experiences in the 70'ies with two conflicting and equally over-ambitious and over-simplified modelling cultures - in traditional chemistry and in traditional statistics. It describes my mental progress of first learning to combine them into least squares “unmixing” of known chemical mixtures, and later extending this into the “unscrambling” of partially unknown structures as well. The bi-linear regression framework is summarised in terms of the development from Principal Component Regression into the PLSR. Finally, the versatility of the PLSR is discussed in light of the urgent need for better eduacation in scientific data analysis." ] }
1602.07576
2952054889
We introduce Group equivariant Convolutional Neural Networks (G-CNNs), a natural generalization of convolutional neural networks that reduces sample complexity by exploiting symmetries. G-CNNs use G-convolutions, a new type of layer that enjoys a substantially higher degree of weight sharing than regular convolution layers. G-convolutions increase the expressive capacity of the network without increasing the number of parameters. Group convolution layers are easy to use and can be implemented with negligible computational overhead for discrete groups generated by translations, reflections and rotations. G-CNNs achieve state of the art results on CIFAR10 and rotated MNIST.
Scattering convolution networks use wavelet convolutions, nonlinearities and group averaging to produce stable invariants @cite_25 . Scattering networks have been extended to use convolutions on the group of translations, rotations and scalings, and have been applied to object and texture recognition @cite_1 @cite_29 .
{ "cite_N": [ "@cite_29", "@cite_1", "@cite_25" ], "mid": [ "", "2167383966", "2072072671" ], "abstract": [ "", "An affine invariant representation is constructed with a cascade of invariants, which preserves information for classification. A joint translation and rotation invariant representation of image patches is calculated with a scattering transform. It is implemented with a deep convolution network, which computes successive wavelet transforms and modulus non-linearities. Invariants to scaling, shearing and small deformations are calculated with linear operators in the scattering domain. State-of-the-art classification results are obtained over texture databases with uncontrolled viewing conditions.", "A wavelet scattering network computes a translation invariant image representation which is stable to deformations and preserves high-frequency information for classification. It cascades wavelet transform convolutions with nonlinear modulus and averaging operators. The first network layer outputs SIFT-type descriptors, whereas the next layers provide complementary invariant information that improves classification. The mathematical analysis of wavelet scattering networks explains important properties of deep convolution networks for classification. A scattering representation of stationary processes incorporates higher order moments and can thus discriminate textures having the same Fourier power spectrum. State-of-the-art classification results are obtained for handwritten digits and texture discrimination, with a Gaussian kernel SVM and a generative PCA classifier." ] }
1602.07576
2952054889
We introduce Group equivariant Convolutional Neural Networks (G-CNNs), a natural generalization of convolutional neural networks that reduces sample complexity by exploiting symmetries. G-CNNs use G-convolutions, a new type of layer that enjoys a substantially higher degree of weight sharing than regular convolution layers. G-convolutions increase the expressive capacity of the network without increasing the number of parameters. Group convolution layers are easy to use and can be implemented with negligible computational overhead for discrete groups generated by translations, reflections and rotations. G-CNNs achieve state of the art results on CIFAR10 and rotated MNIST.
show that the AlexNet CNN @cite_5 trained on ImageNet spontaneously learns representations that are equivariant to flips, scaling and rotation. This supports the idea that equivariance is a good inductive bias for deep convolutional networks. show that useful representations can be learned in an unsupervised manner by training a convolutional network to be equivariant to ego-motion.
{ "cite_N": [ "@cite_5" ], "mid": [ "2220894487" ], "abstract": [ "We seek to improve deep neural networks by generalizing the pooling operations that play a central role in current architectures. We pursue a careful exploration of approaches to allow pooling to learn and to adapt to complex and variable patterns. The two primary directions lie in (1) learning a pooling function via (two strategies of) combining of max and average pooling, and (2) learning a pooling function in the form of a tree-structured fusion of pooling filters that are themselves learned. In our experiments every generalized pooling operation we explore improves performance when used in place of average or max pooling. We experimentally demonstrate that the proposed pooling operations provide a boost in invariance properties relative to conventional pooling and set the state of the art on several widely adopted benchmark datasets; they are also easy to implement, and can be applied within various deep neural network architectures. These benefits come with only a light increase in computational overhead during training and a very modest increase in the number of model parameters." ] }
1602.07576
2952054889
We introduce Group equivariant Convolutional Neural Networks (G-CNNs), a natural generalization of convolutional neural networks that reduces sample complexity by exploiting symmetries. G-CNNs use G-convolutions, a new type of layer that enjoys a substantially higher degree of weight sharing than regular convolution layers. G-convolutions increase the expressive capacity of the network without increasing the number of parameters. Group convolution layers are easy to use and can be implemented with negligible computational overhead for discrete groups generated by translations, reflections and rotations. G-CNNs achieve state of the art results on CIFAR10 and rotated MNIST.
proposed an approximately equivariant convolutional architecture that uses sparse, high-dimensional feature maps to deal with high-dimensional groups of transformations. showed that rotation symmetry can be exploited in convolutional networks for the problem of galaxy morphology prediction by rotating feature maps, effectively learning an equivariant representation. This work was later extended @cite_18 and evaluated on various computer vision problems that have cyclic symmetry.
{ "cite_N": [ "@cite_18" ], "mid": [ "2951770173" ], "abstract": [ "Many classes of images exhibit rotational symmetry. Convolutional neural networks are sometimes trained using data augmentation to exploit this, but they are still required to learn the rotation equivariance properties from the data. Encoding these properties into the network architecture, as we are already used to doing for translation equivariance by using convolutional layers, could result in a more efficient use of the parameter budget by relieving the model from learning them. We introduce four operations which can be inserted into neural network models as layers, and which can be combined to make these models partially equivariant to rotations. They also enable parameter sharing across different orientations. We evaluate the effect of these architectural modifications on three datasets which exhibit rotational symmetry and demonstrate improved performance with smaller models." ] }
1602.07576
2952054889
We introduce Group equivariant Convolutional Neural Networks (G-CNNs), a natural generalization of convolutional neural networks that reduces sample complexity by exploiting symmetries. G-CNNs use G-convolutions, a new type of layer that enjoys a substantially higher degree of weight sharing than regular convolution layers. G-convolutions increase the expressive capacity of the network without increasing the number of parameters. Group convolution layers are easy to use and can be implemented with negligible computational overhead for discrete groups generated by translations, reflections and rotations. G-CNNs achieve state of the art results on CIFAR10 and rotated MNIST.
showed that the concept of disentangling can be understood as a reduction of the operators @math in an equivariant representation, and later related this notion of disentangling to the more familiar statistical notion of decorrelation @cite_34 .
{ "cite_N": [ "@cite_34" ], "mid": [ "1857037962" ], "abstract": [ "When a three-dimensional object moves relative to an observer, a change occurs on the observer's image plane and in the visual representation computed by a learned model. Starting with the idea that a good visual representation is one that transforms linearly under scene motions, we show, using the theory of group representations, that any such representation is equivalent to a combination of the elementary irreducible representations. We derive a striking relationship between irreducibility and the statistical dependency structure of the representation, by showing that under restricted conditions, irreducible representations are decorrelated. Under partial observability, as induced by the perspective projection of a scene onto the image plane, the motion group does not have a linear action on the space of images, so that it becomes necessary to perform inference over a latent representation that does transform linearly. This idea is demonstrated in a model of rotating NORB objects that employs a latent representation of the non-commutative 3D rotation group SO(3)." ] }
1602.07579
2952075674
In traditional cognitive radio networks, secondary users (SUs) typically access the spectrum of primary users (PUs) by a two-stage "listen-before-talk" (LBT) protocol, i.e., SUs sense the spectrum holes in the first stage before transmitting in the second. However, there exist two major problems: 1) transmission time reduction due to sensing, and 2) sensing accuracy impairment due to data transmission. In this paper, we propose a "listen-and-talk" (LAT) protocol with the help of full-duplex (FD) technique that allows SUs to simultaneously sense and access the vacant spectrum. Spectrum utilization performance is carefully analyzed, with the closed-form spectrum waste ratio and collision ratio with the PU provided. Also, regarding the secondary throughput, we report the existence of a tradeoff between the secondary transmit power and throughput. Based on the power-throughput tradeoff, we derive the analytical local optimal transmit power for SUs to achieve both high throughput and satisfying sensing accuracy. Numerical results are given to verify the proposed protocol and the theoretical results.
In @cite_31 , the authors considered multiple SU links with partial or complete self-interference suppression capability, which could operate in either a simultaneous transmit-and-sense (TS) or a simultaneous transmit-and-receive (TR) mode. Mode selection between the TS and TR modes and coordination of the SU links were proposed to achieve high secondary throughput. The idea of the TS mode is similar to our protocol. However, in @cite_31 , one fixed threshold for energy detection was used in both the SO and TS modes, while in our work, a pair of sensing thresholds is designed to compensate for the imperfect self-interference cancelation. Besides, the authors in @cite_31 only provided series expressions for the sensing error probabilities in their analysis, and did not show how well the SUs can utilize the spectrum holes, which is addressed in our work.
{ "cite_N": [ "@cite_31" ], "mid": [ "2145514111" ], "abstract": [ "We present a novel approach on the calculation of the moment generating function of mutual information, of MIMO channels with correlated Rayleigh fading. For the first time, a concise mathematical formulation of the moment generating function is given in terms of a hypergeometric function of matrix arguments. In contrast to existing literature, our approach is not based on eigenvalue probability density functions but uses a direct integration technique. In principle, via the moment generating function it is possible to calculate exact, i.e. non-asymptotic moments, including e.g. ergodic capacity, for arbitrary array sizes and arbitrary correlation properties at receiver as well as transmitter, thus unifying and completing existing partial solutions for special propagation scenarios. Monte-Carlo simulations of ergodic capacity verify the accuracy of the analysis." ] }
1602.07579
2952075674
In traditional cognitive radio networks, secondary users (SUs) typically access the spectrum of primary users (PUs) by a two-stage "listen-before-talk" (LBT) protocol, i.e., SUs sense the spectrum holes in the first stage before transmitting in the second. However, there exist two major problems: 1) transmission time reduction due to sensing, and 2) sensing accuracy impairment due to data transmission. In this paper, we propose a "listen-and-talk" (LAT) protocol with the help of full-duplex (FD) technique that allows SUs to simultaneously sense and access the vacant spectrum. Spectrum utilization performance is carefully analyzed, with the closed-form spectrum waste ratio and collision ratio with the PU provided. Also, regarding the secondary throughput, we report the existence of a tradeoff between the secondary transmit power and throughput. Based on the power-throughput tradeoff, we derive the analytical local optimal transmit power for SUs to achieve both high throughput and satisfying sensing accuracy. Numerical results are given to verify the proposed protocol and the theoretical results.
The authors in @cite_38 considered cooperation between the primary and secondary systems. In their model, the cognitive base station (CBS) relays the primary signal and, in return, can transmit its own cognitive signal. The CBS was assumed to be FD-enabled with multiple antennas. Beamforming was used to separate the signal forwarded to primary users from the secondary transmission.
{ "cite_N": [ "@cite_38" ], "mid": [ "2147868657" ], "abstract": [ "This paper studies the cooperation between a primary system and a cognitive system in a cellular network where the cognitive base station (CBS) relays the primary signal using amplify-and-forward or decode-and-forward protocols, and in return it can transmit its own cognitive signal. While the commonly used half-duplex (HD) assumption may render the cooperation less efficient due to the two orthogonal channel phases employed, we propose that the CBS can work in a full-duplex (FD) mode to improve the system rate region. The problem of interest is to find the achievable primary-cognitive rate region by studying the cognitive rate maximization problem. For both modes, we explicitly consider the CBS transmit imperfections, which lead to the residual self-interference associated with the FD operation mode. We propose closed-form solutions or efficient algorithms to solve the problem when the related residual interference power is non-scalable or scalable with the transmit power. Furthermore, we propose a simple hybrid scheme to select the HD or FD mode based on zero-forcing criterion, and provide insights on the impact of system parameters. Numerical results illustrate significant performance improvement by using the FD mode and the hybrid scheme." ] }
1602.06979
2273847690
Human language is colored by a broad range of topics, but existing text analysis tools only focus on a small number of them. We present Empath, a tool that can generate and validate new lexical categories on demand from a small set of seed terms (like "bleed" and "punch" to generate the category violence). Empath draws connotations between words and phrases by deep learning a neural embedding across more than 1.8 billion words of modern fiction. Given a small set of seed words that characterize a category, Empath uses its neural embedding to discover new related terms, then validates the category with a crowd-powered filter. Empath also analyzes text across 200 built-in, pre-validated categories we have generated from common topics in our web dataset, like neglect, government, and social media. We show that Empath's data-driven, human validated categories are highly correlated (r=0.906) with similar categories in LIWC.
Work in sentiment analysis, in combination with deep learning, has developed powerful techniques to classify text across positive and negative polarity @cite_21 , but has also benefited from simpler, transparent models and rules @cite_5 . Empath draws on the complementary strengths of these ideas, using the power of unsupervised deep learning to create feature sets for the analysis of text. One of Empath's goals is to embed modern NLP techniques in a way that offers the transparency of dictionaries like LIWC.
{ "cite_N": [ "@cite_5", "@cite_21" ], "mid": [ "2099813784", "2251939518" ], "abstract": [ "The inherent nature of social media content poses serious challenges to practical applications of sentiment analysis. We present VADER, a simple rule-based model for general sentiment analysis, and compare its effectiveness to eleven typical state-of-practice benchmarks including LIWC, ANEW, the General Inquirer, SentiWordNet, and machine learning oriented techniques relying on Naive Bayes, Maximum Entropy, and Support Vector Machine (SVM) algorithms. Using a combination of qualitative and quantitative methods, we first construct and empirically validate a goldstandard list of lexical features (along with their associated sentiment intensity measures) which are specifically attuned to sentiment in microblog-like contexts. We then combine these lexical features with consideration for five general rules that embody grammatical and syntactical conventions for expressing and emphasizing sentiment intensity. Interestingly, using our parsimonious rule-based model to assess the sentiment of tweets, we find that VADER outperforms individual human raters (F1 Classification Accuracy = 0.96 and 0.84, respectively), and generalizes more favorably across contexts than any of our benchmarks.", "Semantic word spaces have been very useful but cannot express the meaning of longer phrases in a principled way. Further progress towards understanding compositionality in tasks such as sentiment detection requires richer supervised training and evaluation resources and more powerful models of composition. To remedy this, we introduce a Sentiment Treebank. It includes fine grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences and presents new challenges for sentiment compositionality. To address them, we introduce the Recursive Neural Tensor Network. When trained on the new treebank, this model outperforms all previous methods on several metrics. 
It pushes the state of the art in single sentence positive/negative classification from 80% up to 85.4%. The accuracy of predicting fine-grained sentiment labels for all phrases reaches 80.7%, an improvement of 9.7% over bag of features baselines. Lastly, it is the only model that can accurately capture the effects of negation and its scope at various tree levels for both positive and negative phrases." ] }
1602.06979
2273847690
Human language is colored by a broad range of topics, but existing text analysis tools only focus on a small number of them. We present Empath, a tool that can generate and validate new lexical categories on demand from a small set of seed terms (like "bleed" and "punch" to generate the category violence). Empath draws connotations between words and phrases by deep learning a neural embedding across more than 1.8 billion words of modern fiction. Given a small set of seed words that characterize a category, Empath uses its neural embedding to discover new related terms, then validates the category with a crowd-powered filter. Empath also analyzes text across 200 built-in, pre-validated categories we have generated from common topics in our web dataset, like neglect, government, and social media. We show that Empath's data-driven, human validated categories are highly correlated (r=0.906) with similar categories in LIWC.
As social computing and computational social science researchers have gained access to large textual datasets, they have increasingly adopted analyses that cover a wide range of textual signal. For example, researchers have investigated the public's response to major holidays and news events @cite_30 , how conversational partners mirror each other @cite_20 , the topical and emotional content of blogs @cite_4 @cite_42 @cite_24 , and whether one person's writing may influence her friends when she posts to social media like Facebook @cite_38 or Twitter @cite_0 . Each of these analyses builds a model of the categories that represent its constructs of interest, or uses a word-category dictionary such as LIWC. Through Empath, we aim to empower researchers with the ability to generate and validate these categories.
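Analyses built on word-category dictionaries like LIWC typically reduce to counting category-term matches and normalizing by document length. This is a minimal sketch with a made-up two-category mini-lexicon; real dictionaries contain dozens of categories and thousands of (often stemmed or wildcarded) entries.

```python
import re
from collections import Counter

# Hypothetical mini-lexicon in the style of a word-category dictionary.
LEXICON = {
    "positive_emotion": {"happy", "joy", "love", "great"},
    "negative_emotion": {"sad", "angry", "hate", "terrible"},
}

def category_counts(text):
    """Fraction of tokens falling in each category, as LIWC-style tools report."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for tok in tokens:
        for cat, words in LEXICON.items():
            if tok in words:
                counts[cat] += 1
    return {cat: counts[cat] / len(tokens) for cat in LEXICON}

scores = category_counts("I love this great holiday, but the traffic was terrible.")
print(scores)
```

The sentence has ten tokens, two positive-emotion hits ("love", "great") and one negative-emotion hit ("terrible"), giving scores of 0.2 and 0.1 respectively.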
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_4", "@cite_42", "@cite_24", "@cite_0", "@cite_20" ], "mid": [ "2949965121", "1968380849", "2040467972", "", "", "2263837627", "2407706885" ], "abstract": [ "Microblogging is a form of online communication by which users broadcast brief text updates, also known as tweets, to the public or a selected circle of contacts. A variegated mosaic of microblogging uses has emerged since the launch of Twitter in 2006: daily chatter, conversation, information sharing, and news commentary, among others. Regardless of their content and intended use, tweets often convey pertinent information about their author's mood status. As such, tweets can be regarded as temporally-authentic microscopic instantiations of public mood state. In this article, we perform a sentiment analysis of all public tweets broadcasted by Twitter users between August 1 and December 20, 2008. For every day in the timeline, we extract six dimensions of mood (tension, depression, anger, vigor, fatigue, confusion) using an extended version of the Profile of Mood States (POMS), a well-established psychometric instrument. We compare our results to fluctuations recorded by stock market and crude oil price indices and major events in media and popular culture, such as the U.S. Presidential Election of November 4, 2008 and Thanksgiving Day. We find that events in the social, political, cultural and economic sphere do have a significant, immediate and highly specific effect on the various dimensions of public mood. We speculate that large scale analyses of mood can provide a solid platform to model collective emotive trends in terms of their predictive value with regards to existing social as well as economic indicators.", "Emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. 
Emotional contagion is well established in laboratory experiments, with people transferring positive and negative emotions to others. Data from a large real-world social network, collected over a 20-y period suggests that longer-lasting moods (e.g., depression, happiness) can be transferred through networks [Fowler JH, Christakis NA (2008) BMJ 337:a2338], although the results are controversial. In an experiment with people who use Facebook, we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks. This work also suggests that, in contrast to prevailing assumptions, in-person interaction and nonverbal cues are not strictly necessary for emotional contagion, and that the observation of others’ positive experiences constitutes a positive experience for people.", "Even though considerable attention has been given to the polarity of words (positive and negative) and the creation of large polarity lexicons, research in emotion analysis has had to rely on limited and small emotion lexicons. In this paper, we show how the combined strength and wisdom of the crowds can be used to generate a large, high-quality, word–emotion and word–polarity association lexicon quickly and inexpensively. We enumerate the challenges in emotion annotation in a crowdsourcing scenario and propose solutions to address them. 
Most notably, in addition to questions about emotions associated with terms, we show how the inclusion of a word choice question can discourage malicious data entry, help to identify instances where the annotator may not be familiar with the target term (allowing us to reject such annotations), and help to obtain annotations at sense level (rather than at word level). We conducted experiments on how to formulate the emotion-annotation questions, and show that asking if a term is associated with an emotion leads to markedly higher interannotator agreement than that obtained by asking if a term evokes an emotion.", "", "", "We study the propagation of expression of moods from one individual to another in social media. Specifically, we utilize an ensemble of more than 200 moods, and follow their trails of retweets on Twitter. We examine the diffusion of these moods through the characteristics of the author of the retweeted post, of the post, and its linguistic style and topical content. We observe that moods of high valence and low to moderate activation propagate the most. Our findings also indicate that mood-bearing posts from individuals of medium #followers show highest diffusion, instead of elite users. Further, moods in postings with high self-attentional focus, posted by socially interactive women, and with fewer links diffuse the most. Finally, we leverage these characteristics to build a linear regression model that can predict extent of diffusion in a mood-bearing post. The model yields accuracy of 75 showing that user's attributes as well as post's language characteristics are key factors driving the diffusion of moods in social media.", "We present a computational framework for understanding the social aspects of emotions in Twitter conversations. Using unannotated data and semisupervised machine learning, we look at emotional transitions, emotional influences among the conversation partners, and patterns in the overall emotional exchanges. 
We find that conversational partners usually express the same emotion, which we name Emotion accommodation, but when they do not, one of the conversational partners tends to respond with a positive emotion. We also show that tweets containing sympathy, apology, and complaint are significant emotion influencers. We verify the emotion classification part of our framework by a human-annotated corpus." ] }
1602.06979
2273847690
Human language is colored by a broad range of topics, but existing text analysis tools only focus on a small number of them. We present Empath, a tool that can generate and validate new lexical categories on demand from a small set of seed terms (like "bleed" and "punch" to generate the category violence). Empath draws connotations between words and phrases by deep learning a neural embedding across more than 1.8 billion words of modern fiction. Given a small set of seed words that characterize a category, Empath uses its neural embedding to discover new related terms, then validates the category with a crowd-powered filter. Empath also analyzes text across 200 built-in, pre-validated categories we have generated from common topics in our web dataset, like neglect, government, and social media. We show that Empath's data-driven, human validated categories are highly correlated (r=0.906) with similar categories in LIWC.
Other work in human-computer interaction has relied upon text analysis tools to build new interactive systems. For example, researchers have automatically generated audio transitions for interviews, cued by signals of mood in the transcripts @cite_19 , dynamically generated soundtracks for novels using an emotional lexicon @cite_28 , and mapped ambiguous natural language onto its visual meaning @cite_3 . Empath's ability to generate lexical categories on demand potentially enables new interactive systems, cued on nuanced emotional signals, or on diverse topics that fit a new domain.
{ "cite_N": [ "@cite_28", "@cite_19", "@cite_3" ], "mid": [ "1814626764", "2121277371", "2274505579" ], "abstract": [ "We present a system, TransProse, that automatically generates musical pieces from text. TransProse uses known relations between elements of music such as tempo and scale, and the emotions they evoke. Further, it uses a novel mechanism to determine sequences of notes that capture the emotional activity in text. The work has applications in information visualization, in creating audio-visual e-books, and in developing music apps.", "Highly-produced audio stories often include musical scores that reflect the emotions of the speech. Yet, creating effective musical scores requires deep expertise in sound production and is time-consuming even for experts. We present a system and algorithm for re-sequencing music tracks to generate emotionally relevant music scores for audio stories. The user provides a speech track and music tracks and our system gathers emotion labels on the speech through hand-labeling, crowdsourcing, and automatic methods. We develop a constraint-based dynamic programming algorithm that uses these emotion labels to generate emotionally relevant musical scores. We demonstrate the effectiveness of our algorithm by generating 20 musical scores for audio stories and showing that crowd workers rank their overall quality significantly higher than stories without music.", "Answering questions with data is a difficult and time-consuming process. Visual dashboards and templates make it easy to get started, but asking more sophisticated questions often requires learning a tool designed for expert analysts. Natural language interaction allows users to ask questions directly in complex programs without having to learn how to use an interface. However, natural language is often ambiguous. In this work we propose a mixed-initiative approach to managing ambiguity in natural language interfaces for data visualization. 
We model ambiguity throughout the process of turning a natural language query into a visualization and use algorithmic disambiguation coupled with interactive ambiguity widgets. These widgets allow the user to resolve ambiguities by surfacing system decisions at the point where the ambiguity matters. Corrections are stored as constraints and influence subsequent queries. We have implemented these ideas in a system, DataTone. In a comparative study, we find that DataTone is easy to learn and lets users ask questions without worrying about syntax and proper question form." ] }
1602.06979
2273847690
Human language is colored by a broad range of topics, but existing text analysis tools only focus on a small number of them. We present Empath, a tool that can generate and validate new lexical categories on demand from a small set of seed terms (like "bleed" and "punch" to generate the category violence). Empath draws connotations between words and phrases by deep learning a neural embedding across more than 1.8 billion words of modern fiction. Given a small set of seed words that characterize a category, Empath uses its neural embedding to discover new related terms, then validates the category with a crowd-powered filter. Empath also analyzes text across 200 built-in, pre-validated categories we have generated from common topics in our web dataset, like neglect, government, and social media. We show that Empath's data-driven, human validated categories are highly correlated (r=0.906) with similar categories in LIWC.
A large body of prior work has investigated unsupervised language modeling. For example, researchers have learned sentiment models from the relationships between words @cite_46 , classified the polarity of reviews in an unsupervised fashion @cite_6 , discovered patterns of narrative in text @cite_39 , and (more recently) used neural networks to model word meanings in a vector space @cite_26 @cite_32 . We borrow from the last of these approaches in constructing Empath's unsupervised model.
{ "cite_N": [ "@cite_26", "@cite_32", "@cite_6", "@cite_39", "@cite_46" ], "mid": [ "2153579005", "", "2155328222", "2158794898", "2199803028" ], "abstract": [ "The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.", "", "This paper presents a simple unsupervised learning algorithm for classifying reviews as recommended (thumbs up) or not recommended (thumbs down). The classification of a review is predicted by the average semantic orientation of the phrases in the review that contain adjectives or adverbs. A phrase has a positive semantic orientation when it has good associations (e.g., \"subtle nuances\") and a negative semantic orientation when it has bad associations (e.g., \"very cavalier\"). In this paper, the semantic orientation of a phrase is calculated as the mutual information between the given phrase and the word \"excellent\" minus the mutual information between the given phrase and the word \"poor\". A review is classified as recommended if the average semantic orientation of its phrases is positive. 
The algorithm achieves an average accuracy of 74% when evaluated on 410 reviews from Epinions, sampled from four different domains (reviews of automobiles, banks, movies, and travel destinations). The accuracy ranges from 84% for automobile reviews to 66% for movie reviews.", "We describe an unsupervised system for learning narrative schemas, coherent sequences or sets of events (arrested(POLICE, SUSPECT), convicted(JUDGE, SUSPECT)) whose arguments are filled with participant semantic roles defined over words (Judge = judge, jury, court , Police = police, agent, authorities ). Unlike most previous work in event structure or semantic role learning, our system does not use supervised techniques, hand-built knowledge, or predefined classes of events or roles. Our unsupervised learning algorithm uses coreferring arguments in chains of verbs to learn both rich narrative event structure and argument roles. By jointly addressing both tasks, we improve on previous results in narrative frame learning and induce rich frame-specific semantic roles.", "We identify and validate from a large corpus constraints from conjunctions on the positive or negative semantic orientation of the conjoined adjectives. A log-linear regression model uses these constraints to predict whether conjoined adjectives are of same or different orientations, achieving 82% accuracy in this task when each conjunction is considered independently. Combining the constraints across many adjectives, a clustering algorithm separates the adjectives into groups of different orientations, and finally, adjectives are labeled positive or negative. Evaluations on real data and simulation experiments indicate high levels of performance: classification precision is more than 90% for adjectives that occur in a modest number of conjunctions in the corpus." ] }
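The unsupervised review-polarity method quoted above computes a phrase's semantic orientation as its PMI with "excellent" minus its PMI with "poor". The sketch below follows that formula, but all co-occurrence counts, corpus sizes, and phrases are fabricated for illustration; the original estimates these quantities from search-engine hit counts.

```python
import math

# Fabricated co-occurrence statistics standing in for corpus hit counts:
# how often each phrase appears near the anchor words "excellent" and "poor".
COUNTS = {
    ("subtle nuances", "excellent"): 120,
    ("subtle nuances", "poor"): 10,
    ("very cavalier", "excellent"): 8,
    ("very cavalier", "poor"): 90,
}
PHRASE_TOTAL = {"subtle nuances": 1000, "very cavalier": 1000}
ANCHOR_TOTAL = {"excellent": 50_000, "poor": 50_000}
N = 10_000_000  # hypothetical corpus size

def pmi(phrase, anchor):
    """Pointwise mutual information between a phrase and an anchor word."""
    joint = COUNTS[(phrase, anchor)] / N
    return math.log2(joint / ((PHRASE_TOTAL[phrase] / N) * (ANCHOR_TOTAL[anchor] / N)))

def semantic_orientation(phrase):
    """SO(phrase) = PMI(phrase, "excellent") - PMI(phrase, "poor")."""
    return pmi(phrase, "excellent") - pmi(phrase, "poor")

print(semantic_orientation("subtle nuances") > 0)  # True: positive orientation
print(semantic_orientation("very cavalier") > 0)   # False: negative orientation
```

A review is then classified as recommended when the average orientation of its extracted phrases is positive.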
1602.06979
2273847690
Human language is colored by a broad range of topics, but existing text analysis tools only focus on a small number of them. We present Empath, a tool that can generate and validate new lexical categories on demand from a small set of seed terms (like "bleed" and "punch" to generate the category violence). Empath draws connotations between words and phrases by deep learning a neural embedding across more than 1.8 billion words of modern fiction. Given a small set of seed words that characterize a category, Empath uses its neural embedding to discover new related terms, then validates the category with a crowd-powered filter. Empath also analyzes text across 200 built-in, pre-validated categories we have generated from common topics in our web dataset, like neglect, government, and social media. We show that Empath's data-driven, human validated categories are highly correlated (r=0.906) with similar categories in LIWC.
Empath also takes inspiration from techniques for mining human patterns from data. Augur likewise mines fiction, but it does so to learn human activities for interactive systems @cite_12 . Augur's evaluation indicated that with regard to low-level behaviors such as actions, fiction provides a surprisingly accurate mirror of human behavior. Empath contributes a different perspective, that fiction can be an appropriate tool for learning a breadth of topical and emotional categories, to the benefit of social science. In other research communities, systems have used unsupervised models to capture emergent practice in open source code @cite_27 or design @cite_40 . In Empath, we adapt these techniques to mine natural language for its relation to emotional and topical categories.
{ "cite_N": [ "@cite_40", "@cite_27", "@cite_12" ], "mid": [ "2007644286", "2125990861", "2281258985" ], "abstract": [ "Advances in data mining and knowledge discovery have transformed the way Web sites are designed. However, while visual presentation is an intrinsic part of the Web, traditional data mining techniques ignore render-time page structures and their attributes. This paper introduces design mining for the Web: using knowledge discovery techniques to understand design demographics, automate design curation, and support data-driven design tools. This idea is manifest in Webzeitgeist, a platform for large-scale design mining comprising a repository of over 100,000 Web pages and 100 million design elements. This paper describes the principles driving design mining, the implementation of the Webzeitgeist architecture, and the new class of data-driven design applications it enables.", "While emergent behaviors are uncodified across many domains such as programming and writing, interfaces need explicit rules to support users. We hypothesize that by codifying emergent programming behavior, software engineering interfaces can support a far broader set of developer needs. To explore this idea, we built Codex, a knowledge base that records common practice for the Ruby programming language by indexing over three million lines of popular code. Codex enables new data-driven interfaces for programming systems: statistical linting, identifying code that is unlikely to occur in practice and may constitute a bug; pattern annotation, automatically discovering common programming idioms and annotating them with metadata using expert crowdsourcing; and library generation, constructing a utility package that encapsulates and reflects emergent software practice. 
We evaluate these applications to find Codex captures a broad swath of programming practice, statistical linting detects problematic code snippets, and pattern annotation discovers nontrivial idioms such as basic HTTP authentication and database migration templates. Our work suggests that operationalizing practice-driven knowledge in structured domains such as programming can enable a new class of user interfaces.", "From smart homes that prepare coffee when we wake, to phones that know not to interrupt us during important conversations, our collective visions of HCI imagine a future in which computers understand a broad range of human behaviors. Today our systems fall short of these visions, however, because this range of behaviors is too large for designers or programmers to capture manually. In this paper, we instead demonstrate it is possible to mine a broad knowledge base of human behavior by analyzing more than one billion words of modern fiction. Our resulting knowledge base, Augur, trains vector models that can predict many thousands of user activities from surrounding objects in modern contexts: for example, whether a user may be eating food, meeting with a friend, or taking a selfie. Augur uses these predictions to identify actions that people commonly take on objects in the world and estimate a user's future activities given their current situation. We demonstrate Augur-powered, activity-based systems such as a phone that silences itself when the odds of you answering it are low, and a dynamic music player that adjusts to your present activity. A field deployment of an Augur-powered wearable camera resulted in 96% recall and 71% precision on its unsupervised predictions of common daily activities. A second evaluation where human judges rated the system's predictions over a broad set of input images found that 94% were rated sensible." ] }
1602.06697
2287105829
Hashing is widely applied to approximate nearest neighbor search for large-scale multimodal retrieval with storage and computation efficiency. Cross-modal hashing improves the quality of hash coding by exploiting semantic correlations across different modalities. Existing cross-modal hashing methods first transform data into low-dimensional feature vectors, and then generate binary codes by another separate quantization step. However, suboptimal hash codes may be generated since the quantization error is not explicitly minimized and the feature representation is not jointly optimized with the binary codes. This paper presents a Correlation Hashing Network (CHN) approach to cross-modal hashing, which jointly learns good data representation tailored to hash coding and formally controls the quantization error. The proposed CHN is a hybrid deep architecture that constitutes a convolutional neural network for learning good image representations, a multilayer perception for learning good text representations, two hashing layers for generating compact binary codes, and a structured max-margin loss that integrates all things together to enable learning similarity-preserving and high-quality hash codes. Extensive empirical study shows that CHN yields state of the art cross-modal retrieval performance on standard benchmarks.
Cross-modal hashing has been an increasingly popular research topic in machine learning, computer vision, and multimedia retrieval @cite_5 @cite_34 @cite_6 @cite_16 @cite_31 @cite_24 @cite_21 @cite_9 @cite_25 @cite_8 @cite_0 . We refer readers to @cite_35 for a comprehensive survey.
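Once cross-modal hash codes are learned, retrieval reduces to sign-binarization of the real-valued embeddings followed by Hamming-distance ranking. The sketch below illustrates only that final step; the text-query and image embeddings are hand-made stand-ins, not outputs of any of the cited models, which learn them jointly.

```python
def binarize(vec):
    """Quantize a real-valued embedding to a binary hash code via sign()."""
    return tuple(1 if x >= 0 else 0 for x in vec)

def hamming(a, b):
    """Number of differing bits between two equal-length hash codes."""
    return sum(x != y for x, y in zip(a, b))

# Hypothetical learned embeddings: a text query and image database items.
text_query = binarize([0.7, -0.2, 0.9, -0.5])
images = {
    "img_cat": binarize([0.6, -0.1, 0.8, -0.4]),   # semantically matching item
    "img_car": binarize([-0.5, 0.3, -0.7, 0.6]),   # unrelated item
}

# Rank database items by Hamming distance to the query code.
ranked = sorted(images, key=lambda k: hamming(text_query, images[k]))
print(ranked[0])  # img_cat: smallest Hamming distance to the query
```

The quantization-error argument in CHN is precisely about this sign() step: if the real-valued representations are not pushed toward binary values during training, binarizing them afterward can discard similarity structure.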
{ "cite_N": [ "@cite_35", "@cite_8", "@cite_9", "@cite_21", "@cite_6", "@cite_16", "@cite_24", "@cite_0", "@cite_5", "@cite_31", "@cite_34", "@cite_25" ], "mid": [ "1870428314", "2203543769", "1964073652", "1996219872", "2171626009", "2064797228", "2251084241", "", "1970055505", "2049993534", "199018803", "1979644923" ], "abstract": [ "Similarity search (nearest neighbor search) is a problem of pursuing the data items whose distances to a query item are the smallest from a large database. Various methods have been developed to address this problem, and recently a lot of efforts have been devoted to approximate search. In this paper, we present a survey on one of the main solutions, hashing, which has been widely studied since the pioneering work locality sensitive hashing. We divide the hashing algorithms two main categories: locality sensitive hashing, which designs hash functions without exploring the data distribution and learning to hash, which learns hash functions according the data distribution, and review them from various aspects, including hash function design and distance measure and search scheme in the hash coding space.", "Due to its low storage cost and fast query speed, hashing has been widely adopted for similarity search in multimedia data. In particular, more and more attentions have been payed to multimodal hashing for search in multimedia data with multiple modalities, such as images with tags. Typically, supervised information of semantic labels is also available for the data points in many real applications. Hence, many supervised multimodal hashing (SMH) methods have been proposed to utilize such semantic labels to further improve the search accuracy. However, the training time complexity of most existing SMH methods is too high, which makes them unscalable to large-scale datasets. 
In this paper, a novel SMH method, called semantic correlation maximization (SCM), is proposed to seamlessly integrate semantic labels into the hashing learning procedure for large-scale data modeling. Experimental results on two real-world datasets show that SCM can significantly outperform the state-of-the-art SMH methods, in terms of both accuracy and scalability.", "The problem of cross-modal retrieval, e.g., using a text query to search for images and vice-versa, is considered in this paper. A novel model involving correspondence autoencoder (Corr-AE) is proposed here for solving this problem. The model is constructed by correlating hidden representations of two uni-modal autoencoders. A novel optimal objective, which minimizes a linear combination of representation learning errors for each modality and correlation learning error between hidden representations of two modalities, is used to train the model as a whole. Minimization of correlation learning error forces the model to learn hidden representations with only common information in different modalities, while minimization of representation learning error makes hidden representations are good enough to reconstruct input of each modality. A parameter @math is used to balance the representation learning error and the correlation learning error. Based on two different multi-modal autoencoders, Corr-AE is extended to other two correspondence models, here we called Corr-Cross-AE and Corr-Full-AE. The proposed models are evaluated on three publicly available data sets from real scenes. We demonstrate that the three correspondence autoencoders perform significantly better than three canonical correlation analysis based models and two popular multi-modal deep models on cross-modal retrieval tasks.", "Cross-media hashing, which conducts cross-media retrieval by embedding data from different modalities into a common low-dimensional Hamming space, has attracted intensive attention in recent years. 
The existing cross-media hashing approaches only aim at learning hash functions to preserve the intra-modality and inter-modality correlations, but do not directly capture the underlying semantic information of the multi-modal data. We propose a discriminative coupled dictionary hashing (DCDH) method in this paper. In DCDH, the coupled dictionary for each modality is learned with side information (e.g., categories). As a result, the coupled dictionaries not only preserve the intra-similarity and inter-correlation among multi-modal data, but also contain dictionary atoms that are semantically discriminative (i.e., the data from the same category is reconstructed by the similar dictionary atoms). To perform fast cross-media retrieval, we learn hash functions which map data from the dictionary space to a low-dimensional Hamming space. Besides, we conjecture that a balanced representation is crucial in cross-media retrieval. We introduce multi-view features on the relatively weak'' modalities into DCDH and extend it to multi-view DCDH (MV-DCDH) in order to enhance their representation capability. The experiments on two real-world data sets show that our DCDH and MV-DCDH outperform the state-of-the-art methods significantly on cross-media retrieval.", "Hashing-based methods provide a very promising approach to large-scale similarity search. To obtain compact hash codes, a recent trend seeks to learn the hash functions from data automatically. In this paper, we study hash function learning in the context of multimodal data. We propose a novel multimodal hash function learning method, called Co-Regularized Hashing (CRH), based on a boosted co-regularization framework. The hash functions for each bit of the hash codes are learned by solving DC (difference of convex functions) programs, while the learning for multiple bits proceeds via a boosting procedure so that the bias introduced by the hash functions can be sequentially minimized. 
We empirically compare CRH with two state-of-the-art multimodal hash function learning methods on two publicly available data sets.", "In recent years, both hashing-based similarity search and multimodal similarity search have aroused much research interest in the data mining and other communities. While hashing-based similarity search seeks to address the scalability issue, multimodal similarity search deals with applications in which data of multiple modalities are available. In this paper, our goal is to address both issues simultaneously. We propose a probabilistic model, called multimodal latent binary embedding (MLBE), to learn hash functions from multimodal data automatically. MLBE regards the binary latent factors as hash codes in a common Hamming space. Given data from multiple modalities, we devise an efficient algorithm for the learning of binary latent factors which corresponds to hash function learning. Experimental validation of MLBE has been conducted using both synthetic data and two realistic data sets. Experimental results show that MLBE compares favorably with two state-of-the-art models.", "Multi-modal retrieval is emerging as a new search paradigm that enables seamless information retrieval from various types of media. For example, users can simply snap a movie poster to search relevant reviews and trailers. To solve the problem, a set of mapping functions are learned to project high-dimensional features extracted from data of different media types into a common low-dimensional space so that metric distance measures can be applied. In this paper, we propose an effective mapping mechanism based on deep learning (i.e., stacked auto-encoders) for multi-modal retrieval. Mapping functions are learned by optimizing a new objective function, which captures both intra-modal and inter-modal semantic relationships of data from heterogeneous sources effectively. 
Compared with previous works which require a substantial amount of prior knowledge such as similarity matrices of intra-modal data and ranking examples, our method requires little prior knowledge. Given a large training dataset, we split it into mini-batches and continually adjust the mapping functions for each batch of input. Hence, our method is memory efficient with respect to the data volume. Experiments on three real datasets illustrate that our proposed method achieves significant improvement in search accuracy over the state-of-the-art methods.", "", "Visual understanding is often based on measuring similarity between observations. Learning similarities specific to a certain perception task from a set of examples has been shown advantageous in various computer vision and pattern recognition problems. In many important applications, the data that one needs to compare come from different representations or modalities, and the similarity between such data operates on objects that may have different and often incommensurable structure and dimensionality. In this paper, we propose a framework for supervised similarity learning based on embedding the input data from two arbitrary spaces into the Hamming space. The mapping is expressed as a binary classification problem with positive and negative examples, and can be efficiently learned using boosting algorithms. The utility and efficiency of such a generic approach is demonstrated on several challenging applications including cross-representation shape retrieval and alignment of multi-modal medical images.", "In this paper, we present a new multimedia retrieval paradigm to innovate large-scale search of heterogenous multimedia data. It is able to return results of different media types from heterogeneous data sources, e.g., using a query image to retrieve relevant text documents or images from different data sources. 
This utilizes the widely available data from different sources and caters for the current users' demand of receiving a result list simultaneously containing multiple types of data to obtain a comprehensive understanding of the query's results. To enable large-scale inter-media retrieval, we propose a novel inter-media hashing (IMH) model to explore the correlations among multiple media types from different data sources and tackle the scalability issue. To this end, multimedia data from heterogeneous data sources are transformed into a common Hamming space, in which fast search can be easily implemented by XOR and bit-count operations. Furthermore, we integrate a linear regression model to learn hashing functions so that the hash codes for new data points can be efficiently generated. Experiments conducted on real-world large-scale multimedia datasets demonstrate the superiority of our proposed method compared with state-of-the-art techniques.", "Many applications in Multilingual and Multimodal Information Access involve searching large databases of high dimensional data objects with multiple (conditionally independent) views. In this work we consider the problem of learning hash functions for similarity search across the views for such applications. We propose a principled method for learning a hash function for each view given a set of multiview training data objects. The hash functions map similar objects to similar codes across the views thus enabling cross-view similarity search. We present results from an extensive empirical study of the proposed approach which demonstrate its effectiveness on Japanese language People Search and Multilingual People Search problems.", "Cross media retrieval engines have gained massive popularity with rapid development of the Internet. Users may perform queries in a corpus consisting of audio, video, and textual information. 
To make such systems practically possible for large mount of multimedia data, two critical issues must be carefully considered: (a) reduce the storage as much as possible; (b) model the relationship of the heterogeneous media data. Recently academic community have proved that encoding the data into compact binary codes can drastically reduce the storage and computational cost. However, it is still unclear how to integrate multiple information sources properly into the binary code encoding scheme. In this paper, we study the cross media indexing problem by learning the discriminative hashing functions to map the multi-view datum into a shared hamming space. Not only meaningful within-view similarity is required to be preserved, we also incorporate the between-view correlations into the encoding scheme, where we map the similar points close together and push apart the dissimilar ones. To this end, we propose a novel hashing algorithm called Iterative Multi-View Hashing (IMVH) by taking these information into account simultaneously. To solve this joint optimization problem efficiently, we further develop an iterative scheme to deal with it by using a more flexible quantization model. In particular, an optimal alignment is learned to maintain the between-view similarity in the encoding scheme. And the binary codes are obtained by directly solving a series of binary label assignment problems without continuous relaxation to avoid the unnecessary quantization loss. In this way, the proposed algorithm not only greatly improves the retrieval accuracy but also performs strong robustness. An extensive set of experiments clearly demonstrates the superior performance of the proposed method against the state-of-the-art techniques on both multimodal and unimodal retrieval tasks." ] }
1602.06697
2287105829
Hashing is widely applied to approximate nearest neighbor search for large-scale multimodal retrieval with storage and computation efficiency. Cross-modal hashing improves the quality of hash coding by exploiting semantic correlations across different modalities. Existing cross-modal hashing methods first transform data into low-dimensional feature vectors, and then generate binary codes by another separate quantization step. However, suboptimal hash codes may be generated since the quantization error is not explicitly minimized and the feature representation is not jointly optimized with the binary codes. This paper presents a Correlation Hashing Network (CHN) approach to cross-modal hashing, which jointly learns good data representations tailored to hash coding and formally controls the quantization error. The proposed CHN is a hybrid deep architecture consisting of a convolutional neural network for learning good image representations, a multilayer perceptron for learning good text representations, two hashing layers for generating compact binary codes, and a structured max-margin loss that integrates all components to enable learning similarity-preserving and high-quality hash codes. An extensive empirical study shows that CHN yields state-of-the-art cross-modal retrieval performance on standard benchmarks.
Existing cross-modal hashing methods can be roughly categorized into unsupervised and supervised methods. IMH @cite_31 and CVH @cite_34 are unsupervised methods, which extend spectral hashing @cite_33 to multimodal scenarios. CMSSH @cite_5 , SCM @cite_8 and QCH @cite_19 are supervised methods, which require that if two points are known to be similar, their corresponding hash codes from different modalities should also be similar. Since supervised methods can exploit semantic labels to enhance cross-modal correlations and reduce the semantic gap @cite_13 , they achieve higher accuracy than unsupervised methods for cross-modal similarity search with shorter hash codes.
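The cross-modal search setup described above can be sketched in a few lines: each modality is hashed into a common binary code space, and retrieval ranks database codes by Hamming distance (XOR plus a bit count). This is a toy sketch, not any cited method's implementation; the random linear projections stand in for learned hash functions, and names like `to_binary_codes` are illustrative.

```python
import numpy as np

def to_binary_codes(features, projection):
    """Hash real-valued features into {0,1} codes via the sign of a linear projection."""
    return (features @ projection > 0).astype(np.uint8)

def hamming_distances(query_code, database_codes):
    """Hamming distance = number of differing bits (XOR, then count nonzeros)."""
    return np.count_nonzero(query_code ^ database_codes, axis=1)

rng = np.random.default_rng(0)
img_feats = rng.normal(size=(100, 64))   # image-modality features
txt_feats = rng.normal(size=(100, 32))   # text-modality features
P_img = rng.normal(size=(64, 16))        # per-modality projections (random here; learned in practice)
P_txt = rng.normal(size=(32, 16))

img_codes = to_binary_codes(img_feats, P_img)
txt_codes = to_binary_codes(txt_feats, P_txt)

# Retrieve images for a text query by ranking Hamming distances in the shared code space
query = txt_codes[0]
ranking = np.argsort(hamming_distances(query, img_codes))
print(ranking[:5])
```

The point of the binary representation is that this ranking step costs only bitwise operations per database item, which is what makes large-scale multimodal search tractable.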
{ "cite_N": [ "@cite_33", "@cite_8", "@cite_19", "@cite_5", "@cite_31", "@cite_34", "@cite_13" ], "mid": [ "", "2203543769", "2267050401", "1970055505", "2049993534", "199018803", "2130660124" ], "abstract": [ "", "Due to its low storage cost and fast query speed, hashing has been widely adopted for similarity search in multimedia data. In particular, more and more attentions have been payed to multimodal hashing for search in multimedia data with multiple modalities, such as images with tags. Typically, supervised information of semantic labels is also available for the data points in many real applications. Hence, many supervised multimodal hashing (SMH) methods have been proposed to utilize such semantic labels to further improve the search accuracy. However, the training time complexity of most existing SMH methods is too high, which makes them unscalable to large-scale datasets. In this paper, a novel SMH method, called semantic correlation maximization (SCM), is proposed to seamlessly integrate semantic labels into the hashing learning procedure for large-scale data modeling. Experimental results on two real-world datasets show that SCM can significantly outperform the state-of-the-art SMH methods, in terms of both accuracy and scalability.", "Cross-modal hashing is designed to facilitate fast search across domains. In this work, we present a cross-modal hashing approach, called quantized correlation hashing (QCH), which takes into consideration the quantization loss over domains and the relation between domains. Unlike previous approaches that separate the optimization of the quantizer independent of maximization of domain correlation, our approach simultaneously optimizes both processes. The underlying relation between the domains that describes the same objects is established via maximizing the correlation between the hash codes across the domains. 
The resulting multi-modal objective function is transformed to a unimodal formalization, which is optimized through an alternative procedure. Experimental results on three real world datasets demonstrate that our approach outperforms the state-of-the-art multi-modal hashing methods.", "Visual understanding is often based on measuring similarity between observations. Learning similarities specific to a certain perception task from a set of examples has been shown advantageous in various computer vision and pattern recognition problems. In many important applications, the data that one needs to compare come from different representations or modalities, and the similarity between such data operates on objects that may have different and often incommensurable structure and dimensionality. In this paper, we propose a framework for supervised similarity learning based on embedding the input data from two arbitrary spaces into the Hamming space. The mapping is expressed as a binary classification problem with positive and negative examples, and can be efficiently learned using boosting algorithms. The utility and efficiency of such a generic approach is demonstrated on several challenging applications including cross-representation shape retrieval and alignment of multi-modal medical images.", "In this paper, we present a new multimedia retrieval paradigm to innovate large-scale search of heterogenous multimedia data. It is able to return results of different media types from heterogeneous data sources, e.g., using a query image to retrieve relevant text documents or images from different data sources. This utilizes the widely available data from different sources and caters for the current users' demand of receiving a result list simultaneously containing multiple types of data to obtain a comprehensive understanding of the query's results. 
To enable large-scale inter-media retrieval, we propose a novel inter-media hashing (IMH) model to explore the correlations among multiple media types from different data sources and tackle the scalability issue. To this end, multimedia data from heterogeneous data sources are transformed into a common Hamming space, in which fast search can be easily implemented by XOR and bit-count operations. Furthermore, we integrate a linear regression model to learn hashing functions so that the hash codes for new data points can be efficiently generated. Experiments conducted on real-world large-scale multimedia datasets demonstrate the superiority of our proposed method compared with state-of-the-art techniques.", "Many applications in Multilingual and Multimodal Information Access involve searching large databases of high dimensional data objects with multiple (conditionally independent) views. In this work we consider the problem of learning hash functions for similarity search across the views for such applications. We propose a principled method for learning a hash function for each view given a set of multiview training data objects. The hash functions map similar objects to similar codes across the views thus enabling cross-view similarity search. We present results from an extensive empirical study of the proposed approach which demonstrate its effectiveness on Japanese language People Search and Multilingual People Search problems.", "Presents a review of 200 references in content-based image retrieval. The paper starts with discussing the working conditions of content-based retrieval: patterns of use, types of pictures, the role of semantics, and the sensory gap. Subsequent sections discuss computational steps for image retrieval systems. Step one of the review is image processing for retrieval sorted by color, texture, and local geometry. 
Features for retrieval are discussed next, sorted by: accumulative and global features, salient points, object and shape features, signs, and structural combinations thereof. Similarity of pictures and objects in pictures is reviewed for each of the feature types, in close connection to the types and means of feedback the user of the systems is capable of giving by interaction. We briefly discuss aspects of system engineering: databases, system architecture, and evaluation. In the concluding section, we present our view on: the driving force of the field, the heritage from computer vision, the influence on computer vision, the role of similarity and of interaction, the need for databases, the problem of evaluation, and the role of the semantic gap." ] }
1602.06697
2287105829
Hashing is widely applied to approximate nearest neighbor search for large-scale multimodal retrieval with storage and computation efficiency. Cross-modal hashing improves the quality of hash coding by exploiting semantic correlations across different modalities. Existing cross-modal hashing methods first transform data into low-dimensional feature vectors, and then generate binary codes by another separate quantization step. However, suboptimal hash codes may be generated since the quantization error is not explicitly minimized and the feature representation is not jointly optimized with the binary codes. This paper presents a Correlation Hashing Network (CHN) approach to cross-modal hashing, which jointly learns good data representations tailored to hash coding and formally controls the quantization error. The proposed CHN is a hybrid deep architecture consisting of a convolutional neural network for learning good image representations, a multilayer perceptron for learning good text representations, two hashing layers for generating compact binary codes, and a structured max-margin loss that integrates all components to enable learning similarity-preserving and high-quality hash codes. An extensive empirical study shows that CHN yields state-of-the-art cross-modal retrieval performance on standard benchmarks.
Most previous cross-modal hashing methods are based on shallow architectures and cannot effectively exploit the correlations across different modalities. The latest cross-modal deep models @cite_27 @cite_2 @cite_24 @cite_9 have shown that deep architectures can capture nonlinear cross-modal correlations more effectively. However, it remains unclear how to maximize cross-modal correlation and control the quantization error in a hybrid deep architecture using well-specified loss functions.
{ "cite_N": [ "@cite_24", "@cite_9", "@cite_27", "@cite_2" ], "mid": [ "2251084241", "1964073652", "2087193308", "154472438" ], "abstract": [ "Multi-modal retrieval is emerging as a new search paradigm that enables seamless information retrieval from various types of media. For example, users can simply snap a movie poster to search relevant reviews and trailers. To solve the problem, a set of mapping functions are learned to project high-dimensional features extracted from data of different media types into a common low-dimensional space so that metric distance measures can be applied. In this paper, we propose an effective mapping mechanism based on deep learning (i.e., stacked auto-encoders) for multi-modal retrieval. Mapping functions are learned by optimizing a new objective function, which captures both intra-modal and inter-modal semantic relationships of data from heterogeneous sources effectively. Compared with previous works which require a substantial amount of prior knowledge such as similarity matrices of intra-modal data and ranking examples, our method requires little prior knowledge. Given a large training dataset, we split it into mini-batches and continually adjust the mapping functions for each batch of input. Hence, our method is memory efficient with respect to the data volume. Experiments on three real datasets illustrate that our proposed method achieves significant improvement in search accuracy over the state-of-the-art methods.", "The problem of cross-modal retrieval, e.g., using a text query to search for images and vice-versa, is considered in this paper. A novel model involving correspondence autoencoder (Corr-AE) is proposed here for solving this problem. The model is constructed by correlating hidden representations of two uni-modal autoencoders. 
A novel optimal objective, which minimizes a linear combination of representation learning errors for each modality and correlation learning error between hidden representations of two modalities, is used to train the model as a whole. Minimization of correlation learning error forces the model to learn hidden representations with only common information in different modalities, while minimization of representation learning error makes hidden representations are good enough to reconstruct input of each modality. A parameter @math is used to balance the representation learning error and the correlation learning error. Based on two different multi-modal autoencoders, Corr-AE is extended to other two correspondence models, here we called Corr-Cross-AE and Corr-Full-AE. The proposed models are evaluated on three publicly available data sets from real scenes. We demonstrate that the three correspondence autoencoders perform significantly better than three canonical correlation analysis based models and two popular multi-modal deep models on cross-modal retrieval tasks.", "We introduce an efficient computational framework for hashing data belonging to multiple modalities into a single representation space where they become mutually comparable. The proposed approach is based on a novel coupled siamese neural network architecture and allows unified treatment of intra- and inter-modality similarity learning. Unlike existing cross-modality similarity learning approaches, our hashing functions are not limited to binarized linear projections and can assume arbitrarily complex forms. We show experimentally that our method significantly outperforms state-of-the-art hashing approaches on multimedia retrieval tasks.", "Data often consists of multiple diverse modalities. For example, images are tagged with textual information and videos are accompanied by audio. Each modality is characterized by having distinct statistical properties. 
We propose a Deep Boltzmann Machine for learning a generative model of such multimodal data. We show that the model can be used to create fused representations by combining features across modalities. These learned representations are useful for classification and information retrieval. By sampling from the conditional distributions over each data modality, it is possible to create these representations even when some data modalities are missing. We conduct experiments on bimodal image-text and audio-video data. The fused representation achieves good classification results on the MIR-Flickr data set matching or outperforming other deep models as well as SVM based models that use Multiple Kernel Learning. We further demonstrate that this multimodal model helps classification and retrieval even when only unimodal data is available at test time." ] }
1602.06697
2287105829
Hashing is widely applied to approximate nearest neighbor search for large-scale multimodal retrieval with storage and computation efficiency. Cross-modal hashing improves the quality of hash coding by exploiting semantic correlations across different modalities. Existing cross-modal hashing methods first transform data into low-dimensional feature vectors, and then generate binary codes by another separate quantization step. However, suboptimal hash codes may be generated since the quantization error is not explicitly minimized and the feature representation is not jointly optimized with the binary codes. This paper presents a Correlation Hashing Network (CHN) approach to cross-modal hashing, which jointly learns good data representations tailored to hash coding and formally controls the quantization error. The proposed CHN is a hybrid deep architecture consisting of a convolutional neural network for learning good image representations, a multilayer perceptron for learning good text representations, two hashing layers for generating compact binary codes, and a structured max-margin loss that integrates all components to enable learning similarity-preserving and high-quality hash codes. An extensive empirical study shows that CHN yields state-of-the-art cross-modal retrieval performance on standard benchmarks.
In this work, we further extend existing deep hashing methods @cite_14 @cite_7 to cross-modal retrieval by addressing two key problems: (1) controlling the quantization error in a principled way, and (2) devising a more principled loss that links the pairwise Hamming distances with the pairwise similarity labels and reduces the gap between Hamming distance and cosine distance. These crucial improvements constitute the proposed Correlation Hashing Network (CHN) approach.
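The link between Hamming distance and cosine distance mentioned above is exact for binary codes: for codes in {-1, +1}^K, the identity d_H = (K - <b1, b2>) / 2 holds, so cosine similarity equals 1 - 2*d_H/K. A small numerical check of the identity, plus the quantization error of a continuous embedding against its binary code:

```python
import numpy as np

rng = np.random.default_rng(2)
K = 32
b1 = rng.choice([-1.0, 1.0], size=K)
b2 = rng.choice([-1.0, 1.0], size=K)

hamming = np.count_nonzero(b1 != b2)
inner = float(b1 @ b2)

# Each agreeing bit contributes +1 to <b1,b2>, each differing bit -1,
# so <b1,b2> = K - 2*d_H, i.e. d_H = (K - <b1,b2>) / 2.
assert hamming == (K - inner) / 2
cosine = inner / K                      # ||b|| = sqrt(K) for +/-1 codes
assert np.isclose(cosine, 1 - 2 * hamming / K)

# Quantization error of a continuous embedding u relative to its binary code sign(u)
u = rng.normal(size=K)
quant_err = np.sum((u - np.sign(u)) ** 2)
print(hamming, cosine, quant_err)
```

Because of this identity, a loss expressed on cosine similarities of continuous codes transfers to Hamming distances after quantization exactly to the extent that the quantization error is small, which is why the two problems above are coupled.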
{ "cite_N": [ "@cite_14", "@cite_7" ], "mid": [ "2293824885", "1939575207" ], "abstract": [ "Hashing is a popular approximate nearest neighbor search approach for large-scale image retrieval. Supervised hashing, which incorporates similarity dissimilarity information on entity pairs to improve the quality of hashing function learning, has recently received increasing attention. However, in the existing supervised hashing methods for images, an input image is usually encoded by a vector of handcrafted visual features. Such hand-crafted feature vectors do not necessarily preserve the accurate semantic similarities of images pairs, which may often degrade the performance of hashing function learning. In this paper, we propose a supervised hashing method for image retrieval, in which we automatically learn a good image representation tailored to hashing as well as a set of hash functions. The proposed method has two stages. In the first stage, given the pairwise similarity matrix S over training images, we propose a scalable coordinate descent method to decompose S into a product of HHT where H is a matrix with each of its rows being the approximate hash code associated to a training image. In the second stage, we propose to simultaneously learn a good feature representation for the input images as well as a set of hash functions, via a deep convolutional network tailored to the learned hash codes in H and optionally the discrete class labels of the images. Extensive empirical evaluations on three benchmark datasets with different kinds of images show that the proposed method has superior performance gains over several state-of-the-art supervised and unsupervised hashing methods.", "Similarity-preserving hashing is a widely-used method for nearest neighbour search in large-scale image retrieval tasks. 
For most existing hashing methods, an image is first encoded as a vector of hand-engineering visual features, followed by another separate projection or quantization step that generates binary codes. However, such visual feature vectors may not be optimally compatible with the coding process, thus producing sub-optimal hashing codes. In this paper, we propose a deep architecture for supervised hashing, in which images are mapped into binary codes via carefully designed deep neural networks. The pipeline of the proposed deep architecture consists of three building blocks: 1) a sub-network with a stack of convolution layers to produce the effective intermediate image features; 2) a divide-and-encode module to divide the intermediate image features into multiple branches, each encoded into one hash bit; and 3) a triplet ranking loss designed to characterize that one image is more similar to the second image than to the third one. Extensive evaluations on several benchmark image datasets show that the proposed simultaneous feature learning and hash coding pipeline brings substantial improvements over other state-of-the-art supervised or unsupervised hashing methods." ] }
1602.07064
2274108697
In this work we present SIFT, a 3-step algorithm for the analysis of the structural information represented by means of a taxonomy. The major advantage of this algorithm is its capability to leverage the information inherent in the hierarchical structure of taxonomies to infer correspondences that allow merging them in a later step. This method is particularly relevant in scenarios where taxonomy alignment techniques exploiting textual information from taxonomy nodes cannot operate successfully.
The problem of aligning taxonomies has received much attention from the research community, since various knowledge-based applications, including clustering algorithms, browsing support interfaces, and recommendation systems, perform more effectively when they are supported by domain-describing taxonomies, which help to resolve ambiguities and provide context @cite_2 . Furthermore, this problem is of great interest in a number of application areas, especially in scientific @cite_13 , business @cite_14 @cite_9 , and web data integration @cite_16 @cite_6 .
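Several of the cited alignment systems (e.g., SCHEMA) match taxonomy nodes using Levenshtein edit distance over node labels. The sketch below shows that building block: a classic dynamic-programming edit distance and a normalized label similarity in [0, 1]. The `label_similarity` normalization is an illustrative assumption, not the exact scoring function of any cited method.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (two-row variant)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def label_similarity(a: str, b: str) -> float:
    """Normalized similarity in [0, 1] for taxonomy node labels (hypothetical scoring)."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a.lower(), b.lower()) / max(len(a), len(b))

print(label_similarity("Laptops", "Laptop"))       # near-identical labels score high
print(label_similarity("Laptops", "Smartphones"))  # dissimilar labels score low
```

Purely lexical matching like this fails when node labels are absent or uninformative, which is precisely the scenario that motivates structure-based approaches such as SIFT above.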
{ "cite_N": [ "@cite_14", "@cite_9", "@cite_6", "@cite_2", "@cite_16", "@cite_13" ], "mid": [ "61729836", "2100365109", "1572864191", "131296644", "1979891362", "1566392645" ], "abstract": [ "This paper proposes SCHEMA, an algorithm for automated mapping between heterogeneous product taxonomies in the e-commerce domain. SCHEMA utilises word sense disambiguation techniques, based on the ideas from the algorithm proposed by Lesk, in combination with the semantic lexicon WordNet. For finding candidate map categories and determining the path-similarity we propose a node matching function that is based on the Levenshtein distance. The final mapping quality score is calculated using the Damerau-Levenshtein distance and a node-dissimilarity penalty. The performance of SCHEMA was tested on three real-life datasets and compared with PROMPT and the algorithm proposed by Park & Kim. It is shown that SCHEMA improves considerably on both recall and F @math -score, while maintaining similar precision.", "We address the problem of unsupervised matching of schema information from a large number of data sources into the schema of a data warehouse. The matching process is the first step of a framework to integrate data feeds from third-party data providers into a structured-search engine's data warehouse. Our experiments show that traditional schema-based and instance-based schema matching methods fall short. We propose a new technique based on the search engine's clicklogs. Two schema elements are matched if the distribution of keyword queries that cause click-throughs on their instances are similar. We present experiments on large commercial datasets that show the new technique has much better accuracy than traditional techniques.", "We present a knowledge-rich methodology for disambiguating Wikipedia categories with WordNet synsets and using this semantic information to restructure a taxonomy automatically generated from the Wikipedia system of categories. 
We evaluate against a manual gold standard and show that both category disambiguation and taxonomy restructuring perform with high accuracy. Besides, we assess these methods on automatically generated datasets and show that we are able to effectively enrich WordNet with a large number of instances from Wikipedia. Our approach produces an integrated resource, thus bringing together the fine-grained classification of instances in Wikipedia and a well-structured top-level taxonomy from WordNet.", "A valve unit for a flowing medium is combined with an electrical measuring system to provide measurement of the rate of flow of the medium. The valve unit is connected to a propeller-type rotor disposed in a conduit which communicates with the valve. The rotor incorporates a permanent magnet in at least one vane thereof and the measuring system includes a magnetic pulse generator which senses the magnetic field produced by the magnet and generates a corresponding train of pulses during the rotation of the rotor. An amplifier-differentiator connected to the pulse generator produces an output in accordance with the derivative of the pulses so as to provide an indication of the rate of flow.", "To operate effectively, the Semantic Web must be able to make explicit the semantics of Web resources via ontologies, which software agents use to automatically process these resources. The Web's natural semantic heterogeneity presents problems, however - namely, redundancy and ambiguity. The authors' ontology matching, clustering, and disambiguation techniques aim to bridge the gap between syntax and semantics for Semantic Web construction. Their approach discovers and represents the intended meaning of words in Web applications in a nonredundant way, while considering the context in which those words appear.", "Resources located in digital libraries are labeled (or classified) based on taxonomies. 
On multiple digital libraries, however, heterogeneity between taxonomies is a serious problem for efficient interoperation processes (e.g., information sharing and query transformation). In order to overcome this problem, we propose a novel framework based on aligning taxonomies of digital libraries. Thereby, the best mapping between concepts has to be discovered to maximize the summation of a set of partial similarities. For experimentation, three digital libraries were built based on different taxonomies. Taxonomy alignment-based resource retrieval was evaluated by human experts, and we measured recall and precision measures retrieved by concept replacement strategy." ] }
1602.07064
2274108697
In this work we present SIFT, a 3-step algorithm for the analysis of the structural information represented by means of a taxonomy. The major advantage of this algorithm is its capability to leverage the information inherent in the hierarchical structure of taxonomies to infer correspondences that allow them to be merged in a later step. This method is particularly relevant in scenarios where taxonomy alignment techniques exploiting textual information from taxonomy nodes cannot operate successfully.
Taxonomy alignment techniques are able to detect taxonomy concepts that are equivalent. But when can we say that two concepts are equivalent? If we attend only to the text labels representing the concepts, we can find many everyday examples of label pairs that seem to denote equivalent concepts because they refer to the same real-world idea or object. However, it is well known that when taxonomies are used as knowledge sources, the way users perceive the degree of likeness between pairs of concepts is highly dependent on the domain being explored @cite_2 . Therefore, synonymy between text labels is not always an equivalence indicator, and it is necessary to focus on the context in which the concepts are being considered.
{ "cite_N": [ "@cite_2" ], "mid": [ "131296644" ], "abstract": [ "A valve unit for a flowing medium is combined with an electrical measuring system to provide measurement of the rate of flow of the medium. The valve unit is connected to a propeller-type rotor disposed in a conduit which communicates with the valve. The rotor incorporates a permanent magnet in at least one vane thereof and the measuring system includes a magnetic pulse generator which senses the magnetic field produced by the magnet and generates a corresponding train of pulses during the rotation of the rotor. An amplifier-differentiator connected to the pulse generator produces an output in accordance with the derivative of the pulses so as to provide an indication of the rate of flow." ] }
1602.07064
2274108697
In this work we present SIFT, a 3-step algorithm for the analysis of the structural information represented by means of a taxonomy. The major advantage of this algorithm is its capability to leverage the information inherent in the hierarchical structure of taxonomies to infer correspondences that allow them to be merged in a later step. This method is particularly relevant in scenarios where taxonomy alignment techniques exploiting textual information from taxonomy nodes cannot operate successfully.
Existing taxonomy alignment techniques focus on different dimensions of the problem, including whether data instances are used for matching @cite_8 , whether linguistic information and other auxiliary information are available @cite_5 , and whether the match is performed for complex structures @cite_15 . Our algorithm fits in this last category.
{ "cite_N": [ "@cite_5", "@cite_15", "@cite_8" ], "mid": [ "2088546564", "2156543375", "1592230611" ], "abstract": [ "Computing the semantic similarity between terms (or short text expressions) that have the same meaning but which are not lexicographically similar is an important challenge in the information integration field. The problem is that techniques for textual semantic similarity measurement often fail to deal with words not covered by synonym dictionaries. In this paper, we try to solve this problem by determining the semantic similarity for terms using the knowledge inherent in the search history logs from the Google search engine. To do this, we have designed and evaluated four algorithmic methods for measuring the semantic similarity between terms using their associated history search patterns. These algorithmic methods are: a) frequent co-occurrence of terms in search patterns, b) computation of the relationship between search patterns, c) outlier coincidence on search patterns, and d) forecasting comparisons. We have shown experimentally that some of these methods correlate well with respect to human judgment when evaluating general purpose benchmark datasets, and significantly outperform existing methods when evaluating datasets containing terms that do not usually appear in dictionaries.", "Matching elements of two data schemas or two data instances plays a key role in data warehousing, e-business, or even biochemical applications. In this paper we present a matching algorithm based on a fixpoint computation that is usable across different scenarios. The algorithm takes two graphs (schemas, catalogs, or other data structures) as input, and produces as output a mapping between corresponding nodes of the graphs. Depending on the matching goal, a subset of the mapping is chosen using filters. After our algorithm runs, we expect a human to check and if necessary adjust the results. 
As a matter of fact, we evaluate the 'accuracy' of the algorithm by counting the number of needed adjustments. We conducted a user study, in which our accuracy metric was used to estimate the labor savings that the users could obtain by utilizing our algorithm to obtain an initial matching. Finally, we illustrate how our matching algorithm is deployed as one of several high-level operators in an implemented testbed for managing information models and mappings.", "Ontology Matching aims to find the semantic correspondences between ontologies that belong to a single domain but that have been developed separately. However, there are still some problem areas to be solved, because experts are still needed to supervise the matching processes and an efficient way to reuse the alignments has not yet been found. We propose a novel technique named Reverse Ontology Matching, which aims to find the matching functions that were used in the original process. The use of these functions is very useful for aspects such as modeling behavior from experts, performing matching-by-example, reverse engineering existing ontology matching tools or compressing ontology alignment repositories. Moreover, the results obtained from a widely used benchmark dataset provide evidence of the effectiveness of this approach." ] }
1602.07064
2274108697
In this work we present SIFT, a 3-step algorithm for the analysis of the structural information represented by means of a taxonomy. The major advantage of this algorithm is its capability to leverage the information inherent in the hierarchical structure of taxonomies to infer correspondences that allow them to be merged in a later step. This method is particularly relevant in scenarios where taxonomy alignment techniques exploiting textual information from taxonomy nodes cannot operate successfully.
Algorithms implementing techniques for matching complex structures are mostly based on heuristics. Such heuristics consider, for example, that elements of two distinct taxonomies are similar if their direct sub-concepts, and/or their direct super-concepts, and/or their sibling concepts are similar @cite_1 . These structural techniques can be based on a fixed-point computation, like that proposed in @cite_0 , or can be cast as a satisfiability problem over a set of propositional formulas @cite_3 . There are also proposals to align taxonomies that are structurally asymmetric @cite_12 , and to create matching functions by composing various techniques so as to make the best use of the characteristics of the taxonomies @cite_1 .
{ "cite_N": [ "@cite_0", "@cite_1", "@cite_12", "@cite_3" ], "mid": [ "2139135093", "1275318253", "2397222190", "2121382148" ], "abstract": [ "Schema matching is a critical step in many applications, such as XML message mapping, data warehouse loading, and schema integration. In this paper, we investigate algorithms for generic schema matching, outside of any particular data model or application. We first present a taxonomy for past solutions, showing that a rich range of techniques is available. We then propose a new algorithm, Cupid, that discovers mappings between schema elements based on their names, data types, constraints, and schema structure, using a broader set of techniques than past approaches. Some of our innovations are the integrated use of linguistic and structural matching, context-dependent matching of shared types, and a bias toward leaf structure where much of the schema content resides. After describing our algorithm, we present experimental results that compare Cupid to two other schema matching systems.", "This paper deals with taxonomy alignment. It presents structural techniques of an alignment method suitable with a dissymmetry in the structure of the mapped taxonomies. The aim is to allow a uniform access to documents belonging to a same application domain, assuming retrieval of documents is supported by taxonomies.", "TaxoMap is an alignment tool which aims to discover rich correspondences between concepts (equivalence relations (isEq), subsumption relations (isA) and their inverse (isMoreGnl) or proximity relations (isClose)). It performs an oriented alignment (from a source to a target ontology) and takes into account labels and sub-class descriptions. 
This new implementation of TaxoMap uses a pattern-based approach implemented in the TaxoMap Framework helping an engineer to refine mappings to take into account specific conventions used in ontologies.", "Matching hierarchical structures, like taxonomies or web directories, is the premise for enabling interoperability among heterogenous data organizations. While the number of new matching solutions is increasing the evaluation issue is still open. This work addresses the problem of comparison for pairwise matching solutions. A methodology is proposed to overcome the issue of scalability. A large scale dataset is developed based on real world case study namely, the web directories of Google, Looksmart and Yahoo!. Finally, an empirical evaluation is performed which compares the most representative solutions for taxonomy matching. We argue that the proposed dataset can play a key role in supporting the empirical analysis for the research effort in the area of taxonomy matching." ] }
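The neighbourhood heuristic and the fixed-point formulation described in the paragraph above can be sketched in a few lines. The taxonomy encoding (parent maps), the best-match children aggregation, and the damping factor `ALPHA` are illustrative assumptions, not the algorithm of @cite_0 or any other cited tool:

```python
# Sketch of a fixpoint structural similarity between two taxonomies:
# a pair of nodes scores high when their labels match AND their
# parents / best-matching children also score high. ALPHA controls
# how much structural evidence counts relative to label evidence.

ALPHA = 0.5  # illustrative damping factor

def invert(parents):
    """Parent map (node -> parent, None for root) to children map."""
    children = {}
    for node, p in parents.items():
        if p is not None:
            children.setdefault(p, []).append(node)
    return children

def structural_similarity(parents_a, labels_a, parents_b, labels_b, iters=10):
    """Iteratively refine pairwise similarities between the two node sets."""
    children_a, children_b = invert(parents_a), invert(parents_b)
    # initialise from label evidence only
    sim = {(a, b): float(labels_a[a] == labels_b[b])
           for a in parents_a for b in parents_b}
    for _ in range(iters):
        new = {}
        for a, b in sim:
            label_ev = float(labels_a[a] == labels_b[b])
            neigh = []
            if parents_a[a] is not None and parents_b[b] is not None:
                neigh.append(sim[parents_a[a], parents_b[b]])
            ca, cb = children_a.get(a, []), children_b.get(b, [])
            if ca and cb:
                # average of each child's best-matching counterpart
                neigh.append(sum(max(sim[x, y] for y in cb) for x in ca) / len(ca))
            struct_ev = sum(neigh) / len(neigh) if neigh else 0.0
            new[a, b] = (1 - ALPHA) * label_ev + ALPHA * struct_ev
        sim = new
    return sim
```

On two toy taxonomies, a pair of nodes whose labels and surroundings both match converges to a higher score than a pair that agrees only on structure, which is the behaviour these heuristics aim for.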
1602.07064
2274108697
In this work we present SIFT, a 3-step algorithm for the analysis of the structural information represented by means of a taxonomy. The major advantage of this algorithm is its capability to leverage the information inherent in the hierarchical structure of taxonomies to infer correspondences that allow them to be merged in a later step. This method is particularly relevant in scenarios where taxonomy alignment techniques exploiting textual information from taxonomy nodes cannot operate successfully.
Despite such advances in matching technologies, taxonomy alignments based on linguistic and other auxiliary information are rarely perfect @cite_17 . In particular, imperfection can be due to homonyms (nodes with identical concept names but possibly different semantics) and synonyms (concepts with different names but the same semantics). The major advantage of purely structural matching techniques, by contrast, is that finding perfect alignments is possible in many cases.
{ "cite_N": [ "@cite_17" ], "mid": [ "1996675360" ], "abstract": [ "Computing the semantic similarity between terms (or short text expressions) that have the same meaning but which are not lexicographically similar is a key challenge in many computer related fields. The problem is that traditional approaches to semantic similarity measurement are not suitable for all situations, for example, many of them often fail to deal with terms not covered by synonym dictionaries or are not able to cope with acronyms, abbreviations, buzzwords, brand names, proper nouns, and so on. In this paper, we present and evaluate a collection of emerging techniques developed to avoid this problem. These techniques use some kinds of web intelligence to determine the degree of similarity between text expressions. These techniques implement a variety of paradigms including the study of co-occurrence, text snippet comparison, frequent pattern finding, or search log analysis. The goal is to substitute the traditional techniques where necessary." ] }
1602.06922
2274124280
We provide a variant of cross-polytope locality sensitive hashing with respect to angular distance which is provably optimal in asymptotic sensitivity and enjoys @math hash computation time. Building on a recent result (by Andoni, Indyk, Laarhoven, Razenshteyn, Schmidt, 2015), we show that optimal asymptotic sensitivity for cross-polytope LSH is retained even when the dense Gaussian matrix is replaced by a fast Johnson-Lindenstrauss transform followed by discrete pseudo-rotation, reducing the hash computation time from @math to @math . Moreover, our scheme achieves the optimal rate of convergence for sensitivity. By incorporating a low-randomness Johnson-Lindenstrauss transform, our scheme can be modified to require only @math random bits
Many of our results hinge on a careful analysis of the collision probabilities for the cross-polytope LSH scheme given in @cite_1 . Various ways to reduce the runtime of cross-polytope LSH, in particular fast, structured projection matrices, are also mentioned in @cite_2 . The authors there define a generalization of cross-polytope LSH that first projects to a low-dimensional subspace, but they never consider lifting back up to a high-dimensional space again. Johnson-Lindenstrauss transforms have previously been used in many approximate nearest neighbor algorithms (see @cite_4 , @cite_22 , @cite_9 , @cite_17 , @cite_8 , and @cite_6 , to name a few), primarily as a preprocessing step to speed up computations that depend on the dimension. LSH with p-stable distributions, introduced in @cite_3 , uses a random projection onto a single dimension; this was later generalized in @cite_20 to random projection onto @math dimensions, with the latter achieving the optimal exponent @math . We note that our scheme uses dimension reduction slightly differently, as an intermediate step before lifting the vectors back up to a different dimension.
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_8", "@cite_9", "@cite_1", "@cite_6", "@cite_3", "@cite_2", "@cite_20", "@cite_17" ], "mid": [ "2147717514", "2170605888", "", "2093813380", "2949388608", "2090836891", "2002359780", "", "2038276547", "2082042699" ], "abstract": [ "We present two algorithms for the approximate nearest neighbor problem in high-dimensional spaces. For data sets of size n living in R d , the algorithms require space that is only polynomial in n and d, while achieving query times that are sub-linear in n and polynomial in d. We also show applications to other high-dimensional geometric problems, such as the approximate minimum spanning tree. The article is based on the material from the authors' STOC'98 and FOCS'01 papers. It unifies, generalizes and simplifies the results from those papers.", "This paper concerns approximate nearest neighbor searching algorithms, which have become increasingly important, especially in high dimensional perception areas such as computer vision, with dozens of publications in recent years. Much of this enthusiasm is due to a successful new approximate nearest neighbor approach called Locality Sensitive Hashing (LSH). In this paper we ask the question: can earlier spatial data structure approaches to exact nearest neighbor, such as metric trees, be altered to provide approximate answers to proximity queries and if so, how? We introduce a new kind of metric tree that allows overlap: certain datapoints may appear in both the children of a parent. We also introduce new approximate k-NN search algorithms on this structure. We show why these structures should be able to exploit the same random-projection-based approximations that LSH enjoys, but with a simpler algorithm and perhaps with greater efficiency. We then provide a detailed empirical evaluation on five large, high dimensional datasets which show up to 31-fold accelerations over LSH. 
This result holds true throughout the spectrum of approximation levels.", "", "We introduce a new low-distortion embedding of @math into @math ( @math ) called the fast Johnson-Lindenstrauss transform (FJLT). The FJLT is faster than standard random projections and just as easy to implement. It is based upon the preconditioning of a sparse projection matrix with a randomized Fourier transform. Sparse random projections are unsuitable for low-distortion embeddings. We overcome this handicap by exploiting the “Heisenberg principle” of the Fourier transform, i.e., its local-global duality. The FJLT can be used to speed up search algorithms based on low-distortion embeddings in @math and @math . We consider the case of approximate nearest neighbors in @math . We provide a faster algorithm using classical projections, which we then speed up further by plugging in the FJLT. We also give a faster algorithm for searching over the hypercube.", "We show the existence of a Locality-Sensitive Hashing (LSH) family for the angular distance that yields an approximate Near Neighbor Search algorithm with the asymptotically optimal running time exponent. Unlike earlier algorithms with this property (e.g., Spherical LSH [Andoni, Indyk, Nguyen, Razenshteyn 2014], [Andoni, Razenshteyn 2015]), our algorithm is also practical, improving upon the well-studied hyperplane LSH [Charikar, 2002] in practice. We also introduce a multiprobe version of this algorithm, and conduct experimental evaluation on real and synthetic data sets. We complement the above positive results with a fine-grained lower bound for the quality of any LSH family for angular distance. 
Our lower bound implies that the above LSH family exhibits a trade-off between evaluation time and quality that is close to optimal for a natural class of LSH functions.", "Locality-sensitive hashing (LSH) is a basic primitive in several large-scale data processing applications, including nearest-neighbor search, de-duplication, clustering, etc. In this paper we propose a new and simple method to speed up the widely-used Euclidean realization of LSH. At the heart of our method is a fast way to estimate the Euclidean distance between two d-dimensional vectors; this is achieved by the use of randomized Hadamard transforms in a non-linear setting. This decreases the running time of a (k, L)-parameterized LSH from O(dkL) to O(dlog d + kL). Our experiments show that using the new LSH in nearest-neighbor applications can improve their running times by significant amounts. To the best of our knowledge, this is the first running time improvement to LSH that is both provable and practical.", "Given a metric space @math , @math , @math , and @math , a distribution over mappings @math is called a @math -sensitive hash family if any two points in @math at distance at most @math are mapped by @math to the same value with probability at least @math , and any two points at distance greater than @math are mapped by @math to the same value with probability at most @math . This notion was introduced by Indyk and Motwani in 1998 as the basis for an efficient approximate nearest neighbor search algorithm and has since been used extensively for this purpose. The performance of these algorithms is governed by the parameter @math , and constructing hash families with small @math automatically yields improved nearest neighbor algorithms. Here we show that for @math it is impossible to achieve @math . 
This almost matches the construction of Indyk and Motwani which achieves @math .", "", "In this article, we give an overview of efficient algorithms for the approximate and exact nearest neighbor problem. The goal is to preprocess a dataset of objects (e.g., images) so that later, given a new query object, one can quickly return the dataset object that is most similar to the query. The problem is of significant interest in a wide variety of areas.", "We present a randomized algorithm for the approximate nearest neighbor problem in d-dimensional Euclidean space. Given N points xj in , the algorithm attempts to find k nearest neighbors for each of xj, where k is a user-specified integer parameter. The algorithm is iterative, and its running time requirements are proportional to T·N·(d·(log d) + k·(d + log k)·(log N)) + N·k2·(d + log k), with T the number of iterations performed. The memory requirements of the procedure are of the order N·(d + k). A by-product of the scheme is a data structure, permitting a rapid search for the k nearest neighbors among xj for an arbitrary point . The cost of each such query is proportional to T·(d·(log d) + log(N k)·k·(d + log k)), and the memory requirements for the requisite data structure are of the order N·(d + k) + T·(d + N). The algorithm utilizes random rotations and a basic divide-and-conquer scheme, followed by a local graph search. We analyze the scheme’s behavior for certain types of distributions of xj and illustrate its performance via several numerical examples." ] }
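As a rough illustration of the hash family discussed above, here is a minimal sketch of a single cross-polytope hash in the dense-Gaussian variant. The contribution described in the abstract is precisely to replace the dense matrix `G` with a fast Johnson-Lindenstrauss transform followed by a discrete pseudo-rotation; that speedup is not reproduced here, and the dimensions are illustrative:

```python
import numpy as np

def cross_polytope_hash(x, G):
    """Hash x to the signed standard basis vector closest to Gx,
    encoded as (coordinate index, sign)."""
    y = G @ x
    i = int(np.argmax(np.abs(y)))
    return i, (1 if y[i] > 0 else -1)

rng = np.random.default_rng(0)
d = 16
G = rng.standard_normal((d, d))  # dense Gaussian rotation (the slow variant)

x = rng.standard_normal(d)
x /= np.linalg.norm(x)
x_near = x + 1e-3 * rng.standard_normal(d)  # a point at tiny angular distance
x_near /= np.linalg.norm(x_near)
# x and x_near map to the same signed coordinate with high probability
```

The hash is invariant under positive scaling and flips sign under negation, so it depends only on the direction of the input, which is why it is a natural family for angular distance.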
1602.06922
2274124280
We provide a variant of cross-polytope locality sensitive hashing with respect to angular distance which is provably optimal in asymptotic sensitivity and enjoys @math hash computation time. Building on a recent result (by Andoni, Indyk, Laarhoven, Razenshteyn, Schmidt, 2015), we show that optimal asymptotic sensitivity for cross-polytope LSH is retained even when the dense Gaussian matrix is replaced by a fast Johnson-Lindenstrauss transform followed by discrete pseudo-rotation, reducing the hash computation time from @math to @math . Moreover, our scheme achieves the optimal rate of convergence for sensitivity. By incorporating a low-randomness Johnson-Lindenstrauss transform, our scheme can be modified to require only @math random bits
Similar dimension reduction techniques have been used in @cite_13 , where the data is sparsified and then a random projection matrix is applied. The authors exploit the fact that the random projection matrix will have the restricted isometry property, which preserves pairwise distances between any two sparse vectors. This result is notable in that the reduced dimension has no dependence on @math , the number of points. See section for more discussion.
{ "cite_N": [ "@cite_13" ], "mid": [ "2093480133" ], "abstract": [ "Extremal Combinatorics is one of the central areas in Discrete Mathematics. It deals with problems that are often motivated by questions arising in other areas, including Theoretical Computer Science, Geometry and Game Theory. This paper contains a collection of problems and results in the area, including solutions or partial solutions to open problems suggested by various researchers. The topics considered here include questions in Extremal Graph Theory, Polyhedral Combinatorics and Probabilistic Combinatorics. This is not meant to be a comprehensive survey of the area, it is merely a collection of various extremal problems, which are hopefully interesting. The choice of the problems is inevitably biased, and as the title of the paper suggests, it is a sequel to a previous paper [N. Alon, Problems and results in extremal combinatorics-I, Discrete Math. 273 (2003), 31-53.] of the same flavor, and hopefully a predecessor of another related future paper. Each section of this paper is essentially self contained, and can be read separately." ] }
1602.06703
2952733306
In social robotics, robots need to be able to be understood by humans, especially in collaborative tasks where they have to share mutual knowledge. For instance, in an educative scenario, learners share their knowledge and must adapt their behaviour in order to make sure they are understood by others. Learners display behaviours in order to show their understanding, and teachers adapt in order to make sure that the learners' knowledge is the required one. This ability requires a model of one's own mental states as perceived by others. In this paper, we discuss the importance of a cognitive architecture enabling second-order Mutual Modelling for Human-Robot Interaction in educative contexts.
A large number of fields have introduced frameworks to describe the mutual modelling ability @cite_2 . In developmental psychology, Flavell @cite_6 distinguishes two levels of perspective taking: the level of cognitive connections (I see, I hear, I want, I like...) and the level of mental representations (what other agents feel, hear, want...).
{ "cite_N": [ "@cite_6", "@cite_2" ], "mid": [ "2011348940", "2085705051" ], "abstract": [ "Abstract In this article we propose an account of the early development of children's knowledge about the mind and report two studies designed to test a part of it. According to this “connections-representations” account, young children begin their discovery of the mental world by learning that they and other people have internal experiences or mental states that connect them cognitively to external objects and events-experiences such as seeing them, hearing them, and wanting them. Later, they realize that the same object can be seriously (other than in pretense) mentally represented in different, seemingly contradictory ways: for example, as A in appearance, B in reality, C according to their perceptual or conceptual perspective, and D according to another person's. The results of both studies confirmed the prediction that 3-year-olds would perform well on appearance-reality and perceptual perspective-taking tasks requiring only an understanding of cognitive connections, but perform poorly on tasks requiring an understanding of seemingly-contrary-to-fact mental representations. To illustrate, children of this age had little difficulty determining that they could hear but could not see a noise-making object located on the experimenter's side of a barrier, and that the experimenter could see it (connections-level tasks). In contrast, they were largely unable to say, for instance, that a toy bear held behind a large elephant mask and emitting a cat sound looked like an elephant, sounded like a cat, and really was a toy bear (representations-level tasks)—even though the experimenter had actually told them previously what it looked like, sounded like, and really was. 
The article concludes with speculations about the possible origins of connections and representations knowledge and observations about the significance of these acquisitions for the child's development.", "Mutual modelling, the reciprocal ability to establish a mental model of the other, plays a fundamental role in human interactions. This complex cognitive skill is however difficult to fully apprehend as it encompasses multiple neuronal, psychological and social mechanisms that are generally not easily turned into computational models suitable for robots. This article presents several perspectives on mutual modelling from a range of disciplines, and reflects on how these perspectives can be beneficial to the advancement of social cognition in robotics. We gather here both basic tools (concepts, formalisms, models) and exemplary experimental settings and methods that are of relevance to robotics. This contribution is expected to consolidate the corpus of knowledge readily available to human-robot interaction research, and to foster interest for this fundamentally cross-disciplinary field." ] }
1602.06703
2952733306
In social robotics, robots need to be able to be understood by humans, especially in collaborative tasks where they have to share mutual knowledge. For instance, in an educative scenario, learners share their knowledge and must adapt their behaviour in order to make sure they are understood by others. Learners display behaviours in order to show their understanding, and teachers adapt in order to make sure that the learners' knowledge is the required one. This ability requires a model of one's own mental states as perceived by others. In this paper, we discuss the importance of a cognitive architecture enabling second-order Mutual Modelling for Human-Robot Interaction in educative contexts.
Mutual modelling has also been studied in educational contexts. Roschelle and Teasley @cite_14 suggested that collaborative learning requires a shared understanding of the task and of the shared information needed to solve it. The term "mutual modelling" was introduced in Computer-Supported Collaborative Learning (CSCL) by Dillenbourg @cite_21 , where it focused on the knowledge states of agents. In @cite_7 , Dillenbourg developed a computational framework to represent mutual modelling situations.
{ "cite_N": [ "@cite_14", "@cite_21", "@cite_7" ], "mid": [ "1551289443", "2122575992", "2005936898" ], "abstract": [ "This paper focuses on the processes involved in collaboration using a microanalysis of one dyad’s work with a computer-based environment (the Envisioning Machine). The interaction between participants is analysed with respect to a ‘Joint Problem Space’, which comprises an emergent, socially-negotiated set of knowledge elements, such as goals, problem state descriptions and problem solving actions. Our analysis shows how this shared conceptual space is constructed through the external mediational framework of shared language, situation and activity. This approach has particular implications for understanding how the benefits of collaboration are realised and serves to clarify the possible roles of the computers in supporting collaborative learning.", "This book arises from a series of workshops on collaborative learning, that gathered together 20 scholars from the disciplines of psychology, education and computer science. The series was part of a research program entitled 'Learning in Humans and Machines' (LHM), launched by Peter Reimann and Hans Spada, and funded by the European Science Foundation. This program aimed to develop a multidisciplinary dialogue on learning, involving mainly scholars from cognitive psychology, educational science, and artificial intelligence (including machine learning). During the preparation of the program, Agnes Blaye, Claire O'Malley, Michael Baker and I developed a theme on collaborative learning. When the program officially began, 12 members were selected to work on this theme and formed the so-called 'task force 5'. I became the coordinator of the group. This group organised two workshops, in Sitges (Spain, 1994) and Aix-en-Provence (France, 1995). In 1996, the group was enriched with new members to reach its final size. 
Around 20 members met in the subsequent workshops, at Samoens (France, 1996), Houthalen (Belgium, 1996) and Mannheim (Germany, 1997). Several individuals joined the group for some time but have not written a chapter. I would nevertheless like to acknowledge their contributions to our activities: George Bilchev, Stevan Harnad, Calle Jansson and Claire O'Malley.", "It has been hypothesized that collaborative learning is related to the cognitive effort made by co-learners to build a shared understanding. The process of constructing this shared understanding requires that each team member builds some kind of representation of the behavior, beliefs, knowledge or intentions of other group members. In two empirical studies, we measured the accuracy of the mutual model, i.e. the difference between what A believes B knows, has done or intends to do and what B actually knows, has done or intends to do. In both studies, we found a significant correlation between the accuracy of A's model of B and the accuracy of B's model of A. This leads us to think that the process of modeling one's partners does not simply reflect individual attitudes or skills but emerges as a property of group interactions. We describe on-going studies that explore these preliminary results." ] }
1602.06703
2952733306
In social robotics, robots need to be understood by humans, especially in collaborative tasks where they have to share mutual knowledge. For instance, in an educative scenario, learners share their knowledge and must adapt their behaviour in order to make sure they are understood by others. Learners display behaviours in order to show their understanding, and teachers adapt in order to make sure that the learners' knowledge is the required one. This ability requires a model of their own mental states as perceived by others. In this paper, we discuss the importance of a cognitive architecture enabling second-order Mutual Modelling for Human-Robot Interaction in educative contexts.
However, HRI research has not, until now, explored the full potential of mutual modelling. In @cite_18 , Scassellati argued for the importance of implementing Leslie's and Baron-Cohen's theories of mind as an ability for robots. He focused his work on attention and perceptual processes (face detection or colour saliency detection). Thereafter, several works (including Breazeal @cite_9 , Trafton @cite_0 , Ros @cite_20 and Lemaignan @cite_3 ) were conducted to implement Flavell's first level of perspective taking @cite_13 , an ability that is still limited to visual perception.
{ "cite_N": [ "@cite_18", "@cite_9", "@cite_3", "@cite_0", "@cite_13", "@cite_20" ], "mid": [ "2149234824", "1995599110", "", "2167150807", "2440172120", "2120601961" ], "abstract": [ "If we are to build human-like robots that can interact naturally with people, our robots must know not only about the properties of objects but also the properties of animate agents in the world. One of the fundamental social skills for humans is the attribution of beliefs, goals, and desires to other people. This set of skills has often been called a “theory of mind.” This paper presents the theories of Leslie (1994) and Baron-Cohen (1995) on the development of theory of mind in human children and discusses the potential application of both of these theories to building robots with similar capabilities. Initial implementation details and basic skills (such as finding faces and eyes and distinguishing animate from inanimate stimuli) are introduced. I further speculate on the usefulness of a robotic implementation in evaluating and comparing these two models.", "Abstract This paper addresses an important issue in learning from demonstrations that are provided by “naive” human teachers—people who do not have expertise in the machine learning algorithms used by the robot. We therefore entertain the possibility that, whereas the average human user may provide sensible demonstrations from a human’s perspective, these same demonstrations may be insufficient, incomplete, ambiguous, or otherwise “flawed” from the perspective of the training set needed by the learning algorithm to generalize properly. To address this issue, we present a system where the robot is modeled as a socially engaged and socially cognitive learner. 
We illustrate the merits of this approach through an example where the robot is able to correctly learn from “flawed” demonstrations by taking the visual perspective of the human instructor to clarify potential ambiguities.", "", "We propose that an important aspect of human-robot interaction is perspective-taking. We show how perspective-taking occurs in a naturalistic environment (astronauts working on a collaborative project) and present a cognitive architecture for performing perspective-taking called Polyscheme. Finally, we show a fully integrated system that instantiates our theoretical framework within a working robot system. Our system successfully solves a series of perspective-taking problems and uses the same frames of references that astronauts do to facilitate collaborative problem solving with a person.", "", "Humans constantly generate and solve ambiguities while interacting with each other in their every day activities. Hence, having a robot that is able to solve ambiguous situations is essential if we aim at achieving a fluent and acceptable human-robot interaction. We propose a strategy that combines three mechanisms to clarify ambiguous situations generated by the human partner. We implemented our approach and successfully performed validation tests in several different situations both, in simulation and with the HRP-2 robot." ] }
1602.06703
2952733306
In social robotics, robots need to be understood by humans, especially in collaborative tasks where they have to share mutual knowledge. For instance, in an educative scenario, learners share their knowledge and must adapt their behaviour in order to make sure they are understood by others. Learners display behaviours in order to show their understanding, and teachers adapt in order to make sure that the learners' knowledge is the required one. This ability requires a model of their own mental states as perceived by others. In this paper, we discuss the importance of a cognitive architecture enabling second-order Mutual Modelling for Human-Robot Interaction in educative contexts.
Breazeal @cite_5 and Warnier @cite_19 reproduced Wimmer's Sally and Anne test @cite_8 with robots able to perform visual perspective taking. The robot could infer the knowledge of a human given the history of their visual experience.
{ "cite_N": [ "@cite_19", "@cite_5", "@cite_8" ], "mid": [ "2057401894", "2162104245", "2139188400" ], "abstract": [ "We have designed and implemented new spatio-temporal reasoning skills for a cognitive robot, which explicitly reasons about human beliefs on object positions. It enables the robot to build symbolic models reflecting each agent's perspective on the world. Using these models, the robot has a better understanding of what humans say and do, and is able to reason on what human should know to achieve a given goal. These new capabilities are also demonstrated experimentally.", "Future applications for personal robots motivate research into developing robots that are intelligent in their interactions with people. Toward this goal, in this paper we present an integrated socio-cognitive architecture to endow an anthropomorphic robot with the ability to infer mental states such as beliefs, intents, and desires from the observable behavior of its human partner. The design of our architecture is informed by recent findings from neuroscience and embodies cognition that reveals how living systems leverage their physical and cognitive embodiment through simulation-theoretic mechanisms to infer the mental states of others. We assess the robot's mindreading skills on a suite of benchmark tasks where the robot interacts with a human partner in a cooperative scenario and a learning scenario. In addition, we have conducted human subjects experiments using the same task scenarios to assess human performance on these tasks and to compare the robot's performance with that of people. In the process, our human subject studies also reveal some interesting insights into human behavior.", "‘A travelling salesman found himself spending the night at home with his wife when one of his trips was unexpectedly cancelled. The two of them were sound asleep, when in the middle of the night there was a loud knock at the front door. 
The wife woke up with a start and cried out, ‘Oh, my God! It’s my husband!* Whereupon the husband leapt out from the bed, ran across the room and jumped out the window.’ Schank and Abelson, 1977, p. 59." ] }
1602.06819
2278624983
In this paper we propose an online approximate k-nn graph building algorithm, which is able to quickly update a k-nn graph using a flow of data points. One very important step of the algorithm consists in using the current distributed graph to search for the neighbors of a new node. Hence we also propose a distributed partitioning method based on balanced k-medoids clustering, which we use to optimize the distributed search process. Finally, we present the improved sequential search procedure that is used inside each partition. We also perform an experimental evaluation of the different algorithms, where we study the influence of the parameters and compare the results of our algorithms to the existing state of the art. This experimental evaluation confirms that the fast online k-nn graph building algorithm produces a graph that is highly similar to the graph produced by an offline exhaustive algorithm, while requiring fewer similarity computations.
Another approach is to use some kind of index to speed up nearest neighbor search. These techniques usually rely on the branch-and-bound algorithm, with the index used to partition the data space. For example, a @math - @math tree, which recursively partitions the space into equally sized sub-spaces, can be used to speed up neighbor search @cite_0 . R-trees @cite_11 can also be used for Euclidean spaces. In the case of generic metric spaces, vantage-point trees @cite_5 , also known as metric trees @cite_16 , and BK-trees can be used. However, these approaches are hard to implement in parallel on a shared-nothing architecture like MapReduce (MR) or Spark. In @cite_8 , for example, the authors present a distributed @math -nn graph building algorithm, but use a shared-memory architecture to store a kd-tree based index.
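To illustrate the branch-and-bound search that such indexes enable, here is a minimal kd-tree sketch in Python. It is not the structure used by any of the cited works, and the helper names `build_kdtree` and `nn_search` are hypothetical:

```python
def build_kdtree(points, depth=0):
    """Recursively split the points along alternating axes at the median."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "axis": axis,
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def nn_search(node, query, best=None):
    """Branch-and-bound descent: a subtree is pruned when the splitting
    hyperplane is farther away than the best squared distance found so far."""
    if node is None:
        return best
    dist = sum((a - b) ** 2 for a, b in zip(node["point"], query))
    if best is None or dist < best[0]:
        best = (dist, node["point"])
    axis = node["axis"]
    diff = query[axis] - node["point"][axis]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nn_search(near, query, best)
    if diff ** 2 < best[0]:  # hyperplane closer than current best: check other side
        best = nn_search(far, query, best)
    return best
```

The pruning step is exactly what makes the traversal order-dependent and thus hard to parallelize on a shared-nothing architecture.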
{ "cite_N": [ "@cite_8", "@cite_0", "@cite_5", "@cite_16", "@cite_11" ], "mid": [ "2161854574", "", "1970995121", "2049644877", "2118269922" ], "abstract": [ "We present a parallel algorithm for k-nearest neighbor graph construction that uses Morton ordering. Experiments show that our approach has the following advantages over existing methods: 1) faster construction of k-nearest neighbor graphs in practice on multicore machines, 2) less space usage, 3) better cache efficiency, 4) ability to handle large data sets, and 5) ease of parallelization and implementation. If the point set has a bounded expansion constant, our algorithm requires one-comparison-based parallel sort of points, according to Morton order plus near-linear additional steps to output the k-nearest neighbor graph.", "", "For some multimedia applications, it has been found that domain objects cannot be represented as feature vectors in a multidimensional space. Instead, pair-wise distances between data objects are the only input. To support content-based retrieval, one approach maps each object to a k-dimensional (k-d) point and tries to preserve the distances among the points. Then, existing spatial access index methods such as the R-trees and KD-trees can support fast searching on the resulting k-d points. However, information loss is inevitable with such an approach since the distances between data objects can only be preserved to a certain extent. Here we investigate the use of a distance-based indexing method. In particular, we apply the vantage point tree (vp-tree) method. There are two important problems for the vp-tree method that warrant further investigation, the n-nearest neighbors search and the updating mechanisms. We study an n-nearest neighbors search algorithm for the vp-tree, which is shown by experiments to scale up well with the size of the dataset and the desired number of nearest neighbors, n. 
Experiments also show that the searching in the vp-tree is more efficient than that for the @math -tree and the M-tree. Next, we propose solutions for the update problem for the vp-tree, and show by experiments that the algorithms are efficient and effective. Finally, we investigate the problem of selecting vantage-point, propose a few alternative methods, and study their impact on the number of distance computation.", "Abstract Divide-and-conquer search strategies are described for satisfying proximity queries involving arbitrary distance metrics.", "In order to handle spatial data efficiently, as required in computer aided design and geo-data applications, a database system needs an index mechanism that will help it retrieve data items quickly according to their spatial locations However, traditional indexing methods are not well suited to data objects of non-zero size located m multi-dimensional spaces In this paper we describe a dynamic index structure called an R-tree which meets this need, and give algorithms for searching and updating it. We present the results of a series of tests which indicate that the structure performs well, and conclude that it is useful for current database systems in spatial applications" ] }
1602.06819
2278624983
In this paper we propose an online approximate k-nn graph building algorithm, which is able to quickly update a k-nn graph using a flow of data points. One very important step of the algorithm consists in using the current distributed graph to search for the neighbors of a new node. Hence we also propose a distributed partitioning method based on balanced k-medoids clustering, which we use to optimize the distributed search process. Finally, we present the improved sequential search procedure that is used inside each partition. We also perform an experimental evaluation of the different algorithms, where we study the influence of the parameters and compare the results of our algorithms to the existing state of the art. This experimental evaluation confirms that the fast online k-nn graph building algorithm produces a graph that is highly similar to the graph produced by an offline exhaustive algorithm, while requiring fewer similarity computations.
A different and versatile algorithm to efficiently compute an approximate @math -nn graph is described in @cite_7 . The algorithm, called nn-Descent, starts by creating edges between random nodes. Then, for each node, it computes the similarity between the node and the neighbors of its current neighbors, searching for better edges. The algorithm iterates until it can no longer find better edges. The main advantage of this algorithm is that it works with any similarity measure.
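A minimal sketch of the nn-Descent idea is given below. It omits the reverse-neighbor and sampling optimizations of the published algorithm, and the function name is hypothetical:

```python
import random

def nn_descent(items, similarity, k, max_iters=10):
    """Start from random neighbors, then repeatedly compare each item with
    its neighbors' neighbors and keep the k most similar ones."""
    n = len(items)
    # initialise each node with k random neighbors (excluding itself)
    graph = {i: random.sample([j for j in range(n) if j != i], k)
             for i in range(n)}
    for _ in range(max_iters):
        updated = False
        for i in range(n):
            # candidates: current neighbors plus neighbors of neighbors
            candidates = set(graph[i])
            for j in graph[i]:
                candidates.update(graph[j])
            candidates.discard(i)
            best = sorted(candidates,
                          key=lambda j: similarity(items[i], items[j]),
                          reverse=True)[:k]
            if set(best) != set(graph[i]):
                graph[i] = best
                updated = True
        if not updated:  # converged: no edge improved during this pass
            break
    return graph
```

Note that `similarity` can be any function of two items, which is the versatility mentioned above.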
{ "cite_N": [ "@cite_7" ], "mid": [ "2110026675" ], "abstract": [ "K-Nearest Neighbor Graph (K-NNG) construction is an important operation with many web related applications, including collaborative filtering, similarity search, and many others in data mining and machine learning. Existing methods for K-NNG construction either do not scale, or are specific to certain similarity measures. We present NN-Descent, a simple yet efficient algorithm for approximate K-NNG construction with arbitrary similarity measures. Our method is based on local search, has minimal space overhead and does not rely on any shared global index. Hence, it is especially suitable for large-scale applications where data structures need to be distributed over the network. We have shown with a variety of datasets and similarity measures that the proposed method typically converges to above 90 recall with each point comparing only to several percent of the whole dataset on average." ] }
1602.06819
2278624983
In this paper we propose an online approximate k-nn graph building algorithm, which is able to quickly update a k-nn graph using a flow of data points. One very important step of the algorithm consists in using the current distributed graph to search for the neighbors of a new node. Hence we also propose a distributed partitioning method based on balanced k-medoids clustering, which we use to optimize the distributed search process. Finally, we present the improved sequential search procedure that is used inside each partition. We also perform an experimental evaluation of the different algorithms, where we study the influence of the parameters and compare the results of our algorithms to the existing state of the art. This experimental evaluation confirms that the fast online k-nn graph building algorithm produces a graph that is highly similar to the graph produced by an offline exhaustive algorithm, while requiring fewer similarity computations.
In @cite_14 , the authors proposed a new sequential approximate NN search algorithm that relies on @math -nn graphs. The algorithm, called Graph Nearest Neighbor Search (GNNS), works by selecting initial nodes at random. For each node, the algorithm computes the similarity between the query point and every neighbor. The most similar neighbors are selected, and the algorithm iterates until a search depth @math is reached. It is thus a "hill climbing" algorithm: the most promising nodes are searched first, using the similarity between the query point and the node as a heuristic.
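The hill-climbing search can be sketched as follows. This is a simplified reading of GNNS: the restart and depth parameters mirror the random starts and search depth described above, but the exact bookkeeping of the published algorithm is not reproduced:

```python
import random

def gnns_search(graph, items, similarity, query, restarts=5, depth=10):
    """From a random start node, repeatedly move to the neighbor most
    similar to the query; keep the best node over several restarts."""
    best_node, best_sim = None, float("-inf")
    for _ in range(restarts):
        node = random.choice(list(graph))
        for _ in range(depth):
            # evaluate the query against every neighbor of the current node
            nxt = max(graph[node], key=lambda j: similarity(items[j], query))
            if similarity(items[nxt], query) <= similarity(items[node], query):
                break  # local optimum reached
            node = nxt
        sim = similarity(items[node], query)
        if sim > best_sim:
            best_node, best_sim = node, sim
    return best_node
```

The restarts reduce the risk of getting stuck in a local optimum of the greedy climb.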
{ "cite_N": [ "@cite_14" ], "mid": [ "17346433" ], "abstract": [ "We introduce a new nearest neighbor search algorithm. The algorithm builds a nearest neighbor graph in an offline phase and when queried with a new point, performs hill-climbing starting from a randomly sampled node of the graph. We provide theoretical guarantees for the accuracy and the computational complexity and empirically show the effectiveness of this algorithm." ] }
1602.06819
2278624983
In this paper we propose an online approximate k-nn graph building algorithm, which is able to quickly update a k-nn graph using a flow of data points. One very important step of the algorithm consists in using the current distributed graph to search for the neighbors of a new node. Hence we also propose a distributed partitioning method based on balanced k-medoids clustering, which we use to optimize the distributed search process. Finally, we present the improved sequential search procedure that is used inside each partition. We also perform an experimental evaluation of the different algorithms, where we study the influence of the parameters and compare the results of our algorithms to the existing state of the art. This experimental evaluation confirms that the fast online k-nn graph building algorithm produces a graph that is highly similar to the graph produced by an offline exhaustive algorithm, while requiring fewer similarity computations.
Multiple algorithms exist to perform graph partitioning. In @cite_3 , the authors proposed a distributed iterative algorithm that repeatedly swaps the partitions of two nodes to minimize the number of cuts. The algorithm relies heavily on MPI and requires a lot of communication between all nodes of the graph. In @cite_2 , the authors proposed and tested a Bulk Synchronous Parallel (BSP) version of the algorithm, which makes it suitable for shared-nothing architectures like Apache Spark. In @cite_4 , the authors proposed a streaming algorithm that requires a single iteration to partition the graph. They experimentally compared various heuristics for assigning nodes to a partition and found that the best-performing heuristic was linear weighted deterministic greedy, which assigns each node to the partition where it has the most edges, weighted by a linear penalty function based on the capacity of the partition.
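The linear weighted deterministic greedy heuristic can be sketched as follows. This is a simplified, single-machine reading; in particular, breaking ties toward the least loaded partition is our assumption, not necessarily the published rule:

```python
def ldg_partition(stream, k, capacity):
    """Assign each arriving node to the partition holding most of its
    neighbors, discounted by a linear penalty on how full that partition is.
    `stream` yields (node, neighbors) pairs."""
    partitions = [set() for _ in range(k)]
    assignment = {}
    for node, neighbours in stream:
        neighbours = set(neighbours)

        def score(p):
            # neighbors already in partition p, discounted as p fills up
            return len(partitions[p] & neighbours) * (1 - len(partitions[p]) / capacity)

        # prefer the highest score; break ties toward the least loaded partition
        p = max(range(k), key=lambda p: (score(p), -len(partitions[p])))
        partitions[p].add(node)
        assignment[node] = p
    return assignment
```

A single pass over the stream suffices, which is what makes the heuristic attractive at load time.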
{ "cite_N": [ "@cite_4", "@cite_3", "@cite_2" ], "mid": [ "1971630691", "2128479026", "185000700" ], "abstract": [ "Extracting knowledge by performing computations on graphs is becoming increasingly challenging as graphs grow in size. A standard approach distributes the graph over a cluster of nodes, but performing computations on a distributed graph is expensive if large amount of data have to be moved. Without partitioning the graph, communication quickly becomes a limiting factor in scaling the system up. Existing graph partitioning heuristics incur high computation and communication cost on large graphs, sometimes as high as the future computation itself. Observing that the graph has to be loaded into the cluster, we ask if the partitioning can be done at the same time with a lightweight streaming algorithm. We propose natural, simple heuristics and compare their performance to hashing and METIS, a fast, offline heuristic. We show on a large collection of graph datasets that our heuristics are a significant improvement, with the best obtaining an average gain of 76 . The heuristics are scalable in the size of the graphs and the number of partitions. Using our streaming partitioning methods, we are able to speed up PageRank computations on Spark, a distributed computation system, by 18 to 39 for large social networks.", "Balanced graph partitioning is a well known NP-complete problem with a wide range of applications. These applications include many large-scale distributed problems including the optimal storage of large sets of graph-structured data over several hosts-a key problem in today's Cloud infrastructure. However, in very large-scale distributed scenarios, state-of-the-art algorithms are not directly applicable, because they typically involve frequent global operations over the entire graph. In this paper, we propose a fully distributed algorithm, called JA-BE-JA, that uses local search and simulated annealing techniques for graph partitioning. 
The algorithm is massively parallel: there is no central coordination, each node is processed independently, and only the direct neighbors of the node, and a small subset of random nodes in the graph need to be known locally. Strict synchronization is not required. These features allow JA-BE-JA to be easily adapted to any distributed graph-processing system from data centers to fully distributed networks. We perform a thorough experimental analysis, which shows that the minimal edge-cut value achieved by JA-BE-JA is comparable to state-of-the-art centralized algorithms such as METIS. In particular, on large social networks JA-BEJA outperforms METIS, which makes JA-BE-JA-a bottom-up, self-organizing algorithm-a highly competitive practical solution for graph partitioning.", "A significant part of the data produced every day by online services is structured as a graph. Therefore, there is the need for efficient processing and analysis solutions for large scale graphs. Among the others, the balanced graph partitioning is a well known NP-complete problem with a wide range of applications. Several solutions have been proposed so far, however most of the existing state-of-the-art algorithms are not directly applicable in very large-scale distributed scenarios. A recently proposed promising alternative exploits a vertex-center heuristics to solve the balance graph partitioning problem. Their algorithm is massively parallel: there is no central coordination, and each node is processed independently. Unfortunately, we found such algorithm to be not directly exploitable in current BSP-like distributed programming frameworks. In this paper we present the adaptations we applied to the original algorithm while implementing it on Spark, a state-of-the-art distributed framework for data processing." ] }
1602.06819
2278624983
In this paper we propose an online approximate k-nn graph building algorithm, which is able to quickly update a k-nn graph using a flow of data points. One very important step of the algorithm consists in using the current distributed graph to search for the neighbors of a new node. Hence we also propose a distributed partitioning method based on balanced k-medoids clustering, which we use to optimize the distributed search process. Finally, we present the improved sequential search procedure that is used inside each partition. We also perform an experimental evaluation of the different algorithms, where we study the influence of the parameters and compare the results of our algorithms to the existing state of the art. This experimental evaluation confirms that the fast online k-nn graph building algorithm produces a graph that is highly similar to the graph produced by an offline exhaustive algorithm, while requiring fewer similarity computations.
However, when it comes to k-means, few balanced versions exist. In @cite_10 , the authors proposed a method with complexity @math , which makes it too expensive for large graphs. In @cite_6 , the authors proposed the Frequency Sensitive Competitive Learning (FSCL) method, where the distance between a point and a centroid is multiplied by the number of points already assigned to this centroid. Bigger clusters are therefore less likely to win additional points. In @cite_12 , the authors used FSCL with an additive bias instead of a multiplicative bias. However, both methods offer no guarantee on the final number of points in each partition, and experimental results have shown that the resulting partitioning is often largely imbalanced.
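The multiplicative bias of FSCL can be sketched as follows. This is a single-pass assignment sketch only; the published methods also update the centroids, which is omitted here, and the function name is hypothetical:

```python
import math

def fscl_assign(points, centroids):
    """Assign each point to the centroid minimizing count * distance, so
    that large clusters are penalised for winning further points."""
    counts = [1] * len(centroids)  # start at 1 to avoid zero multipliers
    assignment = []
    for p in points:
        biased = [counts[c] * math.dist(p, centroids[c])
                  for c in range(len(centroids))]
        winner = min(range(len(centroids)), key=lambda c: biased[c])
        counts[winner] += 1
        assignment.append(winner)
    return assignment
```

In the example below, a plain nearest-centroid rule would send all five points to the first centroid; the bias diverts one of them, but the result is still imbalanced, which illustrates the lack of guarantee noted above.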
{ "cite_N": [ "@cite_10", "@cite_12", "@cite_6" ], "mid": [ "", "97540112", "2149521737" ], "abstract": [ "", "We present a k-means-based clustering algorithm, which optimizes mean square error, for given cluster sizes. A straightforward application is balanced clustering, where the sizes of each cluster are equal. In k-means assignment phase, the algorithm solves the assignment problem by Hungarian algorithm. This is a novel approach, and makes the assignment phase time complexity On 3, which is faster than the previous Ok 3.5 n 3.5 time linear programming used in constrained k-means. This enables clustering of bigger datasets of size over 5000 points.", "Competitive learning mechanisms for clustering, in general, suffer from poor performance for very high-dimensional (>1000) data because of \"curse of dimensionality\" effects. In applications such as document clustering, it is customary to normalize the high-dimensional input vectors to unit length, and it is sometimes also desirable to obtain balanced clusters, i.e., clusters of comparable sizes. The spherical kmeans (spkmeans) algorithm, which normalizes the cluster centers as well as the inputs, has been successfully used to cluster normalized text documents in 2000+ dimensional space. Unfortunately, like regular kmeans and its soft expectation-maximization-based version, spkmeans tends to generate extremely imbalanced clusters in high-dimensional spaces when the desired number of clusters is large (tens or more). This paper first shows that the spkmeans algorithm can be derived from a certain maximum likelihood formulation using a mixture of von Mises-Fisher distributions as the generative model, and in fact, it can be considered as a batch-mode version of (normalized) competitive learning. 
The proposed generative model is then adapted in a principled way to yield three frequency-sensitive competitive learning variants that are applicable to static data and produced high-quality and well-balanced clusters for high-dimensional data. Like kmeans, each iteration is linear in the number of data points and in the number of clusters for all the three algorithms. A frequency-sensitive algorithm to cluster streaming data is also proposed. Experimental results on clustering of high-dimensional text data sets are provided to show the effectiveness and applicability of the proposed techniques." ] }
1602.07107
2284172031
We propose a streaming algorithm for the binary classification of data based on crowdsourcing. The algorithm learns the competence of each labeller by comparing her labels to those of other labellers on the same tasks and uses this information to minimize the prediction error rate on each task. We provide performance guarantees of our algorithm for a fixed population of independent labellers. In particular, we show that our algorithm is optimal in the sense that the cumulative regret compared to the optimal decision with known labeller error probabilities is finite, independently of the number of tasks to label. The complexity of the algorithm is linear in the number of labellers and the number of tasks, up to some logarithmic factors. Numerical experiments illustrate the performance of our algorithm compared to existing algorithms, including simple majority voting and expectation-maximization algorithms, on both synthetic and real datasets.
A number of Bayesian techniques have also been proposed and applied to this problem; see @cite_9 @cite_17 @cite_5 @cite_18 @cite_3 @cite_12 and references therein. Of particular interest is the belief-propagation (BP) algorithm of Karger, Oh and Shah @cite_5 , which is provably order-optimal in terms of the number of labellers required per task for any given target error rate, in the limit of an infinite number of tasks and an infinite population of labellers. Another family of algorithms is based on the spectral analysis of some matrix representing the correlations between tasks or labellers. Ghosh, Kale and McAfee @cite_22 work on the task-task matrix, whose entries correspond to the number of labellers having labeled two tasks in the same manner, while Dalvi et al. @cite_4 work on the labeller-labeller matrix, whose entries correspond to the number of tasks labeled in the same manner by two labellers. Both obtain performance guarantees through a perturbation analysis of the top eigenvector of the corresponding expected matrix. The BP algorithm of Karger, Oh and Shah is in fact closely related to these spectral algorithms: their message-passing scheme is very similar to the power-iteration method applied to the task-labeller matrix, as observed in @cite_5 .
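The connection between message passing and power iteration can be illustrated with a minimal alternating scheme on the task-labeller matrix. This is a simplified sketch, not the BP algorithm of Karger, Oh and Shah itself; labels are assumed to be in {-1, +1} and the function name is hypothetical:

```python
def iterative_aggregate(answers, n_tasks, n_workers, n_iters=20):
    """Alternate between task estimates (reliability-weighted votes) and
    worker reliabilities (agreement with the current task estimates).
    `answers` maps (task, worker) -> label in {-1, +1}."""
    reliability = [1.0] * n_workers
    estimates = [0.0] * n_tasks
    for _ in range(n_iters):
        # task update: reliability-weighted vote of the workers who answered
        for t in range(n_tasks):
            estimates[t] = sum(reliability[w] * label
                               for (task, w), label in answers.items()
                               if task == t)
        # worker update: agreement of each worker with the current estimates
        for w in range(n_workers):
            reliability[w] = sum(estimates[task] * label
                                 for (task, worker), label in answers.items()
                                 if worker == w)
        # normalise so reliabilities stay bounded across iterations
        norm = max(abs(r) for r in reliability) or 1.0
        reliability = [r / norm for r in reliability]
    return [1 if e >= 0 else -1 for e in estimates]
```

Each double update is one step of power iteration on the task-labeller answer matrix, which is the similarity observed in @cite_5 ; an adversarial worker ends up with negative reliability, so their votes are flipped rather than discarded.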
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_22", "@cite_9", "@cite_3", "@cite_5", "@cite_12", "@cite_17" ], "mid": [ "2129345386", "108763474", "1988560072", "2134305421", "2163522723", "2140890285", "", "2164124780" ], "abstract": [ "Crowdsourcing has become a popular paradigm for labeling large datasets. However, it has given rise to the computational task of aggregating the crowdsourced labels provided by a collection of unreliable annotators. We approach this problem by transforming it into a standard inference problem in graphical models, and applying approximate variational methods, including belief propagation (BP) and mean field (MF). We show that our BP algorithm generalizes both majority voting and a recent algorithm by [1], while our MF method is closely related to a commonly used EM algorithm. In both cases, we find that the performance of the algorithms critically depends on the choice of a prior distribution on the workers' reliability; by choosing the prior properly, both BP and MF (and EM) perform surprisingly well on both simulated and real-world datasets, competitive with state-of-the-art algorithms based on more complicated modeling assumptions.", "In this paper we analyze a crowdsourcing system consisting of a set of users and a set of binary choice questions. Each user has an unknown, fixed, reliability that determines the user's error rate in answering questions. The problem is to determine the truth values of the questions solely based on the user answers. Although this problem has been studied extensively, theoretical error bounds have been shown only for restricted settings: when the graph between users and questions is either random or complete. In this paper we consider a general setting of the problem where the user--question graph can be arbitrary. We obtain bounds on the error rate of our algorithm and show it is governed by the expansion of the graph. 
We demonstrate, using several synthetic and real datasets, that our algorithm outperforms the state of the art.", "A large fraction of user-generated content on the Web, such as posts or comments on popular online forums, consists of abuse or spam. Due to the volume of contributions on popular sites, a few trusted moderators cannot identify all such abusive content, so viewer ratings of contributions must be used for moderation. But not all viewers who rate content are trustworthy and accurate. What is a principled approach to assigning trust and aggregating user ratings, in order to accurately identify abusive content? In this paper, we introduce a framework to address the problem of moderating online content using crowdsourced ratings. Our framework encompasses users who are untrustworthy or inaccurate to an unknown extent --- that is, both the content and the raters are of unknown quality. With no knowledge whatsoever about the raters, it is impossible to do better than a random estimate. We present efficient algorithms to accurately detect abuse that only require knowledge about the identity of a single 'good' agent, who rates contributions accurately more than half the time. We prove that our algorithm can infer the quality of contributions with error that rapidly converges to zero as the number of observations increases; we also numerically demonstrate that the algorithm has very high accuracy for much fewer observations. Finally, we analyze the robustness of our algorithms to manipulation by adversarial or strategic raters, an important issue in moderating online content, and quantify how the performance of the algorithm degrades with the number of manipulating agents.", "For many supervised learning tasks it may be infeasible (or very expensive) to obtain objective and reliable labels. Instead, we can collect subjective (possibly noisy) labels from multiple experts or annotators. 
In practice, there is a substantial amount of disagreement among the annotators, and hence it is of great practical interest to address conventional supervised learning problems in this scenario. In this paper we describe a probabilistic approach for supervised learning when we have multiple annotators providing (possibly noisy) labels but no absolute gold standard. The proposed algorithm evaluates the different experts and also gives an estimate of the actual hidden labels. Experimental results indicate that the proposed method is superior to the commonly used majority voting baseline.", "Crowdsourcing systems, in which numerous tasks are electronically distributed to numerous “information pieceworkers,” have emerged as an effective paradigm for human-powered solving of large-scale problems in domains such as image classification, data entry, optical character recognition, recommendation, and proofreading. Because these low-paid workers can be unreliable, nearly all such systems must devise schemes to increase confidence in their answers, typically by assigning each task multiple times and combining the answers in an appropriate manner, e.g., majority voting. In this paper, we consider a general model of such crowdsourcing tasks and pose the problem of minimizing the total price i.e., number of task assignments that must be paid to achieve a target overall reliability. We give a new algorithm for deciding which tasks to assign to which workers and for inferring correct answers from the workers' answers. We show that our algorithm, inspired by belief propagation and low-rank matrix approximation, significantly outperforms majority voting and, in fact, is optimal through comparison to an oracle that knows the reliability of every worker. Further, we compare our approach with a more general class of algorithms that can dynamically assign tasks. 
By adaptively deciding which questions to ask to the next set of arriving workers, one might hope to reduce uncertainty more efficiently. We show that, perhaps surprisingly, the minimum price necessary to achieve a target reliability scales in the same manner under both adaptive and nonadaptive scenarios. Hence, our nonadaptive approach is order optimal under both scenarios. This strongly relies on the fact that workers are fleeting and cannot be exploited. Therefore, architecturally, our results suggest that building a reliable worker-reputation system is essential to fully harnessing the potential of adaptive designs.", "Crowdsourcing systems, in which tasks are electronically distributed to numerous \"information piece-workers\", have emerged as an effective paradigm for human-powered solving of large scale problems in domains such as image classification, data entry, optical character recognition, recommendation, and proofreading. Because these low-paid workers can be unreliable, nearly all crowdsourcers must devise schemes to increase confidence in their answers, typically by assigning each task multiple times and combining the answers in some way such as majority voting. In this paper, we consider a general model of such crowdsourcing tasks, and pose the problem of minimizing the total price (i.e., number of task assignments) that must be paid to achieve a target overall reliability. We give a new algorithm for deciding which tasks to assign to which workers and for inferring correct answers from the workers' answers. We show that our algorithm significantly outperforms majority voting and, in fact, is asymptotically optimal through comparison to an oracle that knows the reliability of every worker.", "", "Labeling large datasets has become faster, cheaper, and easier with the advent of crowdsourcing services like Amazon Mechanical Turk. How can one trust the labels obtained from such services? 
We propose a model of the labeling process which includes label uncertainty, as well a multi-dimensional measure of the annotators' ability. From the model we derive an online algorithm that estimates the most likely value of the labels and the annotator abilities. It finds and prioritizes experts when requesting labels, and actively excludes unreliable annotators. Based on labels already obtained, it dynamically chooses which images will be labeled next, and how many labels to request in order to achieve a desired level of confidence. Our algorithm is general and can handle binary, multi-valued, and continuous annotations (e.g. bounding boxes). Experiments on a dataset containing more than 50,000 labels show that our algorithm reduces the number of labels required, and thus the total cost of labeling, by a large factor while keeping error rates low on a variety of datasets." ] }
1602.07107
2284172031
We propose a streaming algorithm for the binary classification of data based on crowdsourcing. The algorithm learns the competence of each labeller by comparing her labels to those of other labellers on the same tasks and uses this information to minimize the prediction error rate on each task. We provide performance guarantees of our algorithm for a fixed population of independent labellers. In particular, we show that our algorithm is optimal in the sense that the cumulative regret compared to the optimal decision with known labeller error probabilities is finite, independently of the number of tasks to label. The complexity of the algorithm is linear in the number of labellers and the number of tasks, up to some logarithmic factors. Numerical experiments illustrate the performance of our algorithm compared to existing algorithms, including simple majority voting and expectation-maximization algorithms, on both synthetic and real datasets.
A recent paper proposes an algorithm based on the notion of minimax conditional entropy @cite_19 , relying on a probabilistic model jointly parameterized by labeller ability and task difficulty. The algorithm is evaluated through numerical experiments on real datasets only; no theoretical results are provided on either the performance or the complexity of the algorithm.
{ "cite_N": [ "@cite_19" ], "mid": [ "1814633089" ], "abstract": [ "There is a rapidly increasing interest in crowdsourcing for data labeling. By crowdsourcing, a large number of labels can be often quickly gathered at low cost. However, the labels provided by the crowdsourcing workers are usually not of high quality. In this paper, we propose a minimax conditional entropy principle to infer ground truth from noisy crowdsourced labels. Under this principle, we derive a unique probabilistic labeling model jointly parameterized by worker ability and item difficulty. We also propose an objective measurement principle, and show that our method is the only method which satisfies this objective measurement principle. We validate our method through a variety of real crowdsourcing datasets with binary, multiclass or ordinal labels." ] }
1602.07107
2284172031
We propose a streaming algorithm for the binary classification of data based on crowdsourcing. The algorithm learns the competence of each labeller by comparing her labels to those of other labellers on the same tasks and uses this information to minimize the prediction error rate on each task. We provide performance guarantees of our algorithm for a fixed population of independent labellers. In particular, we show that our algorithm is optimal in the sense that the cumulative regret compared to the optimal decision with known labeller error probabilities is finite, independently of the number of tasks to label. The complexity of the algorithm is linear in the number of labellers and the number of tasks, up to some logarithmic factors. Numerical experiments illustrate the performance of our algorithm compared to existing algorithms, including simple majority voting and expectation-maximization algorithms, on both synthetic and real datasets.
Some authors consider slightly different versions of our problem. Ho et al. @cite_10 @cite_7 assume that the ground truth is known for some tasks and use the corresponding data to learn the competence of the labellers in the exploration phase and to assign tasks optimally in the exploitation phase. Liu and Liu @cite_8 also seek the optimal task assignment, but without knowledge of any true label: an iterative algorithm similar to the EM algorithm is used to infer the competence of each labeller, yielding a cumulative regret in @math for @math tasks compared to the optimal decision. Finally, some authors seek to rank the labellers with respect to their error rates, information that is useful for task assignment but not easy to exploit for data classification itself @cite_14 @cite_2 .
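For reference, the "optimal decision with known labeller error probabilities" against which these regret bounds are stated is, for independent labellers, a weighted majority vote with log-odds weights (a standard maximum-likelihood argument; the function below is our own minimal sketch, not code from the cited papers):

```python
import math

def optimal_decision(labels, error_rates):
    """Maximum-likelihood label (+1/-1) for one task, given each labeller's
    answer in {-1, +1} and known error probabilities, assuming independence."""
    score = sum(y * math.log((1 - p) / p) for y, p in zip(labels, error_rates))
    return 1 if score >= 0 else -1

# Two reliable labellers outvote three unreliable ones thanks to larger weights.
print(optimal_decision([+1, +1, -1, -1, -1], [0.05, 0.1, 0.4, 0.4, 0.45]))  # → 1
```

Note that a plain majority vote on this example would output -1; the log-odds weights are what the streaming algorithm effectively learns to approximate as its competence estimates converge.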
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_8", "@cite_2", "@cite_10" ], "mid": [ "2164545125", "", "1989135160", "1991137221", "2398690976" ], "abstract": [ "Inferring rankings over elements of a set of objects, such as documents or images, is a key learning problem for such important applications as Web search and recommender systems. Crowdsourcing services provide an inexpensive and efficient means to acquire preferences over objects via labeling by sets of annotators. We propose a new model to predict a gold-standard ranking that hinges on combining pairwise comparisons via crowdsourcing. In contrast to traditional ranking aggregation methods, the approach learns about and folds into consideration the quality of contributions of each annotator. In addition, we minimize the cost of assessment by introducing a generalization of the traditional active learning scenario to jointly select the annotator and pair to assess while taking into account the annotator quality, the uncertainty over ordering of the pair, and the current model uncertainty. We formalize this as an active learning strategy that incorporates an exploration-exploitation tradeoff and implement it using an efficient online Bayesian updating scheme. Using simulated and real-world data, we demonstrate that the active learning strategy achieves significant reductions in labeling cost while maintaining accuracy.", "", "We consider a crowd-sourcing problem where in the process of labeling massive datasets, multiple labelers with unknown annotation quality must be selected to perform the labeling task for each incoming data sample or task, with the results aggregated using for example simple or weighted majority voting rule. In this paper we approach this labeler selection problem in an online learning framework, whereby the quality of the labeling outcome by a specific set of labelers is estimated so that the learning algorithm over time learns to use the most effective combinations of labelers. 
This type of online learning in some sense falls under the family of multi-armed bandit (MAB) problems, but with a distinct feature not commonly seen: since the data is unlabeled to begin with and the labelers' quality is unknown, their labeling outcome (or reward in the MAB context) cannot be directly verified; it can only be estimated against the crowd and known probabilistically. We design an efficient online algorithm LS_OL using a simple majority voting rule that can differentiate high- and low-quality labelers over time, and is shown to have a regret (w.r.t. always using the optimal set of labelers) of O(log2 T) uniformly in time under mild assumptions on the collective quality of the crowd, thus regret free in the average sense. We discuss performance improvement by using a more sophisticated majority voting rule, and show how to detect and filter out \"bad\" (dishonest, malicious or very incompetent) labelers to further enhance the quality of crowd-sourcing. Extension to the case when a labeler's quality is task-type dependent is also discussed using techniques from the literature on continuous arms. We present numerical results using both simulation and a real dataset on a set of images labeled by Amazon Mechanic Turks (AMT).", "In a broad range of classification and decision-making problems, one is given the advice or predictions of several classifiers, of unknown reliability, over multiple questions or queries. This scenario is different from the standard supervised setting, where each classifier’s accuracy can be assessed using available labeled data, and raises two questions: Given only the predictions of several classifiers over a large set of unlabeled test data, is it possible to (i) reliably rank them and (ii) construct a metaclassifier more accurate than most classifiers in the ensemble? Here we present a spectral approach to address these questions. 
First, assuming conditional independence between classifiers, we show that the off-diagonal entries of their covariance matrix correspond to a rank-one matrix. Moreover, the classifiers can be ranked using the leading eigenvector of this covariance matrix, because its entries are proportional to their balanced accuracies. Second, via a linear approximation to the maximum likelihood estimator, we derive the Spectral Meta-Learner (SML), an unsupervised ensemble classifier whose weights are equal to these eigenvector entries. On both simulated and real data, SML typically achieves a higher accuracy than most classifiers in the ensemble and can provide a better starting point than majority voting for estimating the maximum likelihood solution. Furthermore, SML is robust to the presence of small malicious groups of classifiers designed to veer the ensemble prediction away from the (unknown) ground truth.", "We explore the problem of assigning heterogeneous tasks to workers with different, unknown skill sets in crowdsourcing markets such as Amazon Mechanical Turk. We first formalize the online task assignment problem, in which a requester has a fixed set of tasks and a budget that specifies how many times he would like each task completed. Workers arrive one at a time (with the same worker potentially arriving multiple times), and must be assigned to a task upon arrival. The goal is to allocate workers to tasks in a way that maximizes the total benefit that the requester obtains from the completed work. Inspired by recent research on the online adwords problem, we present a two-phase exploration-exploitation assignment algorithm and prove that it is competitive with respect to the optimal offline algorithm which has access to the unknown skill levels of each worker. We empirically evaluate this algorithm using data collected on Mechanical Turk and show that it performs better than random assignment or greedy algorithms. 
To our knowledge, this is the first work to extend the online primal-dual technique used in the online adwords problem to a scenario with unknown parameters, and the first to offer an empirical validation of an online primal-dual algorithm." ] }
1602.07025
2282375135
We give a sufficient criterion for generic local functional equations for submodule zeta functions associated to nilpotent algebras of endomorphisms defined over number fields. This allows us, in particular, to prove various conjectures on such functional equations for ideal zeta functions of nilpotent Lie lattices. Via the Mal'cev correspondence, these results have corollaries pertaining to zeta functions enumerating normal subgroups of finite index in finitely generated nilpotent groups, most notably finitely generated free nilpotent groups of any given class.
Our proof of Theorem , presented in , proceeds by adapting the @math -adic machinery developed in @cite_19 . There, this technique is applied to establish generic local functional equations for a range of zeta functions of groups and rings. The most general of these applications is to subring zeta functions of arbitrary rings of finite additive rank, i.e. finitely generated abelian groups with some bi-additive multiplicative structure; see [Theorem A] Voll 10 . Via the Mal'cev correspondence, this translates into results for the generic Euler factors of the subgroup zeta functions of finitely generated nilpotent groups, i.e. Dirichlet generating series enumerating all finite index subgroups of such a group; see [Corollary 1.1] Voll 10 . In [Theorem C] Voll 10 we prove functional equations for generic local ideal zeta functions of nilpotent Lie rings of class @math (or, equivalently, generic local normal zeta functions of finitely generated nilpotent groups of class @math ). Theorem generalizes this result.
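For orientation, the generic local functional equations in question have, in the subring case of [Theorem A] Voll 10, roughly the following shape (quoted schematically from memory; the exact sign and exponents should be checked against the cited source):

```latex
\left. \zeta_{L \otimes \mathbb{Z}_p}(s) \right|_{p \to p^{-1}}
  \;=\; (-1)^{n}\, p^{\binom{n}{2} - ns}\, \zeta_{L \otimes \mathbb{Z}_p}(s),
```

valid for almost all primes p, where n denotes the additive rank of the ring L.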
{ "cite_N": [ "@cite_19" ], "mid": [ "2023829897" ], "abstract": [ "We introduce a new method to compute explicit formulae for various zeta functions associated to groups and rings. The specific form of these formulae enables us to deduce local functional equations. More precisely, we prove local functional equations for the subring zeta functions associated to rings, the subgroup, conjugacy and representation zeta functions of finitely generated, torsion-free nilpotent (or T -)groups, and the normal zeta functions of T -groups of class 2. In particular we solve the two problems posed in [9, Section 5]. We deduce our theorems from a ‘blueprint result’ on certain p-adic integrals which generalises work of Denef and others on Igusa’s local zeta function. The Malcev correspondence and a Kirillov-type theory developed by Howe are used to ‘linearise’ the problems of counting subgroups and representations in T -groups, respectively." ] }
1602.07025
2282375135
We give a sufficient criterion for generic local functional equations for submodule zeta functions associated to nilpotent algebras of endomorphisms defined over number fields. This allows us, in particular, to prove various conjectures on such functional equations for ideal zeta functions of nilpotent Lie lattices. Via the Mal'cev correspondence, these results have corollaries pertaining to zeta functions enumerating normal subgroups of finite index in finitely generated nilpotent groups, most notably finitely generated free nilpotent groups of any given class.
In @cite_14 , Woodward computed the ideal zeta functions of the full upper triangular @math -matrices over @math , as well as a number of combinatorially defined quotients of these algebras. He gives sufficient criteria for local functional equations, as well as some examples suggesting that these criteria might be necessary.
{ "cite_N": [ "@cite_14" ], "mid": [ "2068681643" ], "abstract": [ "We prove a two-part theorem on local ideal zeta functions of Lie rings of upper-triangular matrices. First, we prove that these local zeta functions display a strong uniformity. Secondly, we prove that these zeta functions satisfy a local functional equation. Some explicit examples of these zeta functions are also presented. Finally, we consider certain quotients of these Lie rings, showing that the strong uniformity continues to hold, and that under certain circumstances the functional equation does too." ] }
1602.07025
2282375135
We give a sufficient criterion for generic local functional equations for submodule zeta functions associated to nilpotent algebras of endomorphisms defined over number fields. This allows us, in particular, to prove various conjectures on such functional equations for ideal zeta functions of nilpotent Lie lattices. Via the Mal'cev correspondence, these results have corollaries pertaining to zeta functions enumerating normal subgroups of finite index in finitely generated nilpotent groups, most notably finitely generated free nilpotent groups of any given class.
Local functional equations akin to are also ubiquitous in the theory of representation zeta functions associated to arithmetic (and related pro- @math ) groups; see, for instance, @cite_10 @cite_17 .
{ "cite_N": [ "@cite_10", "@cite_17" ], "mid": [ "2962814805", "2093477771" ], "abstract": [ "We introduce new methods from p-adic integration into the study of representation zeta functions associated to compact p-adic analytic groups and arithmetic groups. They allow us to establish that the representation zeta functions of generic members of families of p-adic analytic pro-p groups obtained from a global, perfect' Lie lattice satisfy functional equations. In the case of semisimple' compact p-adic analytic groups, we exhibit a link between the relevant p-adic integrals and a natural filtration of the locus of irregular elements in the associated semisimple Lie algebra, defined by centraliser dimension. Based on this algebro-geometric description, we compute explicit formulae for the representation zeta functions of principal congruence subgroups of the groups SL_3(O), where O is a compact discrete valuation ring of characteristic 0, and of the corresponding unitary groups. These formulae, combined with approximative Clifford theory, allow us to determine the abscissae of convergence of representation zeta functions associated to arithmetic subgroups of algebraic groups of type A_2. Assuming a conjecture of Serre on the Congruence Subgroup Problem, we thereby prove a conjecture of Larsen and Lubotzky on lattices in higher-rank semisimple groups for algebraic groups of type A_2 defined over number fields.", "We study representation zeta functions of finitely generated, torsion-free nilpotent groups which are groups of rational points of unipotent group schemes over rings of integers of number fields. Using the Kirillov orbit method and @math -adic integration, we prove rationality and functional equations for almost all local factors of the Euler products of these zeta functions. 
We further give explicit formulae, in terms of Dedekind zeta functions, for the zeta functions of class- @math -nilpotent groups obtained from three infinite families of group schemes, generalizing the integral Heisenberg group. As an immediate corollary, we obtain precise asymptotics for the representation growth of these groups, and key analytic properties of their zeta functions, such as meromorphic continuation. We express the local factors of these zeta functions in terms of generating functions for finite Weyl groups of type @math . This allows us to establish a formula for the joint distribution of three functions, or "statistics", on such Weyl groups. Finally, we compare our explicit formulae to @math -adic integrals associated to relative invariants of three infinite families of prehomogeneous vector spaces." ] }
1602.07113
2286327602
The Kucera-Gacs theorem is a landmark result in algorithmic randomness asserting that every real is computable from a Martin-Lof random real. If the computation of the first @math bits of a sequence requires @math bits of the random oracle, then @math is the redundancy of the computation. Kucera implicitly achieved redundancy @math while Gacs used a more elaborate coding procedure which achieves redundancy @math . A similar upper bound is implicit in the later proof by Merkle and Mihailovic. In this paper we obtain strict optimal lower bounds on the redundancy in computations from Martin-Lof random oracles. We show that any nondecreasing computable function @math such that @math is not a general upper bound on the redundancy in computations from Martin-Lof random oracles. In fact, there exists a real @math such that the redundancy @math of any computation of @math from a Martin-Lof random oracle satisfies @math . Moreover, the class of such reals is comeager and includes a @math real as well as all weakly 2-generic reals. This excludes many slow growing functions such as @math from bounding the redundancy in computations from random oracles for a large class of reals. On the other hand it was recently shown that if @math then @math is a general upper bound for the redundancy in computations of any real from some Martin-Lof random oracle. Our results are obtained as an application of a theory of effective betting strategies with restricted wagers which we develop.
Asymptotic conditions on the redundancy @math in computations from random oracles, such as the ones in Theorem , have been used with respect to Chaitin's @math in Tadaki @cite_1 and Barmpalias, Fang and Lewis-Pye @cite_6 . However, the latter work only refers to computations of computably enumerable sets and reals and does not have essential connections with the present work, except perhaps for an apparent analogy between the statements proved.
{ "cite_N": [ "@cite_1", "@cite_6" ], "mid": [ "117701256", "2964310650" ], "abstract": [ "Chaitin [G. J. Chaitin, J. Assoc. Comput. Mach., vol. 22, pp. 329---340, 1975] introduced his Ω number as a concrete example of random real. The real Ω is defined as the probability that an optimal computer halts, where the optimal computer is a universal decoding algorithm used to define the notion of program-size complexity. Chaitin showed Ω to be random by discovering the property that the first n bits of the base-two expansion of Ω solve the halting problem of the optimal computer for all binary inputs of length at most n. In the present paper we investigate this property from various aspects. It is known that the base-two expansion of Ω and the halting problem are Turing equivalent. We consider elaborations of both the Turing reductions which constitute the Turing equivalence. These elaborations can be seen as a variant of the weak truth-table reduction, where a computable bound on the use function is explicitly specified. We thus consider the relative computational power between the base-two expansion of Ω and the halting problem by imposing the restriction to finite size on both the problems.", "We characterise the asymptotic upper bounds on the use of Chaitin's Ω in oracle computations of halting probabilities (i.e. c.e. reals). We show that the following two conditions are equivalent for any computable function h such that h(n) - n is non-decreasing: (1) h(n) - n is an information content measure, i.e. the series ∑_n 2^(n - h(n)) converges, (2) for every c.e. real α there exists a Turing functional via which Ω computes α with use bounded by h. We also give a similar characterisation with respect to computations of c.e. sets from Ω, by showing that the following are equivalent for any computable non-decreasing function g: (1) g is an information-content measure, (2) for every c.e. set A, Ω computes A with use bounded by g.
Further results and some connections with Solovay functions (studied by a number of authors [38,3,26,11]) are given. Optimal oracle-use of Chaitin's Omega number. Qualification and optimization of the completeness of Chaitin's Omega. Tight bounds on oracle use in the computably enumerable sets and reals." ] }
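As a quick numerical illustration of the convergence condition on information content measures quoted in this record (the series ∑_n 2^(n-h(n)) must converge), the sketch below contrasts a bound that satisfies the condition with one that does not; the helper name is our own.

```python
import math

def information_content_partial_sum(h, start, end):
    """Partial sum of sum_n 2^(n - h(n)) over n in [start, end)."""
    return sum(2.0 ** (n - h(n)) for n in range(start, end))

# h(n) = n + 2*log2(n): terms are 1/n^2, so the series converges.
conv = information_content_partial_sum(lambda n: n + 2 * math.log2(n), 1, 100000)

# h(n) = n + log2(n): terms are 1/n, the divergent harmonic series.
div = information_content_partial_sum(lambda n: n + math.log2(n), 1, 100000)
```

With h(n) = n + 2 log₂ n the partial sums stay bounded (near π²/6), whereas h(n) = n + log₂ n grows without bound; this is the mechanism behind excluding slow-growing redundancy bounds in the paper's abstract.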
1602.06977
2281258985
From smart homes that prepare coffee when we wake, to phones that know not to interrupt us during important conversations, our collective visions of HCI imagine a future in which computers understand a broad range of human behaviors. Today our systems fall short of these visions, however, because this range of behaviors is too large for designers or programmers to capture manually. In this paper, we instead demonstrate it is possible to mine a broad knowledge base of human behavior by analyzing more than one billion words of modern fiction. Our resulting knowledge base, Augur, trains vector models that can predict many thousands of user activities from surrounding objects in modern contexts: for example, whether a user may be eating food, meeting with a friend, or taking a selfie. Augur uses these predictions to identify actions that people commonly take on objects in the world and estimate a user's future activities given their current situation. We demonstrate Augur-powered, activity-based systems such as a phone that silences itself when the odds of you answering it are low, and a dynamic music player that adjusts to your present activity. A field deployment of an Augur-powered wearable camera resulted in 96% recall and 71% precision on its unsupervised predictions of common daily activities. A second evaluation where human judges rated the system's predictions over a broad set of input images found that 94% were rated sensible.
Our work is inspired by techniques for mining user behavior from data. For example, query-feature graphs show how to encode the relationships between high-level descriptions of user goals and underlying features of a system @cite_5 , even when these high-level descriptions are different from an application's domain language @cite_18 . Researchers have applied these techniques to applications such as AutoCAD @cite_4 and Photoshop @cite_18 , where the user's description of a domain and that domain's underlying mechanics are often disjoint. With Augur, we introduce techniques that mine real-world human activities that typically occur outside of software.
{ "cite_N": [ "@cite_5", "@cite_18", "@cite_4" ], "mid": [ "2167213883", "2161968588", "2116436752" ], "abstract": [ "This paper introduces query-feature graphs, or QF-graphs. QF-graphs encode associations between high-level descriptions of user goals (articulated as natural language search queries) and the specific features of an interactive system relevant to achieving those goals. For example, a QF-graph for the GIMP graphics manipulation software links the query \"GIMP black and white\" to the commands \"desaturate\" and \"grayscale.\" We demonstrate how QF-graphs can be constructed using search query logs, search engine results, web page content, and localization data from interactive systems. An analysis of QF-graphs shows that the associations produced by our approach exhibit levels of accuracy that make them eminently usable in a range of real-world applications. Finally, we present three hypothetical user interface mechanisms that illustrate the potential of QF-graphs: search-driven interaction, dynamic tooltips, and app-to-app analogy search.", "Users often describe what they want to accomplish with an application in a language that is very different from the application's domain language. To address this gap between system and human language, we propose modeling an application's domain language by mining a large corpus of Web documents about the application using deep learning techniques. A high dimensional vector space representation can model the relationships between user tasks, system commands, and natural language descriptions and supports mapping operations, such as identifying likely system commands given natural language queries and identifying user tasks given a trace of user operations. We demonstrate the feasibility of this approach with a system, CommandSpace, for the popular photo editing application Adobe Photoshop. We build and evaluate several applications enabled by our model showing the power and flexibility of this approach.", "We explore the use of modern recommender system technology to address the problem of learning software applications. Before describing our new command recommender system, we first define relevant design considerations. We then discuss a 3 month user study we conducted with professional users to evaluate our algorithms which generated customized recommendations for each user. Analysis shows that our item-based collaborative filtering algorithm generates 2.1 times as many good suggestions as existing techniques. In addition we present a prototype user interface to ambiently present command recommendations to users, which has received promising initial user feedback." ] }
1602.06977
2281258985
From smart homes that prepare coffee when we wake, to phones that know not to interrupt us during important conversations, our collective visions of HCI imagine a future in which computers understand a broad range of human behaviors. Today our systems fall short of these visions, however, because this range of behaviors is too large for designers or programmers to capture manually. In this paper, we instead demonstrate it is possible to mine a broad knowledge base of human behavior by analyzing more than one billion words of modern fiction. Our resulting knowledge base, Augur, trains vector models that can predict many thousands of user activities from surrounding objects in modern contexts: for example, whether a user may be eating food, meeting with a friend, or taking a selfie. Augur uses these predictions to identify actions that people commonly take on objects in the world and estimate a user's future activities given their current situation. We demonstrate Augur-powered, activity-based systems such as a phone that silences itself when the odds of you answering it are low, and a dynamic music player that adjusts to your present activity. A field deployment of an Augur-powered wearable camera resulted in 96% recall and 71% precision on its unsupervised predictions of common daily activities. A second evaluation where human judges rated the system's predictions over a broad set of input images found that 94% were rated sensible.
Our research also benefits from prior work in commonsense knowledge representation. Existing databases of linguistic and commonsense knowledge provide networks of facts that computers should know about the world @cite_16 . Augur captures a set of relations that focus more deeply on human behavior and the causal relationships between human activities. We draw on forms of commonsense knowledge, like the WordNet hierarchy of synonym sets @cite_13 , to more precisely extract human activities from fiction. Parts of this vocabulary may be mineable from social media, if they are of the sort that people are likely to advertise on Twitter @cite_20 . We find that fiction offers a broader set of local activities.
{ "cite_N": [ "@cite_16", "@cite_13", "@cite_20" ], "mid": [ "2016089260", "2081580037", "2470543263" ], "abstract": [ "ConceptNet is a freely available commonsense knowledge base and natural-language-processing tool-kit which supports many practical textual-reasoning tasks over real-world documents including topic-gisting, analogy-making, and other context oriented inferences. The knowledge base is a semantic network presently consisting of over 1.6 million assertions of commonsense knowledge encompassing the spatial, physical, social, temporal, and psychological aspects of everyday life. ConceptNet is generated automatically from the 700 000 sentences of the Open Mind Common Sense Project — a World Wide Web based collaboration with over 14 000 authors.", "Because meaningful sentences are composed of meaningful words, any system that hopes to process natural languages as people do must have information about words and their meanings. This information is traditionally provided through dictionaries, and machine-readable dictionaries are now widely available. But dictionary entries evolved for the convenience of human readers, not for machines. WordNet provides a more effective combination of traditional lexicographic information and modern computing. WordNet is an online lexical database designed for use under program control. English nouns, verbs, adjectives, and adverbs are organized into sets of synonyms, each representing a lexicalized concept. Semantic relations link the synonym sets.", "" ] }
1602.06731
2285542994
We consider the problem of detecting norm violations in open multi-agent systems (MAS). We show how, using ideas from scrip systems, we can design mechanisms where the agents comprising the MAS are incentivised to monitor the actions of other agents for norm violations. The cost of providing the incentives is not borne by the MAS and does not come from fines charged for norm violations (fines may be impossible to levy in a system where agents are free to leave and rejoin again under a different identity). Instead, monitoring incentives come from (scrip) fees for accessing the services provided by the MAS. In some cases, perfect monitoring (and hence enforcement) can be achieved: no norms will be violated in equilibrium. In other cases, we show that, while it is impossible to achieve perfect enforcement, we can get arbitrarily close; we can make the probability of a norm violation in equilibrium arbitrarily small. We show using simulations that our theoretical results hold for multi-agent systems with as few as 1000 agents---the system rapidly converges to the steady-state distribution of scrip tokens necessary to ensure monitoring and then remains close to the steady state.
Our analysis of the behaviour and incentives of the token economy draws heavily on prior work on scrip systems by Friedman et al. (2006) and Kash et al. (2012). We adopt many of their techniques, but extend their analysis to a variant model that applies to our setting. Other work has shown that changing the random volunteer procedure can improve welfare @cite_9 and that this approach still works if more than one agent must be hired to perform work @cite_6 . Work from the systems community has looked at practical details such as the efficient implementation of a token bank @cite_15 .
{ "cite_N": [ "@cite_9", "@cite_6", "@cite_15" ], "mid": [ "2133917850", "2045690094", "136591830" ], "abstract": [ "Scrip systems provide a nonmonetary trade economy for exchange of resources. We model a scrip system as a stochastic game and study system design issues on selection rules to match potential trade partners over time. We show the optimality of one particular rule in terms of maximizing social welfare for a given scrip system that guarantees players' incentives to participate. We also investigate the optimal number of scrips to issue under this rule. In particular, if the time discount factor is close enough to one, or trade benefits one partner much more than it costs the other, the maximum social welfare is always achieved no matter how many scrips are in the system. When the benefit of trade and time discount are not sufficiently large, on the other hand, injecting more scrips in the system hurts most participants; as a result, there is an upper bound on the number of scrips allowed in the system, above which some players may default. We show that this upper bound increases with the discount factor as we...", "Scrip is a generic term for any substitute for real currency; it can be converted into goods or services sold by the issuer. In the classic scrip system model, one agent is helped by another in return for one unit of scrip. In this paper, we present an upgraded model, the one-to-n scrip system, where users need to find n agents to accomplish a single task. We provide a detailed analytical evaluation of this system based on a game-theoretic approach. We establish that a nontrivial Nash equilibrium exists in such systems under certain conditions. We study the effect of n on the equilibrium, on the distribution of scrip in the system and on its performance. Among other results, we show that the system designer should increase the average amount of scrip in the system when n increases in order to optimize its efficiency. We also explain how our new one-to-n scrip system can be applied to foster cooperation in two privacy-enhancing applications.", "Peer-to-peer systems are typically designed around the assumption that all peers will willingly contribute resources to a global pool. They thus suffer from freeloaders, that is, participants who consume many more resources than they contribute. In this paper, we propose a general economic framework for avoiding freeloaders in peer-to-peer systems. Our system works by keeping track of the resource consumption and resource contribution of each participant. The overall standing of each participant in the system is represented by a single scalar value, called their karma. A set of nodes, called a bankset, keeps track of each node’s karma, increasing it as resources are contributed, and decreasing it as they are consumed. Our framework is resistant to malicious attempts by the resource provider, consumer, and a fraction of the members of the bank set. We illustrate the application of this framework to a peer-to-peer filesharing" ] }
1602.06731
2285542994
We consider the problem of detecting norm violations in open multi-agent systems (MAS). We show how, using ideas from scrip systems, we can design mechanisms where the agents comprising the MAS are incentivised to monitor the actions of other agents for norm violations. The cost of providing the incentives is not borne by the MAS and does not come from fines charged for norm violations (fines may be impossible to levy in a system where agents are free to leave and rejoin again under a different identity). Instead, monitoring incentives come from (scrip) fees for accessing the services provided by the MAS. In some cases, perfect monitoring (and hence enforcement) can be achieved: no norms will be violated in equilibrium. In other cases, we show that, while it is impossible to achieve perfect enforcement, we can get arbitrarily close; we can make the probability of a norm violation in equilibrium arbitrarily small. We show using simulations that our theoretical results hold for multi-agent systems with as few as 1000 agents---the system rapidly converges to the steady-state distribution of scrip tokens necessary to ensure monitoring and then remains close to the steady state.
Another strand of related work is on game-theoretic models of norm emergence. Axelrod (1986) showed by means of simulations how norms could emerge given simple game rules where players punish each other for violations (and punish players who don't punish violations), and a number of norm enforcement mechanisms with good incentive properties have been analysed @cite_13 @cite_12 . Axelrod's work has been extended by Mahmoud et al. (2012) to general scenarios and to incorporate learning. Da Pinninck et al. (2010) proposed a distributed norm enforcement mechanism that uses ostracism as punishment, and showed both analytically and experimentally that it provides an upper bound on the number of norm violations.
{ "cite_N": [ "@cite_13", "@cite_12" ], "mid": [ "2056700716", "1984093725" ], "abstract": [ "The present paper extends the theory of self-enforcing agreements in a long-term relationship (the Folk Theorem in repeated games) to the situation where agents change their partners over time. Cooperation is sustained because defection against one agent causes sanction by others, and the paper shows how such a \"social norm\" is sustained by self-interested agents under various degrees of observability. Two main results are presented. The first one is an example where a community can sustain cooperation even when each agent knows nothing more than his personal experience. The second shows a Folk Theorem that the community can realize any mutually beneficial outcomes when each agent carries a label such as reputation, membership, or licence, which are revised in a systematic way.", "The paper considers the repeated prisoner's dilemma in a large-population random-matching setting where players are unable to recognize their opponents. Despite the informational restrictions cooperation is still a sequential equilibrium supported by \"contagious\" punishments. The equilibrium does not require excessive patience, and contrary to previous thought, need not be extraordinarily fragile. It is robust to the introduction of small amounts of noise and remains nearly efficient. Extensions are discussed to models with heterogeneous rates of time preference and without public randomizations." ] }
1602.06731
2285542994
We consider the problem of detecting norm violations in open multi-agent systems (MAS). We show how, using ideas from scrip systems, we can design mechanisms where the agents comprising the MAS are incentivised to monitor the actions of other agents for norm violations. The cost of providing the incentives is not borne by the MAS and does not come from fines charged for norm violations (fines may be impossible to levy in a system where agents are free to leave and rejoin again under a different identity). Instead, monitoring incentives come from (scrip) fees for accessing the services provided by the MAS. In some cases, perfect monitoring (and hence enforcement) can be achieved: no norms will be violated in equilibrium. In other cases, we show that, while it is impossible to achieve perfect enforcement, we can get arbitrarily close; we can make the probability of a norm violation in equilibrium arbitrarily small. We show using simulations that our theoretical results hold for multi-agent systems with as few as 1000 agents---the system rapidly converges to the steady-state distribution of scrip tokens necessary to ensure monitoring and then remains close to the steady state.
These approaches are distributed, but the responsibility for monitoring still lies with the normative organisation, and the cost of monitoring is borne by the MAS, either in the cost of running additional system components which monitor and regulate interactions ( @cite_2 @cite_4 ) or by paying some agents to monitor the rest ( @cite_18 ). Fagundes et al. (2014) have explored the tradeoff between the efficiency and cost of norm enforcement in stochastic environments, in order to identify scenarios in which monitoring can be funded by sanctions levied on violating agents while keeping the number of violations within a tolerable level. However, in an open multi-agent system, approaches in which norm enforcement is based on sanctioning ( @cite_14 @cite_0 ) may be susceptible to sybil attacks: sanctioned agents may simply leave the system and rejoin under a different identity. In contrast, in our approach, the cost of monitoring is borne by the agents using the MAS. Moreover, agents cannot benefit by dropping out and rejoining the system, and monitoring is m-resilient against collusion by monitoring agents.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_4", "@cite_0", "@cite_2" ], "mid": [ "137085984", "2125646867", "2165208522", "57397232", "2161520603" ], "abstract": [ "The subject of this paper is the cost of enforcement, to which we take a satisficing approach through the examination of marginal cost-benefit ratios. Social simulation is used to establish that less enforcement can be beneficial overall in economic terms, depending on the costs to system and or stakeholders arising from enforcement. The results are demonstrated by means of a case study of wireless mobile grids (WMGs). In such systems the dominant strategy for economically rational users is to free-ride, i.e. to benefit from the system without contributing to it. We examine the use of enforcement agents that police the system and punish users that take but do not give. The agent-based simulation shows that a certain proportion of enforcement agents increases cooperation in WMG architectures. The novelty of the results lies in our empirical evidence for the diminishing marginal utility of enforcement agents: that is how much defection they can foreclose at what cost. We show that an increase in the number of enforcement agents does not always increase the overall benefits-cost ratio, but that with respect to satisficing, a minimum proportion of enforcement agents can be identified that yields the best results.", "Social order in distributed decentralised systems is claimed to be obtained by using social norms and social control. This paper presents a normative P2P architecture to obtain social order in multi-agent systems. We propose the use of two types of norms that coexist: rules and conventions. Rules describe the global normative constraints on autonomous agents, whilst conventions are local norms. Social control is obtained by providing a non-intrusive control infrastructure that helps the agents build reputation values based on their respect of norms. Some experiments are presented that show how communities are dynamically formed and how bad agents are socially excluded.", "The design and development of open multi-agent systems (MAS) is a key aspect in agent research. We advocate that they can be realised as electronic institutions. In this paper we focus on the execution of electronic institutions by introducing AMELI, an infrastructure that mediates agents' interactions while enforcing institutional rules. An innovative feature of AMELI is that it is of general purpose (it can interpret any institution specification), and therefore it can be regarded as domain-independent. The combination of ISLANDER [5] and AMELI provides full support for the design and development of electronic institutions.", "Due to external requirements we cannot always construct a centralized organization, but have to construct one that is distributed. A distributed organization is a network of organizations which can locally observe and control the environment. In this paper we analyze how norms can be enforced through the joint effort of the individual local organizations. Norm violations are detected by monitoring. Sanctioning compensates the violations of norms. The main problem is to map the required data for monitoring, and the required control capabilities for sanctioning, to the local observe control capabilities of organizations. Our investigation focuses on exploring the solution space of this problem, the properties of proper solutions and practical considerations when developing a solution.", "When agents make decisions, they have to deal with norms regulating the system. In this paper we therefore propose a rule-based qualitative decision and game theory combining ideas from multiagent systems and normative systems. Whereas normative systems are typically modelled as a single authority that imposes obligations and permissions on the agents, our theory is based on a multiagent structure of the normative system. We distinguish between agents whose behavior is governed by norms, so-called defender agents who have the duty to monitor violations of these norms and apply sanctions, and autonomous normative systems that issue norms and watch over the behavior of defender agents. We show that autonomous normative systems can delegate monitoring and sanctioning of violations to defender agents, when bearers of obligations model defender agents, which in turn model autonomous normative systems." ] }
1602.06359
2949989304
Matching two texts is a fundamental problem in many natural language processing tasks. An effective way is to extract meaningful matching patterns from words, phrases, and sentences to produce the matching score. Inspired by the success of convolutional neural network in image recognition, where neurons can capture many complicated patterns based on the extracted elementary visual patterns such as oriented edges and corners, we propose to model text matching as the problem of image recognition. Firstly, a matching matrix whose entries represent the similarities between words is constructed and viewed as an image. Then a convolutional neural network is utilized to capture rich matching patterns in a layer-by-layer way. We show that by resembling the compositional hierarchies of patterns in image recognition, our model can successfully identify salient signals such as n-gram and n-term matchings. Experimental results demonstrate its superiority against the baselines.
Most previous work on text matching tries to find good representations for a single text, and usually uses a simple scoring function to obtain the matching results. Examples include Partial Least Squares @cite_17 , Canonical Correlation Analysis @cite_20 , and deep models such as DSSM @cite_16 , CDSSM @cite_21 @cite_22 , and @cite_23 .
{ "cite_N": [ "@cite_22", "@cite_21", "@cite_23", "@cite_16", "@cite_20", "@cite_17" ], "mid": [ "", "2251008987", "2170738476", "2136189984", "29190566", "2122901787" ], "abstract": [ "", "An “Interestingness Modeler” uses deep neural networks to learn deep semantic models (DSM) of “interestingness.” The DSM, consisting of two branches of deep neural networks or their convolutional versions, identifies and predicts target documents that would interest users reading source documents. The learned model observes, identifies, and detects naturally occurring signals of interestingness in click transitions between source and target documents derived from web browser logs. Interestingness is modeled with deep neural networks that map source-target document pairs to feature vectors in a latent space, trained on document transitions in view of a “context” and optional “focus” of source and target documents. Network parameters are learned to minimize distances between source documents and their corresponding “interesting” targets in that space. The resulting interestingness model has applicable uses, including, but not limited to, contextual entity searches, automatic text highlighting, prefetching documents of likely interest, automated content recommendation, automated advertisement placement, etc.", "Semantic matching is of central importance to many natural language tasks [2,28]. A successful matching algorithm needs to adequately model the internal structures of language objects and the interaction between them. As a step toward this goal, we propose convolutional neural network models for matching two sentences, by adapting the convolutional strategy in vision and speech. The proposed models not only nicely represent the hierarchical structures of sentences with their layer-by-layer composition and pooling, but also capture the rich matching patterns at different levels. Our models are rather generic, requiring no prior knowledge on language, and can hence be applied to matching tasks of different nature and in different languages. The empirical study on a variety of matching tasks demonstrates the efficacy of the proposed model on a variety of matching tasks and its superiority to competitor models.", "Latent semantic models, such as LSA, intend to map a query to its relevant documents at the semantic level where keyword-based matching often fails. In this study we strive to develop a series of new latent semantic models with a deep structure that project queries and documents into a common low-dimensional space where the relevance of a document given a query is readily computed as the distance between them. The proposed deep structured semantic models are discriminatively trained by maximizing the conditional likelihood of the clicked documents given a query using the clickthrough data. To make our models applicable to large-scale Web search applications, we also use a technique called word hashing, which is shown to effectively scale up our semantic models to handle large vocabularies which are common in such tasks. The new models are evaluated on a Web document ranking task using a real-world data set. Results show that our best model significantly outperforms other latent semantic models, which were considered state-of-the-art in the performance prior to the work presented in this paper.", "We use kernel Canonical Correlation Analysis to learn a semantic representation of web images and their associated text. In the application we look at two approaches of retrieving images based only on their content from a text query. The semantic space provides a common representation and enables a comparison between the text and image. We compare the approaches against a standard cross-representation retrieval technique known as the Generalised Vector Space Model.", "We consider learning query and document similarities from a click-through bipartite graph with metadata on the nodes. The metadata contains multiple types of features of queries and documents. We aim to leverage both the click-through bipartite graph and the features to learn query-document, document-document, and query-query similarities. The challenges include how to model and learn the similarity functions based on the graph data. We propose solving the problems in a principled way. Specifically, we use two different linear mappings to project the queries and documents in two different feature spaces into the same latent space, and take the dot product in the latent space as their similarity. Query-query and document-document similarities can also be naturally defined as dot products in the latent space. We formalize the learning of similarity functions as learning of the mappings that maximize the similarities of the observed query-document pairs on the enriched click-through bipartite graph. When queries and documents have multiple types of features, the similarity function is defined as a linear combination of multiple similarity functions, each based on one type of features. We further solve the learning problem by using a new technique called Multi-view Partial Least Squares (M-PLS). The advantages include the global optimum which can be obtained through Singular Value Decomposition (SVD) and the capability of finding high quality similar queries. We conducted large scale experiments on enterprise search data and web search data. The experimental results on relevance ranking and similar query finding demonstrate that the proposed method works significantly better than the baseline methods." ] }
1602.06359
2949989304
Matching two texts is a fundamental problem in many natural language processing tasks. An effective way is to extract meaningful matching patterns from words, phrases, and sentences to produce the matching score. Inspired by the success of convolutional neural network in image recognition, where neurons can capture many complicated patterns based on the extracted elementary visual patterns such as oriented edges and corners, we propose to model text matching as the problem of image recognition. Firstly, a matching matrix whose entries represent the similarities between words is constructed and viewed as an image. Then a convolutional neural network is utilized to capture rich matching patterns in a layer-by-layer way. We show that by resembling the compositional hierarchies of patterns in image recognition, our model can successfully identify salient signals such as n-gram and n-term matchings. Experimental results demonstrate its superiority against the baselines.
Recently, a brand new approach focusing on modeling the interaction between two sentences has been proposed and has gained much attention; examples include @cite_1 , @cite_19 , and @cite_23 . Our model falls into this category, so we discuss in detail how it differs from these methods.
{ "cite_N": [ "@cite_19", "@cite_1", "@cite_23" ], "mid": [ "2103305545", "2128892113", "2170738476" ], "abstract": [ "Paraphrase detection is the task of examining two sentences and determining whether they have the same meaning. In order to obtain high accuracy on this task, thorough syntactic and semantic analysis of the two statements is needed. We introduce a method for paraphrase detection based on recursive autoencoders (RAE). Our unsupervised RAEs are based on a novel unfolding objective and learn feature vectors for phrases in syntactic trees. These features are used to measure the word- and phrase-wise similarity between two sentences. Since sentences may be of arbitrary length, the resulting matrix of similarity measures is of variable size. We introduce a novel dynamic pooling layer which computes a fixed-sized representation from the variable-sized matrices. The pooled representation is then used as input to a classifier. Our method outperforms other state-of-the-art approaches on the challenging MSRP paraphrase corpus.", "Many machine learning problems can be interpreted as learning for matching two types of objects (e.g., images and captions, users and products, queries and documents, etc.). The matching level of two objects is usually measured as the inner product in a certain feature space, while the modeling effort focuses on mapping of objects from the original space to the feature space. This schema, although proven successful on a range of matching tasks, is insufficient for capturing the rich structure in the matching process of more complicated objects. In this paper, we propose a new deep architecture to more effectively model the complicated matching relations between two objects from heterogeneous domains. More specifically, we apply this model to matching tasks in natural language, e.g., finding sensible responses for a tweet, or relevant answers to a given question. This new architecture naturally combines the localness and hierarchy intrinsic to the natural language problems, and therefore greatly improves upon the state-of-the-art models.", "Semantic matching is of central importance to many natural language tasks [2,28]. A successful matching algorithm needs to adequately model the internal structures of language objects and the interaction between them. As a step toward this goal, we propose convolutional neural network models for matching two sentences, by adapting the convolutional strategy in vision and speech. The proposed models not only nicely represent the hierarchical structures of sentences with their layer-by-layer composition and pooling, but also capture the rich matching patterns at different levels. Our models are rather generic, requiring no prior knowledge on language, and can hence be applied to matching tasks of different nature and in different languages. The empirical study on a variety of matching tasks demonstrates the efficacy of the proposed model on a variety of matching tasks and its superiority to competitor models." ] }
1602.06359
2949989304
Matching two texts is a fundamental problem in many natural language processing tasks. An effective way is to extract meaningful matching patterns from words, phrases, and sentences to produce the matching score. Inspired by the success of convolutional neural network in image recognition, where neurons can capture many complicated patterns based on the extracted elementary visual patterns such as oriented edges and corners, we propose to model text matching as the problem of image recognition. Firstly, a matching matrix whose entries represent the similarities between words is constructed and viewed as an image. Then a convolutional neural network is utilized to capture rich matching patterns in a layer-by-layer way. We show that by resembling the compositional hierarchies of patterns in image recognition, our model can successfully identify salient signals such as n-gram and n-term matchings. Experimental results demonstrate its superiority against the baselines.
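The matching-matrix construction described above can be sketched in a few lines. This is a toy illustration with random embeddings (the real model learns embeddings and feeds the matrix to a CNN), not the paper's implementation:

```python
import numpy as np

def matching_matrix(emb_a, emb_b):
    """Cosine-similarity 'image' between two sentences: entry (i, j) is the
    similarity between word i of sentence A and word j of sentence B."""
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    return a @ b.T

# Toy example: a 3-word and a 2-word sentence with random 4-d embeddings.
rng = np.random.default_rng(0)
sent_a = rng.normal(size=(3, 4))
sent_b = rng.normal(size=(2, 4))
M = matching_matrix(sent_a, sent_b)  # a (3, 2) matching "image"
```

The resulting matrix plays the role of the input image in the analogy to image recognition: convolutional layers over it can pick up n-gram and n-term matching patterns.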
and are both proposed based on the convolutional sentence model DCNN @cite_7 . Different from which defers the interaction of two texts to the end of the process, lets them meet early by directly interleaving them into a single representation, and makes abstractions on this basis. Therefore, it captures sentence-level interactions directly. However, it is not clear what exactly the interactions are, since a sum operation is used. Our model is also based on a convolutional neural network, but the idea is quite different from that of . It is clear that we start from word-level matching patterns, and compose them into phrase- and sentence-level matching patterns layer by layer.
{ "cite_N": [ "@cite_7" ], "mid": [ "2120615054" ], "abstract": [ "The ability to accurately represent sentences is central to language understanding. We describe a convolutional architecture dubbed the Dynamic Convolutional Neural Network (DCNN) that we adopt for the semantic modelling of sentences. The network uses Dynamic k-Max Pooling, a global pooling operation over linear sequences. The network handles input sentences of varying length and induces a feature graph over the sentence that is capable of explicitly capturing short and long-range relations. The network does not rely on a parse tree and is easily applicable to any language. We test the DCNN in four experiments: small scale binary and multi-class sentiment prediction, six-way question classification and Twitter sentiment prediction by distant supervision. The network achieves excellent performance in the first three tasks and a greater than 25% error reduction in the last task with respect to the strongest baseline." ] }
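The Dynamic k-Max Pooling operation mentioned in the DCNN abstract reduces, at its core, to keeping the k largest activations of a sequence while preserving their original order. A simplified fixed-k sketch (the DCNN chooses k dynamically from layer depth and sentence length):

```python
import numpy as np

def k_max_pooling(seq, k):
    """Keep the k largest activations of a 1-D sequence, preserving their
    original order -- the core of the DCNN's Dynamic k-Max Pooling."""
    seq = np.asarray(seq)
    idx = np.sort(np.argpartition(seq, -k)[-k:])  # top-k indices, in order
    return seq[idx]
```

Because the surviving activations keep their relative positions, later layers can still read off order-sensitive features from variable-length inputs.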
1602.06245
2278918618
We propose a flexible and multi-scale method for organizing, visualizing, and understanding point cloud datasets sampled from or near stratified spaces. The first part of the algorithm produces a cover tree for a dataset using an adaptive threshold that is based on multi-scale local principal component analysis. The resulting cover tree nodes reflect the local geometry of the space and are organized via a scaffolding graph. In the second part of the algorithm, the goals are to uncover the strata that make up the underlying stratified space using a local dimension estimation procedure and topological data analysis, as well as to ultimately visualize the results in a simplified spine graph. We demonstrate our technique on several synthetic examples and then use it to visualize song structure in musical audio data.
MLPCA has been used before ( @cite_15 , @cite_0 , @cite_9 ) to analyze data, but not, to our knowledge, in combination with the cover tree.
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_15" ], "mid": [ "", "1674544868", "2546490537" ], "abstract": [ "", "We introduce a method called multi-scale local shape analysis, or MLSA, for extracting features that describe the local structure of points within a dataset. The method uses both geometric and topological features at multiple levels of granularity to capture diverse types of local information for subsequent machine learning algorithms operating on the dataset. Using synthetic and real dataset examples, we demonstrate significant performance improvement of classification algorithms constructed for these datasets with correspondingly augmented features.", "Creation and selection of relevant features for machine learning applications (including image classification) is typically a process requiring significant involvement of domain knowledge. It is thus desirable to cover at least part of that process with semi-automated techniques capable of discovering and visualizing those geometric characteristics of images that are potentially relevant to the classification objective. In this work, we propose to utilize multi-scale singular value decomposition (MSVD) along with approximate nearest neighbors algorithm: both have been recently realized using the randomized approach, and can be efficiently run on large, high-dimensional datasets (sparse or dense). We apply this technique to create a multi-scale view of every point in a publicly available set of LIDAR data of riparian images, with classification objective being separating ground from vegetation. We perform “centralized MSVD” for every point and its neighborhood generated by an approximate nearest neighbor algorithm. After completion of this procedure, the original set of 3-dimensional data is augmented by 36 dimensions generated by MSVD (in three different scales), which is then processed using a novel discretization pre-processing method and the SVM classification algorithm with RBF kernel. 
The result is two times better than the one previously obtained (in terms of its classification error level). The generic nature of the MSVD mechanism and standard mechanisms used for classification (SVM) suggest a wider utility of the proposed approach for other problems as well." ] }
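Local PCA dimension estimation of the kind these papers build on can be illustrated at a single scale: take the neighbourhood of a point, run PCA via the SVD of the centred neighbourhood, and count the components needed to explain most of the variance. A single-scale sketch (the radius and the 95% threshold are illustrative choices, not values from the papers; multi-scale methods repeat this over many radii):

```python
import numpy as np

def local_dimension(points, center_idx, radius, var_threshold=0.95):
    """Estimate intrinsic dimension near one point: gather the neighbours
    within `radius`, centre them, take singular values, and count how many
    principal components are needed to reach `var_threshold` of variance."""
    pts = np.asarray(points, dtype=float)
    dists = np.linalg.norm(pts - pts[center_idx], axis=1)
    nbhd = pts[dists <= radius]
    nbhd = nbhd - nbhd.mean(axis=0)
    s = np.linalg.svd(nbhd, compute_uv=False)
    var_fraction = s**2 / np.sum(s**2)
    return int(np.searchsorted(np.cumsum(var_fraction), var_threshold) + 1)
```

On a noisy line embedded in 3-D this returns 1; on a 2-D plane in 3-D it returns 2, which is the kind of per-point signal used to separate strata of different dimensions.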
1602.06245
2278918618
We propose a flexible and multi-scale method for organizing, visualizing, and understanding point cloud datasets sampled from or near stratified spaces. The first part of the algorithm produces a cover tree for a dataset using an adaptive threshold that is based on multi-scale local principal component analysis. The resulting cover tree nodes reflect the local geometry of the space and are organized via a scaffolding graph. In the second part of the algorithm, the goals are to uncover the strata that make up the underlying stratified space using a local dimension estimation procedure and topological data analysis, as well as to ultimately visualize the results in a simplified spine graph. We demonstrate our technique on several synthetic examples and then use it to visualize song structure in musical audio data.
Many papers ( @cite_1 , for example) have analyzed stratified spaces as mixtures of general manifolds, and one @cite_4 as a union of flats. Using techniques derived mostly from Gaussian Mixture Models in the former case, and Grassmannian methods in the latter, they prove theoretical guarantees about when a point belongs to a maximal stratum, but they do not say much about the singularities. Other work @cite_20 uses topological data analysis to make theoretically-backed inferences about the singularities themselves; however, the algorithms are too slow to be practical.
{ "cite_N": [ "@cite_20", "@cite_1", "@cite_4" ], "mid": [ "13637651", "2040549971", "1912587717" ], "abstract": [ "The objective of this paper is to show that point cloud data can under certain circumstances be clustered by strata in a plausible way. For our purposes, we consider a stratified space to be a collection of manifolds of different dimensions which are glued together in a locally trivial manner inside some Euclidean space. To adapt this abstract definition to the world of noise, we first define a multi-scale notion of stratified spaces, providing a stratification at different scales which are indexed by a radius parameter. We then use methods derived from kernel and cokernel persistent homology to cluster the data points into different strata. We prove a correctness guarantee for this clustering method under certain topological conditions. We then provide a probabilistic guarantee for the clustering for the point sample setting -- we provide bounds on the minimum number of sample points required to state with high probability which points belong to the same strata. Finally, we give an explicit algorithm for the clustering.", "A framework for the regularized and robust estimation of non-uniform dimensionality and density in high dimensional noisy data is introduced in this work. This leads to learning stratifications, that is, mixture of manifolds representing different characteristics and complexities in the data set. The basic idea relies on modeling the high dimensional sample points as a process of translated Poisson mixtures, with regularizing restrictions, leading to a model which includes the presence of noise. The translated Poisson distribution is useful to model a noisy counting process, and it is derived from the noise-induced translation of a regular Poisson distribution. By maximizing the log-likelihood of the process counting the points falling into a local ball, we estimate the local dimension and density. 
We show that the sequence of all possible local countings in a point cloud formed by samples of a stratification can be modeled by a mixture of different translated Poisson distributions, thus allowing the presence of mixed dimensionality and densities in the same data set. With this statistical model, the parameters which best describe the data, estimated via expectation maximization, divide the points in different classes according to both dimensionality and density, together with an estimation of these quantities for each class. Theoretical asymptotic results for the model are presented as well. The presentation of the theoretical framework is complemented with artificial and real examples showing the importance of regularized stratification learning in high dimensional data analysis in general and computer vision and image analysis in particular.", "We introduce a Bayesian model for inferring mixtures of subspaces of different dimensions. The key challenge in such a mixture model is specification of prior distributions over subspaces of different dimensions. We address this challenge by embedding subspaces or Grassmann manifolds into a sphere of relatively low dimension and specifying priors on the sphere. We provide an efficient sampling algorithm for the posterior distribution of the model parameters. We illustrate that a simple extension of our mixture of subspaces model can be applied to topic modeling. We also prove posterior consistency for the mixture of subspaces model. The utility of our approach is demonstrated with applications to real and simulated data." ] }
1602.06245
2278918618
We propose a flexible and multi-scale method for organizing, visualizing, and understanding point cloud datasets sampled from or near stratified spaces. The first part of the algorithm produces a cover tree for a dataset using an adaptive threshold that is based on multi-scale local principal component analysis. The resulting cover tree nodes reflect the local geometry of the space and are organized via a scaffolding graph. In the second part of the algorithm, the goals are to uncover the strata that make up the underlying stratified space using a local dimension estimation procedure and topological data analysis, as well as to ultimately visualize the results in a simplified spine graph. We demonstrate our technique on several synthetic examples and then use it to visualize song structure in musical audio data.
The cover tree was first introduced in @cite_14 as a way of performing approximate nearest neighbors in @math time for a point cloud with @math points in arbitrary metric spaces of a fixed dimension (though constants depend exponentially on the dimension). More recent work has shown that cover trees can be used for analysis of high dimensional point clouds with a low dimensional intrinsic structure @cite_16 . Similarly to @cite_16 , we use the cover tree to decompose the point cloud into geometrically simpler parts, but we encode more explicitly the stratified space structure in our representation, and we also autotune which nodes at which levels represent each part, as explained in .
{ "cite_N": [ "@cite_14", "@cite_16" ], "mid": [ "2133296809", "2229640315" ], "abstract": [ "We present a tree data structure for fast nearest neighbor operations in general n-point metric spaces (where the data set consists of n points). The data structure requires O(n) space regardless of the metric's structure yet maintains all performance properties of a navigating net (Krauthgamer & Lee, 2004b). If the point set has a bounded expansion constant c, which is a measure of the intrinsic dimensionality, as defined in (Karger & Ruhl, 2002), the cover tree data structure can be constructed in O (c6n log n) time. Furthermore, nearest neighbor queries require time only logarithmic in n, in particular O (c12 log n) time. Our experimental results show speedups over the brute force search varying between one and several orders of magnitude on natural machine learning datasets.", "Large data sets arise in a wide variety of applications and are often modeled as samples from a probability distribution in high-dimensional space. It is sometimes assumed that the support of such probability distribution is well approximated by a set of low intrinsic dimension, perhaps even a low-dimensional smooth manifold. Samples are often corrupted by high-dimensional noise. We are interested in developing tools for studying the geometry of such high-dimensional data sets. In particular, we present here a multiscale transform that maps high-dimensional data as above to a set of multiscale coefficients that are compressible sparse under suitable assumptions on the data. We think of this as a geometric counterpart to multi-resolution analysis in wavelet theory: whereas wavelets map a signal (typically low dimensional, such as a one-dimensional time series or a two-dimensional image) to a set of multiscale coefficients, the geometric wavelets discussed here map points in a high-dimensional point cloud to a multiscale set of coefficients. 
The geometric multi-resolution analysis (GMRA) we construct depends on the support of the probability distribution, and in this sense it fits with the paradigm of dictionary learning or data-adaptive representations, albeit the type of representation we construct is in fact mildly nonlinear, as opposed to standard linear representations. Finally, we apply the transform to a set of synthetic and real-world data sets." ] }
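The cover tree's nesting and covering invariants can be mimicked with nested greedy nets at halving radii; a nearest-neighbour query then descends the levels, pruning centres that cannot be (or cover) the answer. This is a didactic O(n)-per-level sketch of the idea, not the O(log n) structure of @cite_14:

```python
import numpy as np

def cover_levels(points, n_levels):
    """Nested greedy nets at halving radii: every point is within radius
    r_i of some centre at level i (covering), and coarser centres persist
    at finer levels (nesting) -- the cover tree's two key invariants."""
    pts = [np.asarray(p, dtype=float) for p in points]
    radius = max(np.linalg.norm(p - pts[0]) for p in pts) or 1.0
    radii, levels = [radius], [pts[:1]]          # coarsest level: one root
    for _ in range(n_levels - 1):
        radius /= 2.0
        centers = list(levels[-1])               # nesting
        for p in pts:
            if min(np.linalg.norm(p - c) for c in centers) > radius:
                centers.append(p)                # restore covering
        radii.append(radius)
        levels.append(centers)
    return radii, levels

def nn_query(radii, levels, q):
    """Descend the levels, keeping only centres close enough that their
    covered ball could still contain the nearest neighbour of q."""
    cand = levels[0]
    for r, centers in zip(radii[1:], levels[1:]):
        best = min(np.linalg.norm(q - c) for c in cand)
        cand = [c for c in centers if np.linalg.norm(q - c) <= best + 2 * r]
    return min(cand, key=lambda c: np.linalg.norm(q - c))
```

With enough levels the finest net contains every point, so the final minimum over the surviving candidates is the exact nearest neighbour; the pruning bound `best + 2r` is what keeps the candidate sets small on well-behaved data.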
1602.06652
2184670865
In this thesis, we propose an artificial auditory system that gives a robot the ability to locate and track sounds, as well as to separate simultaneous sound sources and to recognise simultaneous speech. We demonstrate that it is possible to implement these capabilities using an array of microphones, without trying to imitate the human auditory system. The sound source localisation and tracking algorithm uses a steered beamformer to locate sources, which are then tracked using a multi-source particle filter. Separation of simultaneous sound sources is achieved using a variant of the Geometric Source Separation (GSS) algorithm, combined with a multi-source post-filter that further reduces noise, interference and reverberation. Speech recognition is performed on separated sources, either directly or by using Missing Feature Theory (MFT) to estimate the reliability of the speech features. The results obtained show that it is possible to track up to four simultaneous sound sources, even in noisy and reverberant environments. Real-time control of the robot following a sound source is also demonstrated. The sound source separation approach we propose is able to achieve a 13.7 dB improvement in signal-to-noise ratio compared to a single microphone when three speakers are present. In these conditions, the system demonstrates more than 80% accuracy on digit recognition, higher than most human listeners could obtain in our small case study when recognising only one of these sources. All these new capabilities will allow humans to interact more naturally with a mobile robot in real life settings.
More recently, approaches using more than two microphones have been developed. One approach uses a circular array of eight microphones to locate sound sources using the MUSIC algorithm @cite_101 , a signal subspace approach. In our previous work, also using eight microphones @cite_17 , we presented a method for localising a single sound source in which time delay of arrival (TDOA) estimation was separated from direction of arrival (DOA) estimation. It was found that a system combining TDOA and DOA estimation in a single step improves the system's robustness, while allowing localisation (but not tracking) of simultaneous sources @cite_100 . Kagami @cite_19 reports a system that uses 128 microphones for 2D localisation of sound sources. Similarly, Wang @cite_5 use 24 fixed microphones to track a moving robot in a room. However, it would not be practical to include such a large number of microphones on a mobile robot.
{ "cite_N": [ "@cite_101", "@cite_19", "@cite_5", "@cite_100", "@cite_17" ], "mid": [ "", "2156514726", "2012211082", "2139129402", "2104422351" ], "abstract": [ "", "This work describes two circular microphone arrays and a square microphone array which can be used for sound localization and sound capture. Sound capture by microphone array is achieved by sum and delay beam former (SDBF). A dedicated PCI 128-channel simultaneous input analog-to-digital (AD) board is developed for a 128 ch microphone array with a maximum sampling rate of 22.7 μs/sample. Simulation of sound pressure distribution of 24 and 128 ch circular microphone array and 128 ch square microphone array are shown. Then a 24 ch circular microphone array and a 128 ch square microphone array have been developed. The 24 ch circular microphone array can capture sound from an arbitrary direction. The 128 ch square microphone array can capture sound from a specific point. Both systems are evaluated by using frequency components of the sound. The circular type system can be used on a mobile robot including humanoid robot and square type can be extended towards room coverage type application.", "This paper presents a method for the navigation of a mobile robot using sound localization in the context of a robotic lab tour guide. Sound localization, which is achieved using an array of 24 microphones distributed on two walls of the lab, is performed whenever the robot speaks as part of the tour. The SRP-PHAT sound localization algorithm is used to estimate the current location of the robot using approximately 2 s of recorded signal. Navigation is achieved using several stops during which the estimated location of the robot is used to make course adjustments.
Experiments using the acoustic robot navigation system illustrate the accuracy of the proposed technique, which resulted in an average localization error of about 7 cm close to the array and 30 cm far away from the array.", "Mobile robots in real-life settings would benefit from being able to localize sound sources. Such a capability can nicely complement vision to help localize a person or an interesting event in the environment, and also to provide enhanced processing for other capabilities such as speech recognition. We present a robust sound source localization method in three-dimensional space using an array of 8 microphones. The method is based on a frequency-domain implementation of a steered beamformer along with a probabilistic post-processor. Results show that a mobile robot can localize in real time multiple moving sources of different types over a range of 5 meters with a response time of 200 ms.", "The hearing sense on a mobile robot is important because it is omnidirectional and it does not require direct line-of-sight with the sound source. Such capabilities can nicely complement vision to help localize a person or an interesting event in the environment. To do so the robot auditory system must be able to work in noisy, unknown and diverse environmental conditions. In this paper, we present a robust sound source localization method in three-dimensional space using an array of 8 microphones. The method is based on time delay of arrival estimation. Results show that a mobile robot can localize in real time different types of sound sources over a range of 3 meters and with a precision of 3°." ] }
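The TDOA estimation step mentioned in these abstracts is, for a single microphone pair, the lag of the cross-correlation peak between the two channels. A minimal sample-domain sketch (converting the delay to a direction of arrival additionally needs the microphone spacing and the speed of sound, which are omitted here):

```python
import numpy as np

def tdoa_samples(x, y):
    """Estimate the delay (in samples) of signal y relative to signal x as
    the peak of their full cross-correlation -- the classic first step of
    TDOA-based localisation with a microphone pair."""
    corr = np.correlate(y, x, mode="full")
    return int(np.argmax(corr)) - (len(x) - 1)
```

Real systems typically use a generalised cross-correlation with phase-transform weighting (GCC-PHAT) to sharpen the peak under reverberation; the plain correlation above is the unweighted special case.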
1602.06652
2184670865
In this thesis, we propose an artificial auditory system that gives a robot the ability to locate and track sounds, as well as to separate simultaneous sound sources and to recognise simultaneous speech. We demonstrate that it is possible to implement these capabilities using an array of microphones, without trying to imitate the human auditory system. The sound source localisation and tracking algorithm uses a steered beamformer to locate sources, which are then tracked using a multi-source particle filter. Separation of simultaneous sound sources is achieved using a variant of the Geometric Source Separation (GSS) algorithm, combined with a multi-source post-filter that further reduces noise, interference and reverberation. Speech recognition is performed on separated sources, either directly or by using Missing Feature Theory (MFT) to estimate the reliability of the speech features. The results obtained show that it is possible to track up to four simultaneous sound sources, even in noisy and reverberant environments. Real-time control of the robot following a sound source is also demonstrated. The sound source separation approach we propose is able to achieve a 13.7 dB improvement in signal-to-noise ratio compared to a single microphone when three speakers are present. In these conditions, the system demonstrates more than 80% accuracy on digit recognition, higher than most human listeners could obtain in our small case study when recognising only one of these sources. All these new capabilities will allow humans to interact more naturally with a mobile robot in real life settings.
Most of the work so far on sound source localisation does not address the problem of tracking moving sources. It is proposed in @cite_57 to use a Kalman filter for tracking a moving source. However, the proposed method assumes that only a single source is present. In recent years, particle filtering @cite_58 (a sequential Monte Carlo method) has become increasingly popular for solving object tracking problems. Ward @cite_60 @cite_62 and Vermaak @cite_29 use this technique for tracking single sound sources. Asoh @cite_66 even suggested using this technique to fuse audio and video data for tracking speakers. But again, the technique is limited to a single source due to the problem of associating the localisation observations with each of the sources being tracked. We refer to this as the source-observation assignment problem. Some attempts are made at defining multi-modal particle filters in @cite_113 , and the use of particle filtering for tracking multiple targets is demonstrated in @cite_85 @cite_70 @cite_86 . But so far, the technique has not been applied to sound source tracking. Our work demonstrates that it is possible to track multiple sound sources using particle filters by solving the source-observation assignment problem.
{ "cite_N": [ "@cite_62", "@cite_60", "@cite_70", "@cite_29", "@cite_85", "@cite_113", "@cite_57", "@cite_86", "@cite_58", "@cite_66" ], "mid": [ "2107493093", "2088804976", "", "", "1552529795", "2125806710", "2137192709", "", "2160337655", "" ], "abstract": [ "Traditional acoustic source localization algorithms attempt to find the current location of the acoustic source using data collected at an array of sensors at the current time only. In the presence of strong multipath, these traditional algorithms often erroneously locate a multipath reflection rather than the true source location. A recently proposed approach that appears promising in overcoming this drawback of traditional algorithms, is a state-space approach using particle filtering. In this paper we formulate a general framework for tracking an acoustic source using particle filters. We discuss four specific algorithms that fit within this framework, and demonstrate their performance using both simulated reverberant data and data recorded in a moderately reverberant office room (with a measured reverberation time of 0.39 s). The results indicate that the proposed family of algorithms are able to accurately track a moving source in a moderately reverberant room.", "Traditional acoustic source localization uses a two-step procedure requiring intermediate time-delay estimates from pairs of microphones. An alternative single-step approach is proposed in this paper in which particle filtering is used to estimate the source location through steered beamforming. This scheme is especially attractive in speech enhancement applications, where the localization estimates are typically used to steer a beamformer at a later stage. 
Simulation results show that the algorithm is robust to reverberation, and is able to accurately follow the source trajectory.", "", "", "Tracking multiple targets is a challenging problem, especially when the targets are “identical”, in the sense that the same model is used to describe each target. In this case, simply instantiating several independent 1-body trackers is not an adequate solution, because the independent trackers tend to coalesce onto the best-fitting target. This paper presents an observation density for tracking which solves this problem by exhibiting a probabilistic exclusion principle. Exclusion arises naturally from a systematic derivation of the observation density, without relying on heuristics. Another important contribution of the paper is the presentation of partitioned sampling, a new sampling method for multiple object tracking. Partitioned sampling avoids the high computational load associated with fully coupled trackers, while retaining the desirable properties of coupling.", "In recent years particle filters have become a tremendously popular tool to perform tracking for nonlinear and or nonGaussian models. This is due to their simplicity, generality and success over a wide range of challenging applications. Particle filters, and Monte Carlo methods in general, are however poor at consistently maintaining the multimodality of the target distributions that may arise due to ambiguity or the presence of multiple objects. To address this shortcoming this paper proposes to model the target distribution as a nonparametric mixture model, and presents the general tracking recursion in this case. It is shown how a Monte Carlo implementation of the general recursion leads to a mixture of particle filters that interact only in the computation of the mixture weights, thus leading to an efficient numerical algorithm, where all the results pertaining to standard particle filters apply. 
The ability of the new method to maintain posterior multimodality is illustrated on a synthetic example and a real world tracking problem involving the tracking of football players in a video sequence.", "A system for three-dimensional passive acoustic speaker localization and tracking using a microphone array is presented and evaluated. Initial speaker position estimates are provided by a time-delay-based localization algorithm. These raw estimates are spatially smoothed by a multiple model adaptive estimator consisting of three extended Kalman filters running in parallel. The performance of the proposed system is evaluated for real data in a common office environment. The reference trajectory of the moving speaker is delivered by visually tracking a color marker on the speaker's forehead by a stereo-camera system. The proposed acoustic source tracker shows robustness and accuracy in a variety of different scenarios.", "", "Increasingly, for many application areas, it is becoming important to include elements of nonlinearity and non-Gaussianity in order to model accurately the underlying dynamics of a physical system. Moreover, it is typically crucial to process data on-line as it arrives, both from the point of view of storage costs as well as for rapid adaptation to changing signal characteristics. In this paper, we review both optimal and suboptimal Bayesian algorithms for nonlinear non-Gaussian tracking problems, with a focus on particle filters. Particle filters are sequential Monte Carlo methods based on point mass (or \"particle\") representations of probability densities, which can be applied to any state-space model and which generalize the traditional Kalman filtering methods. Several variants of the particle filter such as SIR, ASIR, and RPF are introduced within a generic framework of the sequential importance sampling (SIS) algorithm. These are discussed and compared with the standard EKF through an illustrative example.", "" ] }
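A bootstrap (SIR) particle filter of the kind cited above can be sketched for a single source moving in one dimension. The motion and observation models here (random walk, Gaussian noise) are illustrative assumptions, and the multi-source source-observation assignment problem discussed in the text is deliberately left out:

```python
import numpy as np

def particle_filter_track(observations, n_particles=500, proc_std=0.5,
                          obs_std=1.0, seed=0):
    """Minimal bootstrap (SIR) particle filter for one source in 1-D:
    random-walk motion model, Gaussian observation likelihood, and
    multinomial resampling at every step. Returns the posterior-mean track."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(observations[0], obs_std, size=n_particles)
    track = []
    for z in observations:
        particles += rng.normal(0.0, proc_std, size=n_particles)  # predict
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)       # weight
        w /= w.sum()
        track.append(float(np.sum(w * particles)))                # estimate
        idx = rng.choice(n_particles, size=n_particles, p=w)      # resample
        particles = particles[idx]
    return np.array(track)
```

On a slowly drifting source with unit-variance observation noise, the filtered track has noticeably lower error than the raw observations; extending this to multiple sources is exactly where the assignment problem arises.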
1602.06652
2184670865
In this thesis, we propose an artificial auditory system that gives a robot the ability to locate and track sounds, as well as to separate simultaneous sound sources and to recognise simultaneous speech. We demonstrate that it is possible to implement these capabilities using an array of microphones, without trying to imitate the human auditory system. The sound source localisation and tracking algorithm uses a steered beamformer to locate sources, which are then tracked using a multi-source particle filter. Separation of simultaneous sound sources is achieved using a variant of the Geometric Source Separation (GSS) algorithm, combined with a multi-source post-filter that further reduces noise, interference and reverberation. Speech recognition is performed on separated sources, either directly or by using Missing Feature Theory (MFT) to estimate the reliability of the speech features. The results obtained show that it is possible to track up to four simultaneous sound sources, even in noisy and reverberant environments. Real-time control of the robot following a sound source is also demonstrated. The sound source separation approach we propose is able to achieve a 13.7 dB improvement in signal-to-noise ratio compared to a single microphone when three speakers are present. In these conditions, the system demonstrates more than 80% accuracy on digit recognition, higher than most human listeners could obtain in our small case study when recognising only one of these sources. All these new capabilities will allow humans to interact more naturally with a mobile robot in real life settings.
There are several approaches to sound source separation from multiple microphones. Most fall into either the Blind Source Separation (BSS) or beamforming categories. Blind Source Separation and Independent Component Analysis (ICA) are ways to recover the original (unmixed) sources with no prior knowledge, other than the fact that all sources are statistically independent. Several criteria exist for expressing the independence of the sources, either based on information theory (e.g., Kullback-Leibler divergence) @cite_55 or based on statistics (e.g., maximum likelihood) @cite_54 . Blind source separation has been applied to audio @cite_97 @cite_13 , often in the form of frequency-domain ICA @cite_25 @cite_87 @cite_94 . Recently, Araki @cite_38 have applied ICA to the separation of three sources using only two microphones.
{ "cite_N": [ "@cite_38", "@cite_87", "@cite_97", "@cite_55", "@cite_54", "@cite_13", "@cite_25", "@cite_94" ], "mid": [ "2104969945", "1484687243", "1599542129", "2285257517", "2119647652", "335504665", "2395283606", "16236365" ], "abstract": [ "In this paper, we propose a method for separating speech signals when there are more signals than sensors. Several methods have already been proposed for solving the underdetermined problem, and some of these utilize the sparseness of speech signals. These methods employ binary masks to extract the signals, and therefore, their extracted signals contain loud musical noise. To overcome this problem, we propose combining a sparseness approach and independent component analysis (ICA). First, using sparseness, we estimate the time points when only one source is active. Then, we remove this single source from the observations and apply ICA to the remaining mixtures. Experimental results show that our proposed sparseness and ICA (SPICA) method can separate signals with little distortion even in reverberant conditions of T_R = 130 and 200 ms.", "In this paper, we investigate the performance of an unmixing system obtained by frequency domain Blind Source Separation (BSS) based on Independent Component Analysis (ICA). Since ICA is based on statistics, i.e., it only attempts to make outputs independent, it is not easy to predict what is going on in a BSS system. We therefore investigate the detailed components in the processed signals of a whole BSS system by measuring four impulse responses of the system. In particular, we focus on the direct sound and reverberation in the target and jammer signals.
As a result, we reveal that the direct sound and reverberation of the jammer can be reduced compared to a null beamformer (NBF), while the reverberation of the target cannot be reduced.", "EUROSPEECH2001: the 7th European Conference on Speech Communication and Technology, September 3-7, 2001, Aalborg, Denmark.", "", "Blind signal separation (BSS) and independent component analysis (ICA) are emerging techniques of array processing and data analysis that aim to recover unobserved signals or \"sources\" from observed mixtures (typically, the output of an array of sensors), exploiting only the assumption of mutual independence between the signals. The weakness of the assumptions makes it a powerful approach, but it requires us to venture beyond familiar second order statistics, The objectives of this paper are to review some of the approaches that have been developed to address this problem, to illustrate how they stem from basic principles, and to show how they relate to each other.", "In this paper we present a novel solution to the Overdetermined Blind Speech Separation (OBSS) problem for improving speech recognition accuracy of N simultaneous speakers in real room environments using M, (M>N) microphones. The proposed OBSS system uses basic NxN Blind Speech Separation networks that process in parallel all different combinations of the available mixture signals in the frequency domain, resulting to lower computational complexity and faster convergence. Extensive experiments using an array of two to ten microphones and two simultaneous speakers in a simulated real room, showed that when the number of the microphones increases beyond two, the separation performance is improved and the phoneme recognition accuracy of an HMM based decoder increases drastically (more than 6 ). 
Therefore, the introduction of more microphones than speakers is justified in order to improve speech recognition accuracy in multi simultaneous speaker environments.", "", "We present a real-time version of the DUET algorithm for the blind separation of any number of sources using only two mixtures. The method applies when sources are Wdisjoint orthogonal, that is, when the supports of the windowed Fourier transform of any two signals in the mixture are disjoint sets, an assumption which is justified in the Appendix. The online algorithm is a Maximum Likelihood (ML) based gradient search method that is used to track the mixing parameters. The estimates of the mixing parameters are then used to partition the time-frequency representation of the mixtures to recover the original sources. The technique is valid even in the case when the number of sources is larger than the number of mixtures. The method was tested on mixtures generated from different voices and noises recorded from varying angles in both anechoic and echoic rooms. In total, over 1500 mixtures were tested. The average SNR gain of the demixing was 15 dB for anechoic room mixtures and 5 dB for echoic office mixtures. The algorithm runs 5 times faster than real time on a 750MHz laptop computer. Sample sound files can be found here:" ] }
1602.06652
2184670865
In this thesis, we propose an artificial auditory system that gives a robot the ability to locate and track sounds, as well as to separate simultaneous sound sources and recognise simultaneous speech. We demonstrate that it is possible to implement these capabilities using an array of microphones, without trying to imitate the human auditory system. The sound source localisation and tracking algorithm uses a steered beamformer to locate sources, which are then tracked using a multi-source particle filter. Separation of simultaneous sound sources is achieved using a variant of the Geometric Source Separation (GSS) algorithm, combined with a multi-source post-filter that further reduces noise, interference and reverberation. Speech recognition is performed on separated sources, either directly or by using Missing Feature Theory (MFT) to estimate the reliability of the speech features. The results obtained show that it is possible to track up to four simultaneous sound sources, even in noisy and reverberant environments. Real-time control of the robot following a sound source is also demonstrated. The sound source separation approach we propose is able to achieve a 13.7 dB improvement in signal-to-noise ratio compared to a single microphone when three speakers are present. In these conditions, the system demonstrates more than 80% accuracy on digit recognition, higher than most human listeners could obtain in our small case study when recognising only one of these sources. All these new capabilities will allow humans to interact more naturally with a mobile robot in real-life settings.
The beamforming technique most widely used today is the Generalised Sidelobe Canceller (GSC), originally proposed by Griffiths and Jim @cite_72 . The GSC algorithm uses a fixed beamformer (delay-and-sum) to produce an initial estimate of the source of interest. A blocking matrix is also used to produce noise reference signals (that do not contain the source of interest) that can be used by a multiple-input canceller to further reduce noise at the output of the fixed beamformer. The GSC algorithm can be implemented in the frequency domain @cite_98 , where its components are matrices, or in the time domain @cite_0 , where the components are adaptive filters.
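The GSC structure just described can be sketched in a few lines of NumPy. This is an illustrative time-domain toy, not the implementation from the cited works: it assumes the microphone signals have already been delay-steered so the target arrives in phase, uses pairwise channel differences as the Griffiths-Jim blocking matrix, and applies a plain LMS update whose step size `mu` is an arbitrary choice.

```python
import numpy as np

def gsc_step(x, w_a, mu=0.01):
    """One snapshot of a time-domain Griffiths-Jim GSC.

    x   : (M,) pre-steered microphone snapshot (target in phase on all channels)
    w_a : (M-1,) adaptive canceller weights, updated in place by LMS
    """
    d = x.mean()          # fixed beamformer: delay-and-sum on steered signals
    b = x[:-1] - x[1:]    # blocking matrix: pairwise differences cancel the target
    y = d - w_a @ b       # multiple-input canceller subtracts the noise estimate
    w_a += mu * y * b     # LMS step toward minimum output power
    return y
```

Because the blocking matrix outputs are zero for a perfectly in-phase target, the adaptive path only ever sees noise, which is the property that prevents target cancellation.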
{ "cite_N": [ "@cite_98", "@cite_72", "@cite_0" ], "mid": [ "2147665979", "2101609516", "2099984896" ], "abstract": [ "We consider a sensor array located in an enclosure, where arbitrary transfer functions (TFs) relate the source signal and the sensors. The array is used for enhancing a signal contaminated by interference. Constrained minimum power adaptive beamforming, which has been suggested by Frost (1972) and, in particular, the generalized sidelobe canceler (GSC) version, which has been developed by Griffiths and Jim (1982), are the most widely used beamforming techniques. These methods rely on the assumption that the received signals are simple delayed versions of the source signal. The good interference suppression attained under this assumption is severely impaired in complicated acoustic environments, where arbitrary TFs may be encountered. In this paper, we consider the arbitrary TF case. We propose a GSC solution, which is adapted to the general TF case. We derive a suboptimal algorithm that can be implemented by estimating the TFs ratios, instead of estimating the TFs. The TF ratios are estimated by exploiting the nonstationarity characteristics of the desired signal. The algorithm is applied to the problem of speech enhancement in a reverberating room. The discussion is supported by an experimental study using speech and noise signals recorded in an actual room acoustics environment.", "A beamforming structure is presented which can be used to implement a wide variety of linearly constrained adaptive array processors. The structure is designed for use with arrays which have been time-delay steered such that the desired signal of interest appears approximately in phase at the steered outputs. One major advantage of the new structure is the constraints can be implemented using simple hardware differencing amplifiers. The structure is shown to incorporate algorithms which have been suggested previously for use in adaptive beamforming as well as to include new approaches. It is also particularly useful for studying the effects of steering errors on array performance. Numerical examples illustrating the performance of the structure are presented.", "This paper proposes a new robust adaptive beamformer applicable to microphone arrays. The proposed beamformer is a generalized sidelobe canceller (GSC) with a new adaptive blocking matrix using coefficient-constrained adaptive filters (CCAFs) and a multiple-input canceller with norm-constrained adaptive filters (NCAFs). The CCAFs minimize leakage of the target-signal into the interference path of the GSC. Each coefficient of the CCAFs is constrained to avoid mistracking. The input signal to all the CCAFs is the output of a fixed beamformer. In the multiple-input canceller, the NCAFs prevent undesirable target-signal cancellation when the target-signal minimization at the blocking matrix is incomplete. The proposed beamformer is shown to be robust to target-direction errors as large as 200 with almost no degradation in interference-reduction performance, and it can be implemented with several microphones. The maximum allowable target-direction error can be specified by the user. Simulated anechoic experiments demonstrate that the proposed beamformer cancels interference by over 30 dB. Simulation with real acoustic data captured in a room with 0.3-s reverberation time shows that the noise is suppressed by 19 dB. In subjective evaluation, the proposed beamformer obtains 3.8 on a five-point mean opinion score scale, which is 1.0 point higher than the conventional robust beamformer." ] }
Recently, a method has been proposed that combines the advantages of both the BSS and the beamforming approaches. The Geometric Source Separation (GSS) approach proposed by Parra and Alvino @cite_61 assumes that all sources are independent, while incorporating information about source positions through a geometric constraint. Unlike most variants of BSS, the GSS algorithm uses only second-order statistics as its independence criterion, making it simpler and more robust. GSS has been extended to take into account the Head-Related Transfer Function (HRTF) in order to improve separation @cite_109 . However, we choose not to apply this technique because of the complexity involved and because it would make the system more difficult to adapt to different robots (i.e., different microphone placements).
{ "cite_N": [ "@cite_61", "@cite_109" ], "mid": [ "2134910060", "1514122325" ], "abstract": [ "Convolutive blind source separation and adaptive beamforming have a similar goal-extracting a source of interest (or multiple sources) while reducing undesired interferences. A benefit of source separation is that it overcomes the conventional cross-talk or leakage problem of adaptive beamforming. Beamforming on the other hand exploits geometric information which is often readily available but not utilized in blind algorithms. We propose to join these benefits by combining cross-power minimization of second-order source separation with geometric linear constraints used in adaptive beamforming. We find that the geometric constraints resolve some of the ambiguities inherent in the independence criterion such as frequency permutations and degrees of freedom provided by additional sensors. We demonstrate the new method in performance comparisons for actual room recordings of two and three simultaneous acoustic sources.", "An online blind source separation algorithm which is a special case of the geometric algorithm by Parra and Fancourt [1] has been implemented for the purpose of separating sounds recorded at microphones placed at each side of the head. By using the assumption that the position of the two sounds are known, the source separation algorithm has been geometrically constrained. Since the separation takes place in a non free-field, a head-related transfer function (HRTF) is used to simulate the response between microphones placed at the two ears. The use of a HRTF instead of assuming free-field improves the separation with approximately 1 dB compared to when free-field is assumed. This indicates that the permutation ambiguity is solved more accurate compared to when free-field is assumed." ] }
All the methods listed above can be called Linear Source Separation (LSS) methods, in that once the demixing parameters are fixed, each output is a Linear Time-Invariant (LTI) transformation of the microphone inputs. In real-life environments with background noise, reverberation and imperfect microphones, it is not possible to achieve perfect separation using LSS methods, so further noise reduction is required. Several techniques have been developed to remove background noise, including spectral subtraction @cite_50 and optimal spectral amplitude estimation @cite_53 @cite_93 @cite_26 . Techniques have also been developed specifically to reduce noise at the output of LSS algorithms, generally beamformers. Most of these post-filtering techniques address the reduction of stationary background noise @cite_95 @cite_78 @cite_30 . Recently, a multi-channel post-filter that takes non-stationary interference into account was proposed by Cohen @cite_43 .
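As an illustration of the simplest of these noise reduction techniques, a minimal power spectral subtraction rule with a spectral floor can be written as follows. This is a textbook sketch, not any of the cited estimators; the floor parameter `beta` is an arbitrary choice used to limit musical noise.

```python
import numpy as np

def spectral_subtract(spec, noise_psd, beta=0.01):
    """Power spectral subtraction on one STFT frame.

    spec      : complex STFT frame of the noisy signal
    noise_psd : estimated noise power per frequency bin
    beta      : spectral floor fraction, limits musical noise
    """
    power = np.abs(spec) ** 2
    # Subtract the noise estimate but never go below a fraction of the input
    clean_power = np.maximum(power - noise_psd, beta * power)
    gain = np.sqrt(clean_power / np.maximum(power, 1e-12))
    return gain * spec  # apply a real gain, keep the noisy phase
```

Keeping the noisy phase is the standard simplification shared by spectral subtraction and the spectral amplitude estimators cited above.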
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_78", "@cite_53", "@cite_95", "@cite_43", "@cite_50", "@cite_93" ], "mid": [ "1994407899", "2126942983", "", "", "2099655464", "1998746641", "1888374534", "2121973264" ], "abstract": [ "This paper proposes a novel technique for estimating the signal power spectral density to be used in the transfer function of a microphone array post-filter. The technique is a modification of the existing Zelinski post-filter, which uses the auto- and cross-spectral densities of the array inputs to estimate the signal and noise spectral densities. The Zelinski technique, however, assumes zero cross-correlation between noise on different sensors. This assumption is inaccurate in real conditions, particularly at low frequencies and for arrays with closely spaced sensors. In this paper we replace this with an assumption of a theoretically diffuse noise field, which is more appropriate in a variety of realistic noise environments. In experiments using noise recordings from an office of computer workstations, the modified post-filter results in significant improvement in terms of objective speech quality measures and speech recognition performance.", "In this paper, we present an optimally-modified log-spectral amplitude (OM-LSA) speech estimator and a minima controlled recursive averaging (MCRA) noise estimation approach for robust speech enhancement. The spectral gain function, which minimizes the mean-square error of the log-spectra, is obtained as a weighted geometric mean of the hypothetical gains associated with the speech presence uncertainty. The noise estimate is given by averaging past spectral power values, using a smoothing parameter that is adjusted by the speech presence probability in subbands. We introduce two distinct speech presence probability functions, one for estimating the speech and one for controlling the adaptation of the noise spectrum. The former is based on the time–frequency distribution of the a priori signal-to-noise ratio. The latter is determined by the ratio between the local energy of the noisy signal and its minimum within a specified time window. Objective and subjective evaluation under various environmental conditions confirm the superiority of the OM-LSA and MCRA estimators. Excellent noise suppression is achieved, while retaining weak speech components and avoiding the musical residual noise phenomena. © 2001 Elsevier Science B.V. All rights reserved.", "", "", "The author presents a self-adapting noise reduction system which is based on a four-microphone array combined with an adaptive postfiltering scheme. Noise reduction is achieved by utilizing the directivity gain of the array and by reducing the residual noise through postfiltering of the received microphone signals. The postfiltering scheme depends on a Wiener filter estimating the desired speech signal and is computed from short-term measurements of the autocorrelation and cross-correlation functions of the microphone signals. The noise reduction system has been tested experimentally in a typical office room. The system produces an enhanced speech signal with barely noticeable residual noise if the input SNR is greater than 0 dB. The received noise power, measured in the absence of the speech signal, can be reduced by 28 dB.", "Microphone array post-filtering allows additional reduction of noise components at a beamformer output. Existing techniques are either restricted to classical delay-and-sum beamformers, or are based on single-channel speech enhancement algorithms that are inefficient at attenuating highly non-stationary noise components. In this paper, we introduce a microphone array post-filtering approach, applicable to adaptive beamformers, that differentiates non-stationary noise components from speech components. The ratio between the transient power at the beamformer primary output and the transient power at the reference noise signals is used for indicating whether such a transient is desired or interfering. Based on a Gaussian statistical model and combined with an appropriate spectral enhancement technique, a significantly reduced level of non-stationary noise is achieved without further distorting speech components. Experimental results demonstrate the effectiveness of the proposed method.", "Spectral subtraction has been shown to be an effective approach for reducing ambient acoustic noise in order to improve the intelligibility and quality of digitally compressed speech. This paper presents a set of implementation specifications to improve algorithm performance and minimize algorithm computation and memory requirements. It is shown spectral subtraction can be implemented in terms of a nonstationary, multiplicative, frequency domain filter which changes with the time varying spectral characteristics of the speech. Using this filter a speech activity detector is defined and used to allow the algorithm to adapt automatically to changing ambient noise environments. Also the bandwidth information of this filter is used to further reduce the residual narrowband noise components which remain after spectral subtraction.", "This paper focuses on the class of speech enhancement systems which capitalize on the major importance of the short-time spectral amplitude (STSA) of the speech signal in its perception. A system which utilizes a minimum mean-square error (MMSE) STSA estimator is proposed and then compared with other widely used systems which are based on Wiener filtering and the \"spectral subtraction\" algorithm. In this paper we derive the MMSE STSA estimator, based on modeling speech and noise spectral components as statistically independent Gaussian random variables. We analyze the performance of the proposed STSA estimator and compare it with a STSA estimator derived from the Wiener estimator. We also examine the MMSE STSA estimator under uncertainty of signal presence in the noisy observations. In constructing the enhanced signal, the MMSE STSA estimator is combined with the complex exponential of the noisy phase. It is shown here that the latter is the MMSE estimator of the complex exponential of the original phase, which does not affect the STSA estimation. The proposed approach results in a significant reduction of the noise, and provides enhanced speech with colorless residual noise. The complexity of the proposed algorithm is approximately that of other systems in the discussed class." ] }
The use of confidence islands in the time-frequency plane has been shown to be effective in various applications and can be implemented with different strategies. One of the most effective is the missing feature strategy. Cooke @cite_14 @cite_3 propose a probabilistic estimation of a mask in regions of the time-frequency plane where the information is not reliable. After masking, the parameters for speech recognition are generated and can be used in conventional speech recognition systems. Using this method, it is possible to obtain a significant increase in recognition rates without any modelling of the noise @cite_108 . In this scheme, the mask is essentially based on the signal-to-interference ratio (SIR), and a probabilistic estimate of the mask is used.
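A minimal version of such a mask, here a hard binary mask thresholded on the estimated local SNR rather than the probabilistic estimate the cited work uses, might look like the following (the threshold value and function name are illustrative):

```python
import numpy as np

def reliability_mask(signal_power, noise_power, threshold_db=0.0):
    """Binary missing-feature mask from a local SNR estimate.

    A time-frequency cell is marked reliable (1.0) when its estimated
    signal-to-interference ratio exceeds the threshold, else unreliable (0.0).
    """
    snr_db = 10.0 * np.log10(signal_power / np.maximum(noise_power, 1e-12))
    return (snr_db > threshold_db).astype(float)
```

A soft mask would replace the hard threshold with a sigmoid of `snr_db`, which is closer in spirit to the probabilistic masks discussed above.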
{ "cite_N": [ "@cite_14", "@cite_3", "@cite_108" ], "mid": [ "203616620", "2074354966", "289804034" ], "abstract": [ "", "Human speech perception is robust in the face of a wide variety of distortions, both experimentally applied and naturally occurring. In these conditions, state-of-the-art automatic speech recognition (ASR) technology fails. This paper describes an approach to robust ASR which acknowledges the fact that some spectro-temporal regions will be dominated by noise. For the purposes of recognition, these regions are treated as missing or unreliable. The primary advantage of this viewpoint is that it makes minimal assumptions about any noise background. Instead, reliable regions are identified, and subsequent decoding is based on this evidence. We introduce two approaches for dealing with unreliable evidence. The first – marginalisation – computes output probabilities on the basis of the reliable evidence only. The second – state-based data imputation – estimates values for the unreliable regions by conditioning on the reliable parts and the recognition hypothesis. A further source of information is the bounds on the energy of any constituent acoustic source in an additive mixture. This additional knowledge can be incorporated into the missing data framework. These approaches are applied to continuous-density hidden Markov model (HMM)-based speech recognisers and evaluated on the TIDigits corpus for several noise conditions. Two criteria which use simple noise estimates are employed as a means of identifying reliable regions. The first treats regions which are negative after spectral subtraction as unreliable. The second uses the estimated noise spectrum to derive local signal-to-noise ratios, which are then thresholded to identify reliable data points. Both marginalisation and state-based data imputation produce a substantial performance advantage over spectral subtraction alone. The use of energy bounds leads to a further increase in performance for both approaches. While marginalisation outperforms data imputation, the latter technique allows the technique to act as a preprocessor for conventional recognisers, or in speech-enhancement applications.", "In this study, techniques for classification with missing or unreliable data are applied to the problem of noise-robustness in Automatic Speech Recognition (ASR). The techniques described make minimal assumptions about any noise background and rely instead on what is known about clean speech. A system is evaluated using the Aurora 2 connected digit recognition task. Using models trained on clean speech we obtain a 65% relative improvement over the Aurora clean training baseline system, a performance comparable with the Aurora baseline for multicondition training." ] }
A missing-feature-theory-based ASR uses a Hidden Markov Model (HMM) in which the acoustic model probabilities are modified to take into account only the reliable features. Following the work of Cooke @cite_3 , the HMMs are trained on clean data. The density in each state @math is modelled using a mixture of @math Gaussians with diagonal covariance matrices.
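With diagonal covariances, the state density factorises over feature dimensions, which is what makes per-feature masking cheap. A sketch of the per-state observation likelihood (illustrative code, not the thesis implementation; parameter names are our own):

```python
import numpy as np

def state_likelihood(x, weights, means, variances):
    """p(x | state) for a Gaussian mixture with diagonal covariances.

    x         : (D,)   observed feature vector
    weights   : (M,)   mixture weights, summing to 1
    means     : (M, D) per-mixture means
    variances : (M, D) per-mixture diagonal variances
    """
    diff = x - means
    # log of the diagonal-Gaussian density, per mixture component
    log_norm = -0.5 * np.sum(np.log(2 * np.pi * variances), axis=1)
    log_exp = -0.5 * np.sum(diff ** 2 / variances, axis=1)
    return np.sum(weights * np.exp(log_norm + log_exp))
```

Because the density is a product of one-dimensional Gaussians, dropping a dimension later (for marginalisation) only removes one factor per component.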
{ "cite_N": [ "@cite_3" ], "mid": [ "2074354966" ], "abstract": [ "Human speech perception is robust in the face of a wide variety of distortions, both experimentally applied and naturally occurring. In these conditions, state-of-the-art automatic speech recognition (ASR) technology fails. This paper describes an approach to robust ASR which acknowledges the fact that some spectro-temporal regions will be dominated by noise. For the purposes of recognition, these regions are treated as missing or unreliable. The primary advantage of this viewpoint is that it makes minimal assumptions about any noise background. Instead, reliable regions are identified, and subsequent decoding is based on this evidence. We introduce two approaches for dealing with unreliable evidence. The first – marginalisation – computes output probabilities on the basis of the reliable evidence only. The second – state-based data imputation – estimates values for the unreliable regions by conditioning on the reliable parts and the recognition hypothesis. A further source of information is the bounds on the energy of any constituent acoustic source in an additive mixture. This additional knowledge can be incorporated into the missing data framework. These approaches are applied to continuous-density hidden Markov model (HMM)-based speech recognisers and evaluated on the TIDigits corpus for several noise conditions. Two criteria which use simple noise estimates are employed as a means of identifying reliable regions. The first treats regions which are negative after spectral subtraction as unreliable. The second uses the estimated noise spectrum to derive local signal-to-noise ratios, which are then thresholded to identify reliable data points. Both marginalisation and state-based data imputation produce a substantial performance advantage over spectral subtraction alone. The use of energy bounds leads to a further increase in performance for both approaches. While marginalisation outperforms data imputation, the latter technique allows the technique to act as a preprocessor for conventional recognisers, or in speech-enhancement applications." ] }
Cooke @cite_3 propose transforming Equation so that it takes into account only the reliable features from @math and discards the unreliable ones. This is equivalent to using the marginalisation probability density functions @math instead of @math , implemented simply as a binary mask. Consequently, only reliable features enter the probability calculation, and the recogniser avoids the undesirable effects of unreliable features.
{ "cite_N": [ "@cite_3" ], "mid": [ "2074354966" ], "abstract": [ "Abstract Human speech perception is robust in the face of a wide variety of distortions, both experimentally applied and naturally occurring. In these conditions, state-of-the-art automatic speech recognition (ASR) technology fails. This paper describes an approach to robust ASR which acknowledges the fact that some spectro-temporal regions will be dominated by noise. For the purposes of recognition, these regions are treated as missing or unreliable. The primary advantage of this viewpoint is that it makes minimal assumptions about any noise background. Instead, reliable regions are identified, and subsequent decoding is based on this evidence. We introduce two approaches for dealing with unreliable evidence. The first – marginalisation – computes output probabilities on the basis of the reliable evidence only. The second – state-based data imputation – estimates values for the unreliable regions by conditioning on the reliable parts and the recognition hypothesis. A further source of information is the bounds on the energy of any constituent acoustic source in an additive mixture. This additional knowledge can be incorporated into the missing data framework. These approaches are applied to continuous-density hidden Markov model (HMM)-based speech recognisers and evaluated on the TIDigits corpus for several noise conditions. Two criteria which use simple noise estimates are employed as a means of identifying reliable regions. The first treats regions which are negative after spectral subtraction as unreliable. The second uses the estimated noise spectrum to derive local signal-to-noise ratios, which are then thresholded to identify reliable data points. Both marginalisation and state-based data imputation produce a substantial performance advantage over spectral subtraction alone. The use of energy bounds leads to a further increase in performance for both approaches. 
While marginalisation outperforms data imputation, the latter allows the technique to act as a preprocessor for conventional recognisers, or in speech-enhancement applications." ] }
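The binary-mask marginalisation described in this record can be sketched as follows. This is a minimal illustration assuming diagonal-covariance Gaussian state densities; the function and variable names are hypothetical, not from the cited work:

```python
import numpy as np

def masked_log_likelihood(x, mean, var, mask):
    """Log-likelihood of feature vector x under a diagonal Gaussian,
    marginalising out the dimensions flagged unreliable (mask == 0),
    so only reliable spectral features contribute to the score."""
    reliable = np.asarray(mask).astype(bool)
    xr, mr, vr = x[reliable], mean[reliable], var[reliable]
    return -0.5 * np.sum(np.log(2 * np.pi * vr) + (xr - mr) ** 2 / vr)
```

Masking a corrupted dimension keeps it from dragging down the state likelihood, which is the effect the marginalisation approach exploits.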
1602.06652
2184670865
In this thesis, we propose an artificial auditory system that gives a robot the ability to locate and track sounds, as well as to separate simultaneous sound sources and to recognise simultaneous speech. We demonstrate that it is possible to implement these capabilities using an array of microphones, without trying to imitate the human auditory system. The sound source localisation and tracking algorithm uses a steered beamformer to locate sources, which are then tracked using a multi-source particle filter. Separation of simultaneous sound sources is achieved using a variant of the Geometric Source Separation (GSS) algorithm, combined with a multi-source post-filter that further reduces noise, interference and reverberation. Speech recognition is performed on separated sources, either directly or by using Missing Feature Theory (MFT) to estimate the reliability of the speech features. The results obtained show that it is possible to track up to four simultaneous sound sources, even in noisy and reverberant environments. Real-time control of the robot following a sound source is also demonstrated. The sound source separation approach we propose is able to achieve a 13.7 dB improvement in signal-to-noise ratio compared to a single microphone when three speakers are present. In these conditions, the system demonstrates more than 80% accuracy on digit recognition, higher than most human listeners could obtain in our small case study when recognising only one of these sources. All these new capabilities will allow humans to interact more naturally with a mobile robot in real life settings.
Conventional ASR usually uses Mel Frequency Cepstral Coefficients (MFCC) @cite_59 that capture the characteristics of speech. However, the missing feature mask is usually computed in the spectral domain and is not easy to convert to the cepstral domain. Automatic generation of the missing feature mask requires prior information about which spectral regions of a separated sound are distorted. This information can be obtained by our sound source separation and post-filter system @cite_75 @cite_89 . We use the post-filter gains to automatically generate the missing feature mask. Since we use a vector of 48 spectral features, the missing feature mask is a vector comprising the 48 corresponding values. Each value may be discrete (1 for reliable, 0 for unreliable) or continuous between 0 and 1.
{ "cite_N": [ "@cite_75", "@cite_59", "@cite_89" ], "mid": [ "2100820878", "2121551440", "2115839677" ], "abstract": [ "We propose a system that gives a mobile robot the ability to separate simultaneous sound sources. A microphone array is used along with a real-time dedicated implementation of geometric source separation and a post-filter that gives us a further reduction of interferences from other sources. We present results and comparisons for separation of multiple non-stationary speech sources combined with noise sources. The main advantage of our approach for mobile robots resides in the fact that both the frequency domain geometric source separation algorithm and the post-filter are able to adapt rapidly to new sources and non-stationarity. Separation results are presented for three simultaneous interfering speakers in the presence of noise. A reduction of log spectral distortion (LSD) and increase of signal-to-noise ratio (SNR) of approximately 10 dB and 14 dB are observed.", "A tutorial on signal processing in state-of-the-art speech recognition systems is presented, reviewing those techniques most commonly used. The four basic operations of signal modeling, i.e. spectral shaping, spectral analysis, parametric transformation, and statistical modeling, are discussed. Three important trends that have developed in the last five years in speech recognition are examined. First, heterogeneous parameter sets that mix absolute spectral information with dynamic, or time-derivative, spectral information, have become common. Second, similarity transform techniques, often used to normalize and decorrelate parameters in some computationally inexpensive way, have become popular. Third, the signal parameter estimation problem has merged with the speech recognition process so that more sophisticated statistical models of the signal's spectrum can be estimated in a closed-loop manner. The signal processing components of these algorithms are reviewed. 
", "Microphone array post-filters have demonstrated their ability to greatly reduce noise at the output of a beamformer. However, current techniques only consider a single source of interest, most of the time assuming stationary background noise. We propose a microphone array post-filter that enhances the signals produced by the separation of simultaneous sources using common source separation algorithms. Our method is based on a loudness-domain optimal spectral estimator and on the assumption that the noise can be described as the sum of a stationary component and of a transient component that is due to leakage between the channels of the initial source separation algorithm. The system is evaluated in the context of mobile robotics and is shown to produce better results than current post-filtering techniques, greatly reducing interference while causing little distortion to the signal of interest, even at very low SNR." ] }
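Deriving the 48-bin missing-feature mask from post-filter gains, as described in this record, can be sketched as follows. The threshold value and function name are illustrative assumptions, not taken from the cited system:

```python
import numpy as np

def make_mask(gains, threshold=0.5, soft=False):
    """Turn post-filter gains into a missing-feature mask.
    Assumption: a gain near 1 means the spectral bin passed the
    post-filter largely unattenuated (reliable); a gain near 0 means
    heavy suppression (unreliable). The 0.5 threshold is illustrative."""
    g = np.clip(np.asarray(gains, dtype=float), 0.0, 1.0)
    if soft:
        return g  # continuous mask with values in [0, 1]
    return (g >= threshold).astype(int)  # 1 = reliable, 0 = unreliable
```

The `soft` flag corresponds to the continuous variant mentioned above, where mask values between 0 and 1 weight each feature's contribution rather than including or excluding it outright.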
1602.06258
2286805788
We study the problem of searching for a hidden target in an environment that is modeled by an edge-weighted graph. A sequence of edges is chosen starting from a given root vertex such that each edge is adjacent to a previously chosen edge. This search paradigm, known as expanding search, was recently introduced by Alpern and Lidbetter [2013] for modeling problems such as searching for coal or minesweeping in which the cost of re-exploration is negligible. It can also be used to model a team of searchers successively splitting up in the search for a hidden adversary or explosive device, for example. We define the search ratio of an expanding search as the maximum over all vertices of the ratio of the time taken to reach the vertex and the shortest-path cost to it from the root. This can be interpreted as a measure of the multiplicative regret incurred in searching, and similar objectives have previously been studied in the context of conventional (pathwise) search. In this paper we address algorithmic and computational issues of minimizing the search ratio over all expanding searches, for a variety of search environments, including general graphs, trees and star-like graphs. Our main results focus on the problem of finding the randomized expanding search with minimum expected search ratio, which is equivalent to solving a zero-sum game between a Searcher and a Hider. We solve these problems for certain classes of graphs, and obtain constant-factor approximations for others.
Following the formalization of network search games by @cite_5 in the framework of pathwise search with un-normalized search time, the problem has received considerable attention, for example in @cite_34 , @cite_30 and @cite_32 . In the latter work the solution of the game was found for all weakly Eulerian networks. Recent variations on Gal's original game include a setting in which the Searcher chooses his own starting point in @cite_28 and @cite_15 , and the setting of @cite_22 and @cite_3 in which the Hider is restricted to choosing vertices that carry search costs.
{ "cite_N": [ "@cite_30", "@cite_22", "@cite_28", "@cite_32", "@cite_3", "@cite_5", "@cite_15", "@cite_34" ], "mid": [ "2041047721", "2089608722", "2019468657", "2068873264", "2011252431", "2013679020", "2028997157", "1985386569" ], "abstract": [ "We consider a search game for an immobile hider on one arc of the union of n graphs joined at one or two points. We evaluate a lower bound on the value of a strategy for the hider on this union. When we have identical graphs, we give the conditions under which the value of the strategy for the hider on this union is greater than or equal to n times the value of this strategy on one graph. We also solve search games on graphs, consisting of an odd number of arcs, each of length one, joining two points.", "The authors analyze two-person zero-sum search games of the following type. Play takes place on a network; the hider must choose a node and remain there while the searcher can choose the node at which he starts. To detect the hider, the searcher needs to conduct a search at the node chosen by the hider. Searching a node involves a cost which can vary from node to node. In addition to the search costs, the searcher also incurs travelling costs represented by distances on the edges. The costs are known to both players and the searcher wants to minimize his total costs. An upper bound for the value of the game is obtained and a lower bound when the network has all its edge lengths the same. Restricting attention to networks which have all their edges of the same length, the upper, and lower bounds are shown to coincide for some networks including Hamiltonian ones. Some results for the star and line networks are also given. © 2013 Wiley Periodicals, Inc. NETWORKS, 2013", "We analyze a zero-sum game between a blind unit-speed searcher and a stationary hider on a given network Q, where the payoff is the time for the searcher to reach the hider. 
In contrast to the standard game studied in the literature, we do not assume that the searcher has to start from a fixed point (known to the hider) but can choose his starting point. We show that for some networks, the optimal searcher and hider strategies have a simple structure. © 2008 Wiley Periodicals, Inc. NETWORKS, 2008", "Consider a search game with an immobile hider in a graph. A Chinese postman tour is a closed trajectory which visits all the points of the graph and has minimal length. We show that encircling the Chinese postman tour in a random direction is an optimal search strategy if and only if the graph is weakly Eulerian (i.e., it consists of several Eulerian curves connected in a tree-like structure).", "The authors analyse two-person zero-sum search games in which play takes place on a network and in discrete time. The hider first chooses a node and remains there for the duration of the game. The searcher then chooses a node and searches there. At each subsequent time instant the searcher moves from the node he occupies to an adjacent node and decides whether or not to search it. Play terminates when the searcher is at the node chosen by the hider and searches there. The searcher incurs costs in moving from one node to another and also when a node is searched. The searcher wants to minimize the costs of finding the hider and the hider wants to maximize them. The paper has two main strands. The first investigates these games on directed, not necessarily strongly connected, networks; previous work has tended to concentrate on undirected networks. The second complements previous work on undirected networks by obtaining results for the case when the search cost is the same for all nodes.", "We consider search games in which the searcher moves along a continuous trajectory in a set Q until he captures the hider, where Q is either a network or a two (or more) dimensional region. 
We distinguish between two types of games; in the first type, which is considered in the first part of the paper, the hider is immobile, while in the second type, which is considered in the rest of the paper, the hider is mobile. A complete solution is presented for some of the games, while for others only upper and lower bounds are given and some open problems associated with those games are presented for further research.", "In the (zero-sum) search game Γ(G, x) proposed by Isaacs, the Hider picks a point H in the network G and the Searcher picks a unit speed path S(t) in G with S(0) = x. The payoff to the maximizing Hider is the time T = T(S, H) = min t : S(t) = H required for the Searcher to find the Hider. An extensive theory of such games has been developed in the literature. This paper considers the related games Γ(G), where the requirement S(0) = x is dropped, and the Searcher is allowed to choose his starting point. This game has been solved by Dagan and Gal for the important case where G is a tree, and by Alpern for trees with Eulerian networks attached. Here, we extend those results to a wider class of networks, employing theory initiated by Reijnierse and Potters and completed by Gal, for the fixed-start games Γ(G, x). Our results may be more easily interpreted as determining the best worst-case method of searching a network from an arbitrary starting point.", "In this paper we discuss a type of search game with immobile hider on a graph. If the graph is weakly cyclic (i.e. no edge lies on two different cycles) we determine the value of the game and give an easy way to compute optimal strategies for the players. The value is half the length of a quasi-Euler path, i.e. a closed path of minimal length that visits all points of the network." ] }
1602.06258
2286805788
We study the problem of searching for a hidden target in an environment that is modeled by an edge-weighted graph. A sequence of edges is chosen starting from a given root vertex such that each edge is adjacent to a previously chosen edge. This search paradigm, known as expanding search, was recently introduced by Alpern and Lidbetter [2013] for modeling problems such as searching for coal or minesweeping in which the cost of re-exploration is negligible. It can also be used to model a team of searchers successively splitting up in the search for a hidden adversary or explosive device, for example. We define the search ratio of an expanding search as the maximum over all vertices of the ratio of the time taken to reach the vertex and the shortest-path cost to it from the root. This can be interpreted as a measure of the multiplicative regret incurred in searching, and similar objectives have previously been studied in the context of conventional (pathwise) search. In this paper we address algorithmic and computational issues of minimizing the search ratio over all expanding searches, for a variety of search environments, including general graphs, trees and star-like graphs. Our main results focus on the problem of finding the randomized expanding search with minimum expected search ratio, which is equivalent to solving a zero-sum game between a Searcher and a Hider. We solve these problems for certain classes of graphs, and obtain constant-factor approximations for others.
A specific search environment that has attracted considerable attention in the search literature is the star-like environment. More specifically, in the unbounded variant, the search domain consists of a set of infinite lines which have a common intersection point (the root of the Searcher); this problem is also known as ray searching. Ray searching is a natural generalization of the well-known linear search problem introduced independently by @cite_40 and @cite_13 (informally called the "cow-path problem"). Optimal strategies for linear search under the (deterministic) competitive ratio were first given by @cite_26 . @cite_29 gave optimal strategies for the generalized problem of ray searching, a result that was rediscovered later by computer scientists (see @cite_39 ). Other related work includes the study of randomization by @cite_6 and @cite_21 ; multi-Searcher strategies by @cite_36 ; searching with turn cost by @cite_14 ; the variant in which some probabilistic information on target placement is known by @cite_35 and @cite_8 ; and the related problem of designing hybrid algorithms by @cite_18 .
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_26", "@cite_14", "@cite_8", "@cite_36", "@cite_29", "@cite_21", "@cite_6", "@cite_39", "@cite_40", "@cite_13" ], "mid": [ "", "2027345534", "2020391710", "1995434882", "2183618110", "2051092461", "2095419417", "2085497871", "2100098951", "2019729116", "1990654078", "2077296948" ], "abstract": [ "", "We study on-line strategies for solving problems with hybrid algorithms. There is a problemQandw basicalgorithms for solvingQ. For some ??w, we have a computer with ? disjoint memory areas, each of which can be used to run a basic algorithm and store its intermediate results. In the worst case, only one basic algorithm can solveQin finite time, and all of the other basic algorithms run forever without solvingQ. To solveQwith ahybridalgorithm constructed from the basic algorithms, we run a basic algorithm for some time, then switch to another, and continue this process untilQis solved. The goal is to solveQin the least amount of time. Usingcompetitive ratiosto measure the efficiency of a hybrid algorithm, we construct an optimal deterministic hybrid algorithm and an efficient randomized hybrid algorithm. This resolves an open question on searching with multiple robots posed by Baeza-Yates, Culberson, and Rawlins. We also prove that our randomized algorithm is optimal for ?=1, settling a conjecture of Kao, Reif, and Tate.", "The linear search problem has been discussed previously by one of the present authors. In this paper, the probability distribution of the point sought in the real line is not known to the searcher. Since there is noa priori choice of distribution which recommends itself above all others, we treat the situation as a game and obtain minimax type solutions. Different minimaxima apply depending on the factors which one wishes to minimize (resp. maximize). 
Certain criteria are developed which help the reader judge whether the results obtained can be considered “good advice” in the solution of real problems analogous to this one.", "We consider the problem of searching for an object on a line at an unknown distance OPT from the original position of the searcher, in the presence of a cost of d for each time the searcher changes direction. This is a generalization of the well-studied linear-search problem. We describe a strategy that is guaranteed to find the object at a cost of at most 9·OPT + 2d, which has the optimal competitive ratio 9 with respect to OPT plus the minimum corresponding additive term. Our argument for upper and lower bound uses an infinite linear program, which we solve by experimental solution of an infinite series of approximating finite linear programs, estimating the limits, and solving the resulting recurrences for an explicit proof of optimality. We feel that this technique is interesting in its own right and should help solve other searching problems. In particular, we consider the star search or cowpath problem with turn cost, where the hidden object is placed on one of m rays emanating from the original position of the searcher. For this problem we give a tight bound of (1 + 2m^m/(m - 1)^(m-1))·OPT + m((m/(m - 1))^(m-1) - 1)·d. We also discuss tradeoffs between the corresponding coefficients and we consider randomized strategies on the line.", "We extend the classic on-line search problem known as the cow-path problem to the case in which goal locations are selected according to one of a set of possible known probability distributions. We present a polynomial-time linear-programming algorithm for this problem.", "In this paper we investigate parallel searches on m concurrent rays for a point target t located at some unknown distance along one of the rays. A group of p agents or robots moving at unit speed searches for t. The search succeeds when an agent reaches the point t. 
Given a strategy S the competitive ratio is the ratio of the time needed by the agents to find t using S and the time needed if the location of t had been known in advance. We provide a strategy with competitive ratio of 1 + 2(m/p - 1)(m/(m - p))^(m/p) and prove that this is optimal. This problem has applications in multiple heuristic searches in AI as well as robot motion planning. The case p = 1 is known in the literature as the cow path problem.", "We consider several extensions of the linear search problem, treating them from the point of view of game theory. Previous results established by the author are used to find minimax solutions for these problems.", "Searching for a goal is a central and extensively studied problem in computer science. In classical searching problems, the cost of a search function is simply the number of queries made to an oracle that knows the position of the goal. In many robotics problems, as well as in problems from other areas, we want to charge a cost proportional to the distance between queries (e.g., the time required to travel between two query points). With this cost function in mind, the abstract problem known as the @math -lane cow-path problem was designed. There are known optimal deterministic algorithms for the cow-path problem, and we give the first randomized algorithms in this paper. We show that our algorithm is optimal for two paths ( @math ), and give evidence that it is indeed optimal for larger values of @math . The randomized algorithms give expected performance that is almost twice as good as is possible with a deterministic algorithm.
We use the competitive ratio as a measure of the performance of a search strategy, that is, the worst case ratio of the total distance DR traveled by the robot to find g to the distance D from the origin to g. We present a new proof of a tight lower bound of the competitive ratio for randomized strategies to search on m rays. Our proof allows us to obtain a lower bound on the optimal competitive ratio for a fixed m even if the distance of the goal to the origin is bounded from above. Finally, we show that the optimal competitive ratio converges to 1 + 2(e^α - 1)/α^2 · m ∼ 1 + 2·1.544·m for large m, where α minimizes the function (e^x - 1)/x^2.", "In this paper we initiate a new area of study dealing with the best way to search a possibly unbounded region for an object. The model for our search algorithms is that we must pay costs proportional to the distance of the next probe position relative to our current position. This model is meant to give a realistic cost measure for a robot moving in the plane. We also examine the effect of decreasing the amount of a priori information given to search problems. Problems of this type are very simple analogues of non-trivial problems on searching an unbounded region, processing digitized images, and robot navigation. We show that for some simple search problems, knowing the general direction of the goal is much more informative than knowing the distance to the goal.", "A man in an automobile searches for another man who is located at some point of a certain road. He starts at a given point and knows in advance the probability that the second man is at any given point of the road. Since the man being sought might be in either direction from the starting point, the searcher will, in general, have to turn around many times before finding his target. How does he search so as to minimize the expected distance travelled? When can this minimum expectation actually be achieved? This paper answers the second of these questions.", "" ] }
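The classical doubling strategy behind these competitive-ratio results can be simulated directly. The sketch below is a simplified two-ray ("cow-path") version; the function name and cost accounting are illustrative assumptions:

```python
def doubling_search_cost(target_ray, d):
    """Cost of the classic doubling strategy on two rays (the
    'cow-path problem'): alternate between the rays with depths
    1, 2, 4, ..., paying twice each unsuccessful excursion, plus the
    final distance d once the target's ray is probed deep enough."""
    cost, i = 0.0, 0
    while True:
        ray, depth = i % 2, 2.0 ** i
        if ray == target_ray and depth >= d:
            return cost + d
        cost += 2 * depth
        i += 1
```

For a target at distance d ≥ 1 this strategy never pays more than 9d, matching the optimal deterministic competitive ratio 9 for linear search mentioned above.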
1602.06258
2286805788
We study the problem of searching for a hidden target in an environment that is modeled by an edge-weighted graph. A sequence of edges is chosen starting from a given root vertex such that each edge is adjacent to a previously chosen edge. This search paradigm, known as expanding search, was recently introduced by Alpern and Lidbetter [2013] for modeling problems such as searching for coal or minesweeping in which the cost of re-exploration is negligible. It can also be used to model a team of searchers successively splitting up in the search for a hidden adversary or explosive device, for example. We define the search ratio of an expanding search as the maximum over all vertices of the ratio of the time taken to reach the vertex and the shortest-path cost to it from the root. This can be interpreted as a measure of the multiplicative regret incurred in searching, and similar objectives have previously been studied in the context of conventional (pathwise) search. In this paper we address algorithmic and computational issues of minimizing the search ratio over all expanding searches, for a variety of search environments, including general graphs, trees and star-like graphs. Our main results focus on the problem of finding the randomized expanding search with minimum expected search ratio, which is equivalent to solving a zero-sum game between a Searcher and a Hider. We solve these problems for certain classes of graphs, and obtain constant-factor approximations for others.
It must be emphasized that star search has applications that are not necessarily confined to the concept of locating a target (which explains its significance and popularity). Indeed star search offers an abstraction that applies naturally in settings in which we seek an intelligent allocation of resources to tasks. More precisely, it captures decision-making aspects when the objective is to successfully complete at least one task, without knowing in advance the completion time of each task. Some concrete applications include: drilling for oil in a number of different locations in @cite_1 ; the design of efficient interruptible algorithms, i.e., algorithms that return acceptable solutions even if interrupted during their execution in @cite_37 and @cite_19 ; and database query optimization (in particular, pipelined filter ordering in @cite_10 ). We discuss the latter work in more detail in .
{ "cite_N": [ "@cite_19", "@cite_37", "@cite_1", "@cite_10" ], "mid": [ "2949341268", "1986214908", "1567718955", "" ], "abstract": [ "This paper addresses two classes of different, yet interrelated optimization problems. The first class of problems involves a robot that must locate a hidden target in an environment that consists of a set of concurrent rays. The second class pertains to the design of interruptible algorithms by means of a schedule of contract algorithms. We study several variants of these families of problems, such as searching and scheduling with probabilistic considerations, redundancy and fault-tolerance issues, randomized strategies, and trade-offs between performance and preemptions. For many of these problems we present the first known results that apply to multi-ray and multi-problem domains. Our objective is to demonstrate that several well-motivated settings can be addressed using the same underlying approach.", "Anytime algorithms offer a tradeoff between computation time and the quality of the result returned. They can be divided into two classes: contract algorithms, for which the total run time must be specified in advance, and interruptible algorithms, which can be queried at any time for a solution. An interruptible algorithm can be constructed from a contract algorithm by repeatedly activating the contract algorithm with increasing run times. The acceleration ratio of a run-time schedule is a worst-case measure of how inefficient the constructed interruptible algorithm is compared to the contract algorithm. The smallest acceleration ratio achievable on a single processor is known. Using multiple processors, smaller acceleration ratios are possible. In this paper, we provide a schedule for m processors and prove that it is optimal for all m. 
Our results provide general guidelines for the use of parallel processors in the design of real-time systems.", "Given n potential oil locations, where each has oil at a certain depth, we seek good trade-offs between the number of oil sources found and the total amount of drilling performed. The cost of exploring a location is proportional to the depth to which it is drilled. The algorithm has no clue about the depths of the oil sources at the different locations or even if there are any. Abstraction of the oil searching problem applies naturally to several life contexts. Consider a researcher who wants to decide which research problems to invest time into. A natural dilemma is whether to invest all the time into a few problems, or share time across many problems. When you have spent a lot of time on one problem with no success, should you continue or move to another problem?", "" ] }
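The search ratio defined in this record's abstract can be computed directly for a given expanding search. The sketch below assumes edges are listed as `(explored_endpoint, new_endpoint, weight)` triples, an illustrative input convention:

```python
import heapq

def search_ratio(order, edges, root):
    """Search ratio of an expanding search: `order` lists the edges in
    the order they are searched, each adjacent to the explored part;
    the ratio compares each vertex's discovery time with its
    shortest-path distance from `root` (computed via Dijkstra)."""
    # discovery time of each vertex under the expanding search
    t, reached = 0.0, {root: 0.0}
    for u, v, w in order:
        t += w
        if v not in reached:
            reached[v] = t
    # shortest-path distances from the root
    adj = {}
    for u, v, w in edges:
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    dist, pq = {root: 0.0}, [(0.0, root)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return max(reached[v] / dist[v] for v in dist if v != root)
```

On a star with two leaves at distances 1 and 2, searching the near leaf first gives ratio 1.5 while searching the far leaf first gives ratio 3, illustrating how the ordering alone changes the multiplicative regret.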
1602.06373
2277172025
Compressive sensing (CS) is a data acquisition technique that measures sparse or compressible signals at a sampling rate lower than their Nyquist rate. Results show that sparse signals can be reconstructed using greedy algorithms, often requiring prior knowledge such as the signal sparsity or the noise level. As a substitute for prior knowledge, cross validation (CV), a statistical method that examines whether a model overfits its data, has been proposed to determine the stopping condition of greedy algorithms. This paper first analyzes cross validation in a general compressive sensing framework and develops general cross validation techniques that can be used to understand CV-based sparse recovery algorithms. Furthermore, we provide theoretical analysis for OMP-CV, a cross validation modification of orthogonal matching pursuit, which has very good sparse recovery performance. Finally, numerical experiments are given to validate our theoretical results and investigate the behaviors of OMP-CV.
The idea of applying cross validation in compressive sensing was first proposed by Boufounos, Duarte, and Baraniuk in @cite_6 , where the general framework of the CS-CV modification was established. In a CS-CV modified algorithm, both the sensing matrix and the measurement vector are split into a reconstruction part and a cross validation part. While the former is used iteratively to construct the support set, the latter is used to compute the CV residual and determine the stopping condition. As soon as the CV residual falls below a given constant, the corresponding recovered signal is output as the reconstructed signal. This work was the first to introduce cross validation into the field of compressive sensing.
{ "cite_N": [ "@cite_6" ], "mid": [ "2024417168" ], "abstract": [ "Compressive sensing is a new data acquisition technique that aims to measure sparse and compressible signals at close to their intrinsic information rate rather than their Nyquist rate. Recent results in compressive sensing show that a sparse or compressible signal can be reconstructed from very few incoherent measurements. Although the sampling and reconstruction process is robust to measurement noise, all current reconstruction methods assume some knowledge of the noise power or the acquired signal to noise ratio. This knowledge is necessary to set algorithmic parameters and stopping conditions. If these parameters are set incorrectly, then the reconstruction algorithms either do not fully reconstruct the acquired signal (underfitting) or try to explain a significant portion of the noise by distorting the reconstructed signal (overfitting). This paper explores this behavior and examines the use of cross validation to determine the stopping conditions for the optimization algorithms. We demonstrate that by designating a small set of measurements as a validation set it is possible to optimize these algorithms and reduce the reconstruction error. Furthermore we explore the trade-off between using the additional measurements for cross validation instead of reconstruction." ] }
1602.06373
2277172025
Compressive sensing (CS) is a data acquisition technique that measures sparse or compressible signals at a sampling rate lower than their Nyquist rate. Results show that sparse signals can be reconstructed using greedy algorithms, often requiring prior knowledge such as the signal sparsity or the noise level. As a substitute for prior knowledge, cross validation (CV), a statistical method that examines whether a model overfits its data, has been proposed to determine the stopping condition of greedy algorithms. This paper first analyzes cross validation in a general compressive sensing framework and develops general cross validation techniques that can be used to understand CV-based sparse recovery algorithms. Furthermore, we provide theoretical analysis for OMP-CV, a cross validation modification of orthogonal matching pursuit, which has very good sparse recovery performance. Finally, numerical experiments are given to validate our theoretical results and investigate the behaviors of OMP-CV.
Another important CV-related work in the CS literature is due to Ward @cite_10 , who used the Johnson-Lindenstrauss (JL) Lemma to evaluate the recovery status. In this work, the reconstruction matrix is used to recover the sparse signal, while the cross validation matrix is used to estimate the reconstruction error. The work also studies how the number of CV measurements depends on the desired estimation accuracy and the confidence level of the prediction. It offers a tractable way to select parameters in sparse recovery algorithms by using CV to estimate the recovery error.
{ "cite_N": [ "@cite_10" ], "mid": [ "2032618720" ], "abstract": [ "Compressed sensing (CS) decoding algorithms can efficiently recover an N -dimensional real-valued vector x to within a factor of its best k-term approximation by taking m = O(klogN k) measurements y = Phix. If the sparsity or approximate sparsity level of x were known, then this theoretical guarantee would imply quality assurance of the resulting CS estimate. However, because the underlying sparsity of the signal x is unknown, the quality of a CS estimate x using m measurements is not assured. It is nevertheless shown in this paper that sharp bounds on the error ||x - x ||lN2 can be achieved with almost no effort. More precisely, suppose that a maximum number of measurements m is preimposed. One can reserve 10 log p of these m measurements and compute a sequence of possible estimates ( xj)j=1p to x from the m -10logp remaining measurements; the errors ||x - xj ||lN2 for j = 1, ..., p can then be bounded with high probability. As a consequence, numerical upper and lower bounds on the error between x and the best k-term approximation to x can be estimated for p values of k with almost no cost. This observation has applications outside CS as well." ] }
1602.06291
2275625487
Documents exhibit sequential structure at multiple levels of abstraction (e.g., sentences, paragraphs, sections). These abstractions constitute a natural hierarchy for representing the context in which to infer the meaning of words and larger fragments of text. In this paper, we present CLSTM (Contextual LSTM), an extension of the recurrent neural network LSTM (Long-Short Term Memory) model, where we incorporate contextual features (e.g., topics) into the model. We evaluate CLSTM on three specific NLP tasks: word prediction, next sentence selection, and sentence topic prediction. Results from experiments run on two corpora, English documents in Wikipedia and a subset of articles from a recent snapshot of English Google News, indicate that using both words and topics as features improves performance of the CLSTM models over baseline LSTM models for these tasks. For example, on the next sentence selection task, we get relative accuracy improvements of 21% for the Wikipedia dataset and 18% for the Google News dataset. This clearly demonstrates the significant benefit of using context appropriately in natural language (NL) tasks. This has implications for a wide variety of NL applications like question answering, sentence completion, paraphrase generation, and next utterance prediction in dialog systems.
There are various approaches that try to fit a generative model to full documents. These include models that capture content structure using Hidden Markov Models (HMMs) @cite_15 , or semantic parsing techniques that identify the underlying meanings in text segments @cite_41 . Hierarchical models have been used successfully in many applications, including hierarchical Bayesian models @cite_38 @cite_12 , hierarchical probabilistic models @cite_16 , hierarchical HMMs @cite_20 , and hierarchical CRFs @cite_30 .
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_41", "@cite_15", "@cite_16", "@cite_20", "@cite_12" ], "mid": [ "2097333614", "1981634377", "2135754437", "2118370253", "2108313375", "1636244751", "2125663122" ], "abstract": [ "We propose an approach to the problem of detecting and segmenting generic object classes that combines three \"off the shelf\" components in a novel way. The components are a generic image segmenter that returns a set of \"super pixels\" at different scales; a generic classifier that can determine if an image region (such as one or more super pixels) contains (part of) the foreground object or not; and a generic belief propagation (BP) procedure for tree-structured graphical models. Our system combines the regions together into a hierarchical, tree-structured conditional random field, applies the classifier to each node (region), and fuses all the information together using belief propagation. Since our classifiers only rely on color and texture, they can handle deformable (non-rigid) objects such as animals, even under severe occlusion and rotation. We demonstrate good results for detecting and segmenting cows, cats and cars on the very challenging Pascal VOC dataset.", "We address the technical challenges involved in combining key features from several theories of the visual cortex in a single coherent model. The resulting model is a hierarchical Bayesian network factored into modular component networks embedding variable-order Markov models. Each component network has an associated receptive field corresponding to components residing in the level directly below it in the hierarchy. The variable-order Markov models account for features that are invariant to naturally occurring transformations in their inputs. These invariant features give rise to increasingly stable, persistent representations as we ascend the hierarchy. 
The receptive fields of proximate components on the same level overlap to restore selectivity that might otherwise be lost to invariance.", "In this paper, we present an algorithm for learning a generative model of natural language sentences together with their formal meaning representations with hierarchical structures. The model is applied to the task of mapping sentences to hierarchical representations of their underlying meaning. We introduce dynamic programming techniques for efficient training and decoding. In experiments, we demonstrate that the model, when coupled with a discriminative reranking technique, achieves state-of-the-art performance when tested on two publicly available corpora. The generative model degrades robustly when presented with instances that are different from those seen in training. This allows a notable improvement in recall compared to previous models.", "We consider the problem of modeling the content structure of texts within a specic domain, in terms of the topics the texts address and the order in which these topics appear. We rst present an effective knowledge-lean method for learning content models from unannotated documents, utilizing a novel adaptation of algorithms for Hidden Markov Models. We then apply our method to two complementary tasks: information ordering and extractive summarization. Our experiments show that incorporating content models in these applications yields substantial improvement over previously-proposed methods.", "In this paper we present a learning based method for vessel segmentation in angiographic videos. Vessel segmentation is an important task in medical imaging and has been investigated extensively in the past. Traditional approaches often require pre-processing steps, standard conditions or manually set seed points. Our method is automatic, fast and robust towards noise often seen in low radiation X-ray images. Furthermore, it can be easily trained and used for any kind of tubular structure. 
We formulate the segmentation task as a hierarchical learning problem over 3 levels: border points, cross-segments and vessel pieces, corresponding to the vessel's position, width and length. Following the marginal space learning paradigm the detection on each level is performed by a learned classifier. We use probabilistic boosting trees with Haar and steerable features. First results of segmenting the vessel which surrounds a guide wire in 200 frames are presented and future additions are discussed.", "We introduce, analyze and demonstrate a recursive hierarchical generalization of the widely used hidden Markov models, which we name Hierarchical Hidden Markov Models (HHMM). Our model is motivated by the complex multi-scale structure which appears in many natural sequences, particularly in language, handwriting and speech. We seek a systematic unsupervised approach to the modeling of such structures. By extending the standard Baum-Welch (forward-backward) algorithm, we derive an efficient procedure for estimating the model parameters from unlabeled data. We then use the trained model for automatic hierarchical parsing of observation sequences. We describe two applications of our model and its parameter estimation procedure. In the first application we show how to construct hierarchical models of natural English text. In these models different levels of the hierarchy correspond to structures on different length scales in the text. In the second application we demonstrate how HHMMs can be used to automatically identify repeated strokes that represent combination of letters in cursive handwriting.", "Traditional views of visual processing suggest that early visual neurons in areas V1 and V2 are static spatiotemporal filters that extract local features from a visual scene. The extracted information is then channeled through a feedforward chain of modules in successively higher visual areas for further analysis. 
Recent electrophysiological recordings from early visual neurons in awake behaving monkeys reveal that there are many levels of complexity in the information processing of the early visual cortex, as seen in the long-latency responses of its neurons. These new findings suggest that activity in the early visual cortex is tightly coupled and highly interactive with the rest of the visual system. They lead us to propose a new theoretical setting based on the mathematical framework of hierarchical Bayesian inference for reasoning about the visual system. In this framework, the recurrent feedforward feedback loops in the cortex serve to integrate top-down contextual priors and bottom-up observations so as to implement concurrent probabilistic inference along the visual hierarchy. We suggest that the algorithms of particle filtering and Bayesian-belief propagation might model these interactive cortical computations. We review some recent neurophysiological evidences that support the plausibility of these ideas." ] }
1602.06291
2275625487
Documents exhibit sequential structure at multiple levels of abstraction (e.g., sentences, paragraphs, sections). These abstractions constitute a natural hierarchy for representing the context in which to infer the meaning of words and larger fragments of text. In this paper, we present CLSTM (Contextual LSTM), an extension of the recurrent neural network LSTM (Long-Short Term Memory) model, where we incorporate contextual features (e.g., topics) into the model. We evaluate CLSTM on three specific NLP tasks: word prediction, next sentence selection, and sentence topic prediction. Results from experiments run on two corpora, English documents in Wikipedia and a subset of articles from a recent snapshot of English Google News, indicate that using both words and topics as features improves performance of the CLSTM models over baseline LSTM models for these tasks. For example, on the next sentence selection task, we get relative accuracy improvements of 21% for the Wikipedia dataset and 18% for the Google News dataset. This clearly demonstrates the significant benefit of using context appropriately in natural language (NL) tasks. This has implications for a wide variety of NL applications like question answering, sentence completion, paraphrase generation, and next utterance prediction in dialog systems.
As mentioned earlier, RNN-based language models (RNN-LMs) were proposed by @cite_28 , and the variant using LSTMs was introduced by @cite_37 -- in this paper, we work with LSTM-based LMs. @cite_24 proposed a conditional RNN-LM for adding context -- we extend this approach of using context in RNN-LMs to LSTMs.
{ "cite_N": [ "@cite_28", "@cite_37", "@cite_24" ], "mid": [ "179875071", "2402268235", "1999965501" ], "abstract": [ "A new recurrent neural network based language model (RNN LM) with applications to speech recognition is presented. Results indicate that it is possible to obtain around 50 reduction of perplexity by using mixture of several RNN LMs, compared to a state of the art backoff language model. Speech recognition experiments show around 18 reduction of word error rate on the Wall Street Journal task when comparing models trained on the same amount of data, and around 5 on the much harder NIST RT05 task, even when the backoff model is trained on much more data than the RNN LM. We provide ample empirical evidence to suggest that connectionist language models are superior to standard n-gram techniques, except their high computational (training) complexity. Index Terms: language modeling, recurrent neural networks, speech recognition", "Neural networks have become increasingly popular for the task of language modeling. Whereas feed-forward networks only exploit a fixed context length to predict the next word of a sequence, conceptually, standard recurrent neural networks can take into account all of the predecessor words. On the other hand, it is well known that recurrent networks are difficult to train and therefore are unlikely to show the full potential of recurrent models. These problems are addressed by a the Long Short-Term Memory neural network architecture. In this work, we analyze this type of network on an English and a large French language modeling task. Experiments show improvements of about 8 relative in perplexity over standard recurrent neural network LMs. In addition, we gain considerable improvements in WER on top of a state-of-the-art speech recognition system.", "Recurrent neural network language models (RNNLMs) have recently demonstrated state-of-the-art performance across a variety of tasks. 
In this paper, we improve their performance by providing a contextual real-valued input vector in association with each word. This vector is used to convey contextual information about the sentence being modeled. By performing Latent Dirichlet Allocation using a block of preceding text, we achieve a topic-conditioned RNNLM. This approach has the key advantage of avoiding the data fragmentation associated with building multiple topic models on different data subsets. We report perplexity results on the Penn Treebank data, where we achieve a new state-of-the-art. We further apply the model to the Wall Street Journal speech recognition task, where we observe improvements in word-error-rate." ] }
1602.06291
2275625487
Documents exhibit sequential structure at multiple levels of abstraction (e.g., sentences, paragraphs, sections). These abstractions constitute a natural hierarchy for representing the context in which to infer the meaning of words and larger fragments of text. In this paper, we present CLSTM (Contextual LSTM), an extension of the recurrent neural network LSTM (Long-Short Term Memory) model, where we incorporate contextual features (e.g., topics) into the model. We evaluate CLSTM on three specific NLP tasks: word prediction, next sentence selection, and sentence topic prediction. Results from experiments run on two corpora, English documents in Wikipedia and a subset of articles from a recent snapshot of English Google News, indicate that using both words and topics as features improves performance of the CLSTM models over baseline LSTM models for these tasks. For example, on the next sentence selection task, we get relative accuracy improvements of 21% for the Wikipedia dataset and 18% for the Google News dataset. This clearly demonstrates the significant benefit of using context appropriately in natural language (NL) tasks. This has implications for a wide variety of NL applications like question answering, sentence completion, paraphrase generation, and next utterance prediction in dialog systems.
Recent advances in deep learning make it possible to model hierarchical structure using deep belief networks @cite_39 @cite_25 @cite_36 @cite_44 , especially within a hierarchical recurrent neural network (RNN) framework. In Clockwork RNNs @cite_32 , the hidden layer is partitioned into separate modules, each processing inputs at its own temporal granularity. Connectionist Temporal Classification (CTC) @cite_34 does not explicitly segment the input in the hidden layer -- instead, it uses a forward-backward algorithm to sum over all possible segments and determines the normalized probability of the target sequence given the input sequence. Other approaches include a hybrid NN-HMM model @cite_48 , where the temporal dependency is handled by an HMM and the dependency between adjacent frames is handled by a neural net (NN). In this model, each node of the convolutional hidden layer corresponds to a higher-level feature.
{ "cite_N": [ "@cite_36", "@cite_48", "@cite_32", "@cite_39", "@cite_44", "@cite_34", "@cite_25" ], "mid": [ "2124151298", "2155273149", "", "2136189984", "2963576560", "2950689855", "2111369166" ], "abstract": [ "Deep unsupervised learning in stochastic recurrent neural networks with many layers of hidden units is a recent breakthrough in neural computation research. These networks build a hierarchy of progressively more complex distributed representations of the sensory data by fitting a hierarchical generative model. In this article we discuss the theoretical foundations of this approach and we review key issues related to training, testing and analysis of deep networks for modeling language and cognitive processing. The classic letter and word perception problem of McClelland and Rumelhart (1981) is used as a tutorial example to illustrate how structured and abstract representations may emerge from deep generative learning. We argue that the focus on deep architectures and generative (rather than discriminative) learning represents a crucial step forward for the connectionist modeling enterprise, because it offers a more plausible model of cortical learning as well as way to bridge the gap between emergentist connectionist models and structured Bayesian models of cognition.", "Convolutional Neural Networks (CNN) have showed success in achieving translation invariance for many image processing tasks. The success is largely attributed to the use of local filtering and max-pooling in the CNN architecture. In this paper, we propose to apply CNN to speech recognition within the framework of hybrid NN-HMM model. We propose to use local filtering and max-pooling in frequency domain to normalize speaker variance to achieve higher multi-speaker speech recognition performance. In our method, a pair of local filtering layer and max-pooling layer is added at the lowest end of neural network (NN) to normalize spectral variations of speech signals. 
In our experiments, the proposed CNN architecture is evaluated in a speaker independent speech recognition task using the standard TIMIT data sets. Experimental results show that the proposed CNN method can achieve over 10 relative error reduction in the core TIMIT test sets when comparing with a regular NN using the same number of hidden layers and weights. Our results also show that the best result of the proposed CNN model is better than previously published results on the same TIMIT test sets that use a pre-trained deep NN model.", "", "Latent semantic models, such as LSA, intend to map a query to its relevant documents at the semantic level where keyword-based matching often fails. In this study we strive to develop a series of new latent semantic models with a deep structure that project queries and documents into a common low-dimensional space where the relevance of a document given a query is readily computed as the distance between them. The proposed deep structured semantic models are discriminatively trained by maximizing the conditional likelihood of the clicked documents given a query using the clickthrough data. To make our models applicable to large-scale Web search applications, we also use a technique called word hashing, which is shown to effectively scale up our semantic models to handle large vocabularies which are common in such tasks. The new models are evaluated on a Web document ranking task using a real-world data set. Results show that our best model significantly outperforms other latent semantic models, which were considered state-of-the-art in the performance prior to the work presented in this paper.", "We present an approach that exploits hierarchical Recurrent Neural Networks (RNNs) to tackle the video captioning problem, i.e., generating one or multiple sentences to describe a realistic video. Our hierarchical framework contains a sentence generator and a paragraph generator. 
The sentence generator produces one simple short sentence that describes a specific short video interval. It exploits both temporal-and spatial-attention mechanisms to selectively focus on visual elements during generation. The paragraph generator captures the inter-sentence dependency by taking as input the sentential embedding produced by the sentence generator, combining it with the paragraph history, and outputting the new initial state for the sentence generator. We evaluate our approach on two large-scale benchmark datasets: YouTubeClips and TACoS-MultiLevel. The experiments demonstrate that our approach significantly outperforms the current state-of-the-art methods with BLEU@4 scores 0.499 and 0.305 respectively.", "Recurrent neural networks (RNNs) are a powerful model for sequential data. End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labelling problems where the input-output alignment is unknown. The combination of these methods with the Long Short-term Memory RNN architecture has proved particularly fruitful, delivering state-of-the-art results in cursive handwriting recognition. However RNN performance in speech recognition has so far been disappointing, with better results returned by deep feedforward networks. This paper investigates , which combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long range context that empowers RNNs. When trained end-to-end with suitable regularisation, we find that deep Long Short-term Memory RNNs achieve a test set error of 17.7 on the TIMIT phoneme recognition benchmark, which to our knowledge is the best recorded score.", "The chain-structured long short-term memory (LSTM) has showed to be effective in a wide range of problems such as speech recognition and machine translation. 
In this paper, we propose to extend it to tree structures, in which a memory cell can reflect the history memories of multiple child cells or multiple descendant cells in a recursive process. We call the model S-LSTM, which provides a principled way of considering long-distance interaction over hierarchies, e.g., language or image parse structures. We leverage the models for semantic composition to understand the meaning of text, a fundamental problem in natural language understanding, and show that it outperforms a state-of-the-art recursive model by replacing its composition layers with the S-LSTM memory blocks. We also show that utilizing the given structures is helpful in achieving a performance better than that without considering the structures." ] }
1602.06291
2275625487
Documents exhibit sequential structure at multiple levels of abstraction (e.g., sentences, paragraphs, sections). These abstractions constitute a natural hierarchy for representing the context in which to infer the meaning of words and larger fragments of text. In this paper, we present CLSTM (Contextual LSTM), an extension of the recurrent neural network LSTM (Long-Short Term Memory) model, where we incorporate contextual features (e.g., topics) into the model. We evaluate CLSTM on three specific NLP tasks: word prediction, next sentence selection, and sentence topic prediction. Results from experiments run on two corpora, English documents in Wikipedia and a subset of articles from a recent snapshot of English Google News, indicate that using both words and topics as features improves performance of the CLSTM models over baseline LSTM models for these tasks. For example, on the next sentence selection task, we get relative accuracy improvements of 21% for the Wikipedia dataset and 18% for the Google News dataset. This clearly demonstrates the significant benefit of using context appropriately in natural language (NL) tasks. This has implications for a wide variety of NL applications like question answering, sentence completion, paraphrase generation, and next utterance prediction in dialog systems.
Some NN models have also used context for modeling text. Paragraph vectors @cite_3 @cite_8 propose an unsupervised algorithm that learns a latent representation from words sampled from the context of a word, and uses this learned context representation as an auxiliary input to an underlying skip-gram or Continuous Bag-of-Words (CBOW) model. Another model that uses the context of a word infers the Latent Dirichlet Allocation (LDA) topics of the text preceding a word and uses them to modify an RNN model predicting the word @cite_24 .
{ "cite_N": [ "@cite_24", "@cite_3", "@cite_8" ], "mid": [ "1999965501", "941230081", "2949547296" ], "abstract": [ "Recurrent neural network language models (RNNLMs) have recently demonstrated state-of-the-art performance across a variety of tasks. In this paper, we improve their performance by providing a contextual real-valued input vector in association with each word. This vector is used to convey contextual information about the sentence being modeled. By performing Latent Dirichlet Allocation using a block of preceding text, we achieve a topic-conditioned RNNLM. This approach has the key advantage of avoiding the data fragmentation associated with building multiple topic models on different data subsets. We report perplexity results on the Penn Treebank data, where we achieve a new state-of-the-art. We further apply the model to the Wall Street Journal speech recognition task, where we observe improvements in word-error-rate.", "Paragraph Vectors has been recently proposed as an unsupervised method for learning distributed representations for pieces of texts. In their work, the authors showed that the method can learn an embedding of movie review texts which can be leveraged for sentiment analysis. That proof of concept, while encouraging, was rather narrow. Here we consider tasks other than sentiment analysis, provide a more thorough comparison of Paragraph Vectors to other document modelling algorithms such as Latent Dirichlet Allocation, and evaluate performance of the method as we vary the dimensionality of the learned representation. We benchmarked the models on two document similarity data sets, one from Wikipedia, one from arXiv. We observe that the Paragraph Vector method performs significantly better than other methods, and propose a simple improvement to enhance embedding quality. 
Somewhat surprisingly, we also show that much like word embeddings, vector operations on Paragraph Vectors can perform useful semantic results.", "Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite their popularity, bag-of-words features have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, \"powerful,\" \"strong\" and \"Paris\" are equally distant. In this paper, we propose Paragraph Vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. Our algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that Paragraph Vectors outperform bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks." ] }
1602.06291
2275625487
Documents exhibit sequential structure at multiple levels of abstraction (e.g., sentences, paragraphs, sections). These abstractions constitute a natural hierarchy for representing the context in which to infer the meaning of words and larger fragments of text. In this paper, we present CLSTM (Contextual LSTM), an extension of the recurrent neural network LSTM (Long-Short Term Memory) model, where we incorporate contextual features (e.g., topics) into the model. We evaluate CLSTM on three specific NLP tasks: word prediction, next sentence selection, and sentence topic prediction. Results from experiments run on two corpora, English documents in Wikipedia and a subset of articles from a recent snapshot of English Google News, indicate that using both words and topics as features improves performance of the CLSTM models over baseline LSTM models for these tasks. For example, on the next sentence selection task, we get relative accuracy improvements of 21% for the Wikipedia dataset and 18% for the Google News dataset. This clearly demonstrates the significant benefit of using context appropriately in natural language (NL) tasks. This has implications for a wide variety of NL applications like question answering, sentence completion, paraphrase generation, and next utterance prediction in dialog systems.
Tree-structured LSTMs @cite_19 @cite_25 extend chain-structured LSTMs to tree structures and propose a principled approach for considering long-distance interactions over hierarchies, e.g., language or image parse structures. Convolutional networks have been used for multi-level text understanding, starting from character-level inputs all the way to abstract text concepts @cite_17 . Skip-thought vectors have also been used to train an encoder-decoder model that tries to reconstruct the surrounding sentences of an encoded passage @cite_11 .
{ "cite_N": [ "@cite_19", "@cite_11", "@cite_25", "@cite_17" ], "mid": [ "2104246439", "", "2111369166", "1775434803" ], "abstract": [ "Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).", "", "The chain-structured long short-term memory (LSTM) has been shown to be effective in a wide range of problems such as speech recognition and machine translation. In this paper, we propose to extend it to tree structures, in which a memory cell can reflect the history memories of multiple child cells or multiple descendant cells in a recursive process. We call the model S-LSTM, which provides a principled way of considering long-distance interaction over hierarchies, e.g., language or image parse structures. We leverage the models for semantic composition to understand the meaning of text, a fundamental problem in natural language understanding, and show that it outperforms a state-of-the-art recursive model by replacing its composition layers with the S-LSTM memory blocks. 
We also show that utilizing the given structures is helpful in achieving a performance better than that without considering the structures.", "This article demonstrates that we can apply deep learning to text understanding from character-level inputs all the way up to abstract text concepts, using temporal convolutional networks (ConvNets). We apply ConvNets to various large-scale datasets, including ontology classification, sentiment analysis, and text categorization. We show that temporal ConvNets can achieve astonishing performance without the knowledge of words, phrases, sentences and any other syntactic or semantic structures with regard to a human language. Evidence shows that our models can work for both English and Chinese." ] }
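The child-sum Tree-LSTM cell cited in this record combines an arbitrary number of child states, with one forget gate per child. A scalar toy version in pure Python is sketched below; the parameter names and scalar gates are illustrative assumptions, not the published vector formulation.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def tree_lstm_cell(x, children, W):
    """One child-sum Tree-LSTM step (scalar toy sketch).

    x        -- input at this node (float)
    children -- list of (h, c) pairs from the child nodes
    W        -- dict of scalar parameters (hypothetical names)
    Returns (h, c) for this node.
    """
    h_sum = sum(h for h, _ in children)          # sum of child hidden states
    i = sigmoid(W["wi"] * x + W["ui"] * h_sum + W["bi"])    # input gate
    o = sigmoid(W["wo"] * x + W["uo"] * h_sum + W["bo"])    # output gate
    u = math.tanh(W["wu"] * x + W["uu"] * h_sum + W["bu"])  # candidate
    c = i * u
    # one forget gate per child, conditioned on that child's hidden state
    for h_k, c_k in children:
        f_k = sigmoid(W["wf"] * x + W["uf"] * h_k + W["bf"])
        c += f_k * c_k
    h = o * math.tanh(c)
    return h, c
```

Applying the same cell bottom-up over a parse tree gives one state per node, which is the generalization over the linear chain described in the abstracts.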
1602.06291
2275625487
Documents exhibit sequential structure at multiple levels of abstraction (e.g., sentences, paragraphs, sections). These abstractions constitute a natural hierarchy for representing the context in which to infer the meaning of words and larger fragments of text. In this paper, we present CLSTM (Contextual LSTM), an extension of the recurrent neural network LSTM (Long Short-Term Memory) model, where we incorporate contextual features (e.g., topics) into the model. We evaluate CLSTM on three specific NLP tasks: word prediction, next sentence selection, and sentence topic prediction. Results from experiments run on two corpora, English documents in Wikipedia and a subset of articles from a recent snapshot of English Google News, indicate that using both words and topics as features improves performance of the CLSTM models over baseline LSTM models for these tasks. For example, on the next sentence selection task, we get relative accuracy improvements of 21% for the Wikipedia dataset and 18% for the Google News dataset. This clearly demonstrates the significant benefit of using context appropriately in natural language (NL) tasks. This has implications for a wide variety of NL applications like question answering, sentence completion, paraphrase generation, and next utterance prediction in dialog systems.
Other related work includes Document-Context Language Models @cite_22 , in which the authors build multi-level recurrent neural network language models that incorporate context both from within a sentence and from previous sentences. @cite_35 use a hierarchical RNN structure for document-level as well as sentence-level modeling; they evaluate their models using word-prediction perplexity, as well as a coherence evaluation that tries to predict sentence-level ordering in a document.
{ "cite_N": [ "@cite_35", "@cite_22" ], "mid": [ "2251849926", "2197913429" ], "abstract": [ "This paper proposes a novel hierarchical recurrent neural network language model (HRNNLM) for document modeling. After establishing an RNN to capture the coherence between sentences in a document, HRNNLM integrates it as the sentence history information into the word-level RNN to predict the word sequence with cross-sentence contextual information. A two-step training approach is designed, in which sentence-level and word-level language models are approximated for the convergence in a pipeline style. Examined in the standard sentence reordering scenario, HRNNLM is shown to model sentence coherence more accurately. At the word level, experimental results also indicate a significantly lower model perplexity, followed by a better translation result when applied to a Chinese-English document translation reranking task.", "Text documents are structured on multiple levels of detail: individual words are related by syntax, but larger units of text are related by discourse structure. Existing language models generally fail to account for discourse structure, but it is crucial if we are to have language models that reward coherence and generate coherent texts. We present and empirically evaluate a set of multi-level recurrent neural network language models, called Document-Context Language Models (DCLM), which incorporate contextual information both within and beyond the sentence. In comparison with word-level recurrent neural network language models, the DCLM models obtain slightly better predictive likelihoods, and considerably better assessments of document coherence." ] }
1602.05875
2280236323
Traditional convolutional layers extract features from patches of data by applying a non-linearity on an affine function of the input. We propose a model that enhances this feature extraction process for the case of sequential data, by feeding patches of the data into a recurrent neural network and using the outputs or hidden states of the recurrent units to compute the extracted features. By doing so, we exploit the fact that a window containing a few frames of the sequential data is a sequence itself and this additional structure might encapsulate valuable information. In addition, we allow for more steps of computation in the feature extraction process, which is potentially beneficial as an affine function followed by a non-linearity can result in too simple features. Using our convolutional recurrent layers we obtain an improvement in performance in two audio classification tasks, compared to traditional convolutional layers. Tensorflow code for the convolutional recurrent layers is publicly available in this https URL
A few other works attempt to extract features from patches of data by using the hidden states of a recurrent layer. In @cite_28 the authors extract features from an image patch by using a recurrent layer that, at every time step, takes as input this image patch together with the output of the previous time step. This approach can result in better features, as creating them with a recurrent network is a more elaborate process (i.e., one with more steps of computation); still, the sequential nature of the data patches themselves is not exploited, and the process can be very computationally expensive, since the same patch is fed over and over to the recurrent net.
{ "cite_N": [ "@cite_28" ], "mid": [ "1934184906" ], "abstract": [ "In recent years, the convolutional neural network (CNN) has achieved great success in many computer vision tasks. Partially inspired by neuroscience, CNN shares many properties with the visual system of the brain. A prominent difference is that CNN is typically a feed-forward architecture while in the visual system recurrent connections are abundant. Inspired by this fact, we propose a recurrent CNN (RCNN) for object recognition by incorporating recurrent connections into each convolutional layer. Though the input is static, the activities of RCNN units evolve over time so that the activity of each unit is modulated by the activities of its neighboring units. This property enhances the ability of the model to integrate the context information, which is important for object recognition. Like other recurrent neural networks, unfolding the RCNN through time can result in an arbitrarily deep network with a fixed number of parameters. Furthermore, the unfolded network has multiple paths, which can facilitate the learning process. The model is tested on four benchmark object recognition datasets: CIFAR-10, CIFAR-100, MNIST and SVHN. With fewer trainable parameters, RCNN outperforms the state-of-the-art models on all of these datasets. Increasing the number of parameters leads to even better performance. These results demonstrate the advantage of the recurrent structure over purely feed-forward structure for object recognition." ] }
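The convolutional recurrent layer described in this record treats each length-k patch of a sequence as a short sequence of its own and encodes it with a small RNN instead of a single affine map plus nonlinearity. A scalar pure-Python sketch is below; the function names and weight values are illustrative assumptions, not the paper's TensorFlow implementation.

```python
import math

def rnn_feature(window, w_in, w_rec):
    """Run a scalar tanh-RNN over the frames of one patch and return
    its final hidden state as the extracted feature."""
    h = 0.0
    for frame in window:
        h = math.tanh(w_in * frame + w_rec * h)
    return h

def conv_recurrent_layer(seq, k, w_in=1.0, w_rec=0.5, stride=1):
    """Toy convolutional-recurrent layer: slide a length-k window over
    the sequence and encode each window with the RNN above, exploiting
    the fact that a window of sequential data is itself a sequence."""
    return [rnn_feature(seq[i:i + k], w_in, w_rec)
            for i in range(0, len(seq) - k + 1, stride)]
```

A traditional convolutional layer would replace `rnn_feature` with `tanh(dot(w, window) + b)`; the recurrent variant allows more steps of computation per patch, which is the motivation stated in the abstract.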
1602.05891
2281536697
With the increasing usage of JavaScript in web applications, there is a great demand to write JavaScript code that is reliable and maintainable. To achieve these goals, classes can be emulated in the current JavaScript standard version. In this paper, we propose a reengineering tool to identify such class-like structures and to create an object-oriented model based on JavaScript source code. The tool has a parser that loads the AST (Abstract Syntax Tree) of a JavaScript application to model its structure. It is also integrated with the Moose platform to provide powerful visualization, e.g., UML diagram and Distribution Maps, and well-known metric values for software analysis. We also provide some examples with real JavaScript applications to evaluate the tool.
There is increasing interest in JavaScript software engineering research. For example, JSNose is a tool for detecting code smells based on a combination of static and dynamic analysis @cite_0 . One of the code smells detected by JSNose, Refused Bequest, refers to subclasses that use only some of the methods and properties inherited from their parents. In contrast, JSClassFinder provides models, visualizations, and metrics about the object-oriented portion of a JavaScript system, including inheritance relationships. Although JSClassFinder is not specifically designed for code smell detection, the information provided by our tool can be used for this purpose.
{ "cite_N": [ "@cite_0" ], "mid": [ "2007425631" ], "abstract": [ "JavaScript is a powerful and flexible prototype-based scripting language that is increasingly used by developers to create interactive web applications. The language is interpreted, dynamic, weakly-typed, and has first-class functions. In addition, it interacts with other web languages such as CSS and HTML at runtime. All these characteristics make JavaScript code particularly error-prone and challenging to write and maintain. Code smells are patterns in the source code that can adversely influence program comprehension and maintainability of the program in the long term. We propose a set of 13 JavaScript code smells, collected from various developer resources. We present a JavaScript code smell detection technique called JSNOSE. Our metric-based approach combines static and dynamic analysis to detect smells in client-side code. This automated technique can help developers to spot code that could benefit from refactoring. We evaluate the smell finding capabilities of our technique through an empirical study. By analyzing 11 web applications, we investigate which smells detected by JSNOSE are more prevalent." ] }
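The class-like structures that JSClassFinder identifies are, in the pre-ES6 idiom, constructor functions plus methods attached to their prototypes. The real tool works on the parsed AST; the regex-based Python sketch below is only a toy illustration of the pattern being detected, and its function and pattern names are assumptions.

```python
import re

# Toy sketch of class-emulation detection in pre-ES6 JavaScript: a
# "class" is a capitalized constructor function together with the
# methods assigned to its prototype.  The actual tool uses an AST
# parser, not regexes; this only illustrates the structural pattern.
CTOR = re.compile(r"function\s+([A-Z]\w*)\s*\(")
METHOD = re.compile(r"(\w+)\.prototype\.(\w+)\s*=")

def find_classes(js_source):
    """Return {constructor name: [method names]} for class-like code."""
    classes = {name: [] for name in CTOR.findall(js_source)}
    for owner, meth in METHOD.findall(js_source):
        if owner in classes:
            classes[owner].append(meth)
    return classes
```

Lower-case helper functions are deliberately skipped, mirroring the heuristic that constructors are conventionally capitalized.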
1602.05314
2284646714
Is it possible to determine the location of a photo from just its pixels? While the general problem seems exceptionally difficult, photos often contain cues such as landmarks, weather patterns, vegetation, road markings, or architectural details, which in combination allow one to infer where the photo was taken. Previously, this problem has been approached using image retrieval methods. In contrast, we pose the problem as one of classification by subdividing the surface of the earth into thousands of multi-scale geographic cells, and train a deep network using millions of geotagged images. We show that the resulting model, called PlaNet, outperforms previous approaches and even attains superhuman accuracy in some cases. Moreover, we extend our model to photo albums by combining it with a long short-term memory (LSTM) architecture. By learning to exploit temporal coherence to geolocate uncertain photos, this model achieves a 50% performance improvement over the single-image model.
Given a query photo, Im2GPS @cite_28 @cite_7 retrieves similar images from millions of geotagged Flickr photos and assigns the location of the closest match to the query. Image distances are computed using a combination of global image descriptors. Im2GPS shows that with enough data, even this simple approach can achieve surprisingly good results. We discuss Im2GPS in detail in Sec. .
{ "cite_N": [ "@cite_28", "@cite_7" ], "mid": [ "2103163130", "1905312050" ], "abstract": [ "Estimating geographic information from an image is an excellent, difficult high-level computer vision problem whose time has come. The emergence of vast amounts of geographically-calibrated image data is a great reason for computer vision to start looking globally - on the scale of the entire planet! In this paper, we propose a simple algorithm for estimating a distribution over geographic locations from a single image using a purely data-driven scene matching approach. For this task, we leverage a dataset of over 6 million GPS-tagged images from the Internet. We represent the estimated image location as a probability distribution over the Earth's surface. We quantitatively evaluate our approach in several geolocation tasks and demonstrate encouraging performance (up to 30 times better than chance). We show that geolocation estimates can provide the basis for numerous other image understanding tasks such as population density estimation, land cover estimation or urban rural classification.", "In this chapter, we explore the task of global image geolocalization—estimating where on the Earth a photograph was captured. We examine variants of the “im2gps” algorithm using millions of “geotagged” Internet photographs as training data. We first discuss a simple to understand nearest-neighbor baseline. Next, we introduce a lazy-learning approach with more sophisticated features that doubles the performance of the original “im2gps” algorithm. Beyond quantifying geolocalization accuracy, we also analyze (a) how the nonuniform distribution of training data impacts the algorithm (b) how performance compares to baselines such as random guessing and land-cover recognition and (c) whether geolocalization is simply landmark or “instance level” recognition at a large scale. 
We also show that geolocation estimates can provide the basis for image understanding tasks such as population density estimation or land cover estimation. This work was originally described, in part, in “im2gps” [9] which was the first attempt at global geolocalization using Internet-derived training data." ] }
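The Im2GPS retrieval approach discussed above reduces, at its simplest, to nearest-neighbor lookup over geotagged descriptors: the query inherits the location of its closest visual match. A minimal Python sketch, with plain tuples standing in for the global image descriptors of the paper, is below.

```python
def geolocate(query_desc, database):
    """Im2GPS-style 1-NN geolocalization (toy sketch): assign the query
    the location of the most visually similar geotagged photo.

    query_desc -- descriptor of the query image (tuple of floats,
                  standing in for real global image descriptors)
    database   -- list of (descriptor, (lat, lng)) pairs
    """
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, best_loc = min(database, key=lambda entry: sq_dist(query_desc, entry[0]))
    return best_loc
```

As the abstracts note, with millions of reference photos even this simple scheme performs surprisingly well; the lazy-learning variant replaces the single nearest neighbor with a weighted vote over the k closest matches.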
1602.05314
2284646714
Is it possible to determine the location of a photo from just its pixels? While the general problem seems exceptionally difficult, photos often contain cues such as landmarks, weather patterns, vegetation, road markings, or architectural details, which in combination allow one to infer where the photo was taken. Previously, this problem has been approached using image retrieval methods. In contrast, we pose the problem as one of classification by subdividing the surface of the earth into thousands of multi-scale geographic cells, and train a deep network using millions of geotagged images. We show that the resulting model, called PlaNet, outperforms previous approaches and even attains superhuman accuracy in some cases. Moreover, we extend our model to photo albums by combining it with a long short-term memory (LSTM) architecture. By learning to exploit temporal coherence to geolocate uncertain photos, this model achieves a 50% performance improvement over the single-image model.
Because photo coverage in rural areas is sparse, @cite_38 @cite_23 make additional use of satellite aerial imagery. @cite_23 use CNNs to learn a joint embedding for ground and aerial images and localize a query image by matching it against a database of aerial images. @cite_24 take a similar approach and use a CNN to transform ground-level features to the feature space of aerial images.
{ "cite_N": [ "@cite_24", "@cite_38", "@cite_23" ], "mid": [ "2199890863", "2081418428", "1946093182" ], "abstract": [ "We propose to use deep convolutional neural networks to address the problem of cross-view image geolocalization, in which the geolocation of a ground-level query image is estimated by matching to georeferenced aerial images. We use state-of-the-art feature representations for ground-level images and introduce a cross-view training approach for learning a joint semantic feature representation for aerial images. We also propose a network architecture that fuses features extracted from aerial images at multiple spatial scales. To support training these networks, we introduce a massive database that contains pairs of aerial and ground-level images from across the United States. Our methods significantly out-perform the state of the art on two benchmark datasets. We also show, qualitatively, that the proposed feature representations are discriminative at both local and continental spatial scales.", "The recent availability of large amounts of geotagged imagery has inspired a number of data driven solutions to the image geolocalization problem. Existing approaches predict the location of a query image by matching it to a database of georeferenced photographs. While there are many geotagged images available on photo sharing and street view sites, most are clustered around landmarks and urban areas. The vast majority of the Earth's land area has no ground level reference photos available, which limits the applicability of all existing image geolocalization methods. On the other hand, there is no shortage of visual and geographic data that densely covers the Earth - we examine overhead imagery and land cover survey data - but the relationship between this data and ground level query photographs is complex. In this paper, we introduce a cross-view feature translation approach to greatly extend the reach of image geolocalization methods. 
We can often localize a query even if it has no corresponding ground level images in the database. A key idea is to learn the relationship between ground level appearance and overhead appearance and land cover attributes from sparsely available geotagged ground-level images. We perform experiments over a 1600 km² region containing a variety of scenes and land cover types. For each query, our algorithm produces a probability density over the region of interest.", "The recent availability of geo-tagged images and rich geospatial data has inspired a number of algorithms for image based geolocalization. Most approaches predict the location of a query image by matching to ground-level images with known locations (e.g., street-view data). However, most of the Earth does not have ground-level reference photos available. Fortunately, more complete coverage is provided by oblique aerial or “bird's eye” imagery. In this work, we localize a ground-level query image by matching it to a reference database of aerial imagery. We use publicly available data to build a dataset of 78K aligned cross-view image pairs. The primary challenge for this task is that traditional computer vision approaches cannot handle the wide baseline and appearance variation of these cross-view pairs. We use our dataset to learn a feature representation in which matching views are near one another and mismatched views are far apart. Our proposed approach, Where-CNN, is inspired by deep learning success in face verification and achieves significant improvements over traditional hand-crafted features and existing deep features learned from other large-scale databases. We show the effectiveness of Where-CNN in finding matches between street view and aerial view imagery and demonstrate the ability of our learned features to generalize to novel locations." ] }
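PlaNet's multi-scale geographic cells come from adaptively subdividing the earth so that photo-dense regions get finer cells. The paper uses the S2 cell hierarchy; the Python sketch below substitutes a plain lat/lng quadtree to illustrate the same adaptive-partition idea, with hypothetical parameter names.

```python
def partition(photos, box=(-90.0, 90.0, -180.0, 180.0), max_per_cell=2):
    """PlaNet-style adaptive cell partition (toy sketch; the paper uses
    S2 cells, not a lat/lng quadtree).  Any cell holding more than
    `max_per_cell` geotagged photos is split into four sub-cells, so
    dense regions end up with finer cells.

    photos -- list of (lat, lng) pairs
    box    -- (lat0, lat1, lng0, lng1) bounds of the current cell
    Returns a list of (box, photos_in_box) leaf cells.
    """
    lat0, lat1, lng0, lng1 = box
    inside = [(la, ln) for la, ln in photos
              if lat0 <= la < lat1 and lng0 <= ln < lng1]
    if len(inside) <= max_per_cell:
        return [(box, inside)]
    mla, mln = (lat0 + lat1) / 2, (lng0 + lng1) / 2
    leaves = []
    for sub in ((lat0, mla, lng0, mln), (lat0, mla, mln, lng1),
                (mla, lat1, lng0, mln), (mla, lat1, mln, lng1)):
        leaves.extend(partition(inside, sub, max_per_cell))
    return leaves
```

The network then treats each leaf cell as one class label, turning geolocalization into classification as described in the abstract.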