Columns: aid (string, 9–15 chars), mid (string, 7–10 chars), abstract (string, 78–2.56k chars), related_work (string, 92–1.77k chars), ref_abstract (dict).
1505.01749
2952215797
We propose an object detection system that relies on a multi-region deep convolutional neural network (CNN) that also encodes semantic segmentation-aware features. The resulting CNN-based representation aims at capturing a diverse set of discriminative appearance factors and exhibits localization sensitivity that is essential for accurate object localization. We exploit the above properties of our recognition module by integrating it on an iterative localization mechanism that alternates between scoring a box proposal and refining its location with a deep CNN regression model. Thanks to the efficient use of our modules, we detect objects with very high localization accuracy. On the detection challenges of PASCAL VOC2007 and PASCAL VOC2012 we achieve mAP of 78.2 and 73.9 correspondingly, surpassing any other published work by a significant margin.
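The iterative localization mechanism described above (alternating between scoring a box proposal and refining its location with a regression model) can be sketched as follows. Here `score_box` and `regress_box` are hypothetical toy stand-ins for the paper's multi-region recognition CNN and CNN-based box regressor, not the actual networks; the "image" argument is ignored by the toy functions.

```python
import numpy as np

# Hidden ground-truth box used only to make the toy stand-ins behave
# like a trained scorer/regressor would.
TRUE_BOX = np.array([30.0, 40.0, 80.0, 90.0])

def score_box(image, box):
    # Toy recognition module: higher score when the box is near the object.
    return -np.abs(np.asarray(box, dtype=float) - TRUE_BOX).sum()

def regress_box(image, box):
    # Toy regression module: nudges the box toward the object.
    box = np.asarray(box, dtype=float)
    return box + 0.5 * (TRUE_BOX - box)

def iterative_localization(image, proposal, n_iters=4):
    """Alternate between scoring a proposal and refining its location."""
    box = np.asarray(proposal, dtype=float)
    for _ in range(n_iters):
        _ = score_box(image, box)      # recognition / scoring step
        box = regress_box(image, box)  # localization refinement step
    return box, score_box(image, box)

box, score = iterative_localization(None, [0, 0, 50, 50])
```

Each refinement halves the localization error in this toy setup, so a handful of iterations already brings the proposal close to the object, which is the intuition behind the high localization accuracy reported above.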
Contemporary to our work are the approaches of @cite_3 @cite_26 @cite_33 , which are also based on the SPP-Net framework. In @cite_3 , the SPP framework is improved by replacing the sub-network component applied on the convolutional features extracted from the whole image with a deeper convolutional network. In @cite_26 , the focus is on simplifying the training phase of SPP-Net and R-CNN and on speeding up both training and testing. Moreover, by fine-tuning the whole network and adopting a multi-task objective that combines a box classification loss with a box regression loss, they improve the accuracy of their system. Finally, @cite_33 extends @cite_26 by adding a new sub-network component that predicts class-independent proposals, making the system both faster and independent of object proposal algorithms.
{ "cite_N": [ "@cite_26", "@cite_33", "@cite_3" ], "mid": [ "", "2953106684", "2952009708" ], "abstract": [ "", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.", "Most object detectors contain two important components: a feature extractor and an object classifier. The feature extractor has rapidly evolved with significant research efforts leading to better deep convolutional architectures. The object classifier, however, has not received much attention and many recent systems (like SPPnet and Fast/Faster R-CNN) use simple multi-layer perceptrons. This paper demonstrates that carefully designing deep networks for object classification is just as important. 
We experiment with region-wise classifier networks that use shared, region-independent convolutional features. We call them \"Networks on Convolutional feature maps\" (NoCs). We discover that aside from deep feature maps, a deep and convolutional per-region classifier is of particular importance for object detection, whereas latest superior image classification models (such as ResNets and GoogLeNets) do not directly lead to good detection accuracy without using such a per-region classifier. We show by experiments that despite the effective ResNets and Faster R-CNN systems, the design of NoCs is an essential element for the 1st-place winning entries in ImageNet and MS COCO challenges 2015." ] }
1505.01662
2952774715
Hereditarily finite (HF) set theory provides a standard universe of sets, but with no infinite sets. Its utility is demonstrated through a formalisation of the theory of regular languages and finite automata, including the Myhill-Nerode theorem and Brzozowski's minimisation algorithm. The states of an automaton are HF sets, possibly constructed by product, sum, powerset and similar operations.
There is a great body of prior work. One approach involves working constructively, in some sort of type theory. Constable's group has formalised automata @cite_5 in Nuprl, including the Myhill-Nerode theorem. Using type theory in the form of Coq and its Ssreflect library, @cite_1 formalise much of the same material as the present paper. They omit @math -transitions and Brzozowski's algorithm and add the pumping lemma and Kleene's algorithm for translating a DFA to a regular expression. Their development is of a similar length, under 1400 lines, and they allow the states of a finite automaton to be given by any finite type. In a substantial development, Braibant and Pous @cite_13 have implemented a tactic for solving equations in Kleene algebras by implementing efficient finite automata algorithms in Coq. They represent states by integers.
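As a point of reference for the algorithms being formalised, Brzozowski's minimisation can be sketched in a few lines: reverse the automaton, determinise by subset construction, and repeat. The encoding below (tuples of Python sets) is an illustrative assumption, not the HF-set representation used in the formalisation, and ε-transitions are omitted as in the Coq development mentioned above.

```python
from itertools import chain

def reverse(nfa):
    """Reverse an NFA given as (states, alphabet, delta, inits, finals),
    where delta maps (state, symbol) -> set of successor states."""
    states, sigma, delta, inits, finals = nfa
    rdelta = {}
    for (q, a), qs in delta.items():
        for p in qs:
            rdelta.setdefault((p, a), set()).add(q)
    return (states, sigma, rdelta, set(finals), set(inits))

def determinize(nfa):
    """Subset construction, keeping only reachable states (as frozensets)."""
    states, sigma, delta, inits, finals = nfa
    start = frozenset(inits)
    dstates, ddelta, work = {start}, {}, [start]
    while work:
        S = work.pop()
        for a in sigma:
            T = frozenset(chain.from_iterable(delta.get((q, a), ()) for q in S))
            ddelta[(S, a)] = {T}          # DFA edge, kept in NFA form
            if T not in dstates:
                dstates.add(T)
                work.append(T)
    dfinals = {S for S in dstates if S & finals}
    return (dstates, sigma, ddelta, {start}, dfinals)

def brzozowski(nfa):
    """Minimise: reverse, determinise, reverse, determinise."""
    return determinize(reverse(determinize(reverse(nfa))))

# A non-minimal 3-state DFA for "strings over {a,b} ending in a":
nfa = ({0, 1, 2}, {'a', 'b'},
       {(0, 'a'): {1}, (0, 'b'): {0}, (1, 'a'): {2}, (1, 'b'): {0},
        (2, 'a'): {2}, (2, 'b'): {0}},
       {0}, {1, 2})
minimal = brzozowski(nfa)   # the minimal DFA for this language has 2 states
```

The correctness argument (a reverse-determinised automaton is minimal) is exactly the kind of fact the HF-set formalisation discharges mechanically.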
{ "cite_N": [ "@cite_13", "@cite_5", "@cite_1" ], "mid": [ "2963188972", "1598988887", "" ], "abstract": [ "We present a reflexive tactic for deciding the equational theory of Kleene algebras in the Coq proof assistant. This tactic relies on a careful implementation of efficient finite automata algorithms, so that it solves casual equations instantaneously and properly scales to larger expressions. The decision procedure is proved correct and complete: correctness is established w.r.t. any model by formalising Kozen's initiality theorem; a counter-example is returned when the given equation does not hold. The correctness proof is challenging: it involves both a precise analysis of the underlying automata algorithms and a lot of algebraic reasoning. In particular, we have to formalise the theory of matrices over a Kleene algebra. We build on the recent addition of firstorder typeclasses in Coq in order to work efficiently with the involved algebraic structures.", "Adhesive dressing compositions are disclosed which are useful in restoring and improving adhesive joint surfaces such as grout surfaces in ceramic tile installations by easy, efficient methods. Such compositions form stain resistant, water repellent, washable coverings which adhere to most adhesive surfaces and further have properties of preferential adherability to certain adhesive surfaces compared with adjacent adherend surfaces. The compositions comprise a polymer in the form of an emulsion, an alkali-thickenable polymer, an alkaline material and water with other components including pigments, plasticizers and solvents.", "" ] }
1505.01662
2952774715
Hereditarily finite (HF) set theory provides a standard universe of sets, but with no infinite sets. Its utility is demonstrated through a formalisation of the theory of regular languages and finite automata, including the Myhill-Nerode theorem and Brzozowski's minimisation algorithm. The states of an automaton are HF sets, possibly constructed by product, sum, powerset and similar operations.
Recent Isabelle developments explicitly bypass automata theory. @cite_2 prove the Myhill-Nerode theorem using regular expressions. This is a significant feat, especially considering that the theorem's underlying intuitions come from automata. Current work on regular expression equivalence @cite_10 @cite_0 continues to focus on regular expressions rather than finite automata.
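For contrast with the automaton-based route, the regular-expression approach of @cite_10 can be sketched directly: equivalence is decided by building a bisimulation between Brzozowski derivatives of the two expressions. The sketch below uses only light simplification (identities, annihilators, idempotence of syntactically equal alternatives), which suffices for small examples; a complete checker would quotient derivatives by full associativity, commutativity and idempotence of alternation to guarantee termination.

```python
# Regular expressions as tuples: ('empty',), ('eps',), ('chr', c),
# ('alt', r, s), ('cat', r, s), ('star', r).

def alt(r, s):
    # Simplifying constructor for r + s.
    if r == ('empty',): return s
    if s == ('empty',): return r
    if r == s: return r
    return ('alt', r, s)

def cat(r, s):
    # Simplifying constructor for r . s.
    if ('empty',) in (r, s): return ('empty',)
    if r == ('eps',): return s
    if s == ('eps',): return r
    return ('cat', r, s)

def nullable(r):
    t = r[0]
    if t in ('eps', 'star'): return True
    if t in ('empty', 'chr'): return False
    if t == 'alt': return nullable(r[1]) or nullable(r[2])
    return nullable(r[1]) and nullable(r[2])  # 'cat'

def deriv(r, a):
    """Brzozowski derivative of r with respect to symbol a."""
    t = r[0]
    if t in ('empty', 'eps'): return ('empty',)
    if t == 'chr': return ('eps',) if r[1] == a else ('empty',)
    if t == 'alt': return alt(deriv(r[1], a), deriv(r[2], a))
    if t == 'cat':
        d = cat(deriv(r[1], a), r[2])
        return alt(d, deriv(r[2], a)) if nullable(r[1]) else d
    return cat(deriv(r[1], a), r)  # 'star'

def equivalent(r, s, sigma):
    """Search for a bisimulation relating derivatives of r and s."""
    seen, work = set(), [(r, s)]
    while work:
        r, s = work.pop()
        if (r, s) in seen: continue
        if nullable(r) != nullable(s): return False
        seen.add((r, s))
        for a in sigma:
            work.append((deriv(r, a), deriv(s, a)))
    return True

a, b, c = ('chr', 'a'), ('chr', 'b'), ('chr', 'c')
ok1 = equivalent(('cat', a, ('alt', b, c)),
                 ('alt', ('cat', a, b), ('cat', a, c)), {'a', 'b', 'c'})
ok2 = equivalent(('star', ('star', a)), ('star', a), {'a'})
```

This is also why the datatype view is attractive for theorem provers, as @cite_2 observes: derivatives are defined by plain structural recursion, with no graph or matrix representation needed.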
{ "cite_N": [ "@cite_0", "@cite_10", "@cite_2" ], "mid": [ "", "1968602591", "2054695370" ], "abstract": [ "", "We describe and verify an elegant equivalence checker for regular expressions. It works by constructing a bisimulation relation between (derivatives of) regular expressions. By mapping regular expressions to binary relations, an automatic and complete proof method for (in)equalities of binary relations over union, composition and (reflexive) transitive closure is obtained. The verification is carried out in the theorem prover Isabelle HOL, yielding a practically useful decision procedure.", "There are numerous textbooks on regular languages. Many of them focus on finite automata for proving properties. Unfortunately, automata are not so straightforward to formalise in theorem provers. The reason is that natural representations for automata are graphs, matrices or functions, none of which are inductive datatypes. Regular expressions can be defined straightforwardly as a datatype and a corresponding reasoning infrastructure comes for free in theorem provers. We show in this paper that a central result from formal language theory--the Myhill-Nerode Theorem--can be recreated using only regular expressions. From this theorem many closure properties of regular languages follow." ] }
1505.01151
2245427241
Plausibility measures are structures for reasoning in the face of uncertainty that generalize probabilities, unifying them with weaker structures like possibility measures and comparative probability relations. So far, the theory of plausibility measures has only been developed for classical sample spaces. In this paper, we generalize the theory to test spaces, so that they can be applied to general operational theories, and to quantum theory in particular. Our main results are two theorems on when a plausibility measure agrees with a probability measure, i.e. when its comparative relations coincide with those of a probability measure. For strictly finite test spaces we obtain a precise analogue of the classical result that the Archimedean condition is necessary and sufficient for agreement between a plausibility and a probability measure. In the locally finite case, we prove a slightly weaker result that the Archimedean condition implies almost agreement.
On the quantum side, Foulis, Randall and Piron investigated a notion on test spaces @cite_28 that, in our terminology, corresponds to possibility measures on test spaces. A plausibility measure on a test space is a generalization of this, and can be viewed as a way of unifying it with the usual notion of a probability measure, or state, on a test space.
{ "cite_N": [ "@cite_28" ], "mid": [ "2013598141" ], "abstract": [ "Attempts to derive the Born rule, either in the Many Worlds or Copenhagen interpretation, are unsatisfactory for systems with only a finite number of degrees of freedom. In the case of Many Worlds this is a serious problem, since its goal is to account for apparent collapse phenomena, including the Born rule for probabilities, assuming only unitary evolution of the wavefunction. For finite number of degrees of freedom, observers on the vast majority of branches would not deduce the Born rule. However, discreteness of the quantum state space, even if extremely tiny, may restore the validity of the usual arguments." ] }
1505.01181
2950931348
What would a cellular network designed for maximal energy efficiency look like? To answer this fundamental question, tools from stochastic geometry are used in this paper to model future cellular networks and obtain a new lower bound on the average uplink spectral efficiency. This enables us to formulate a tractable uplink energy efficiency (EE) maximization problem and solve it analytically with respect to the density of base stations (BSs), the transmit power levels, the number of BS antennas and users per cell, and the pilot reuse factor. The closed-form expressions obtained from this general EE maximization framework provide valuable insights into the interplay between the optimization variables, hardware characteristics, and propagation environment. Small cells are proved to give high EE, but the EE improvement saturates quickly with the BS density. Interestingly, the maximal EE is achieved by also equipping the BSs with multiple antennas and operating in a "massive MIMO" fashion, where the array gain from coherent detection mitigates interference and the multiplexing of many users reduces the energy cost per user.
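The saturation behaviour mentioned above can be illustrated with a deliberately crude model; the constants and functional form below are assumptions for illustration, not the paper's closed-form expressions. Per-cell spectral efficiency is taken as density-invariant in the interference-limited regime, so area throughput scales with BS density while area power has a static overhead plus a per-BS term.

```python
import numpy as np

S = 2.0          # bit/s/Hz per cell (assumed density-invariant)
P_static = 10.0  # W/km^2, density-independent overhead (backhaul, core)
P_bs = 1.0       # W per deployed base station (PA + circuits)

lam = np.linspace(0.1, 50, 200)          # BS density [BS/km^2]
ee = lam * S / (P_static + lam * P_bs)   # toy EE [bit/J/Hz per km^2]

# EE grows with density but saturates at S / P_bs as lam -> infinity,
# mirroring the "small cells help, but the gain saturates" observation.
```

Densifying further only amortises the static overhead; once the per-BS power dominates, the EE curve flattens, which is the qualitative effect the paper's analysis makes precise.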
The EE analysis of multi-cellular networks is much more involved than in the single-cell case due to the complicated network topology and the arising inter-cell interference. The simplest approach is to rely on heavy Monte Carlo simulations; attempts in this direction can be found in @cite_36 and @cite_37 . Unfortunately, Monte Carlo simulated results are often anecdotal, since one cannot separate fundamental properties from behaviors induced by the parameter selection. Alternatively, simplified network topologies can be considered, such as the Wyner model @cite_31 or symmetric grid-based deployments @cite_42 . While attractive for its analytical simplicity, the Wyner model does not capture well the essential characteristics of real, practical networks @cite_7 . Similarly, symmetric grid-based models cannot capture the irregular structure of small-cell network deployments.
{ "cite_N": [ "@cite_37", "@cite_7", "@cite_36", "@cite_42", "@cite_31" ], "mid": [ "2072339605", "2131070905", "2076138763", "2147056723", "2156315971" ], "abstract": [ "Energy efficiency (EE) is becoming an important design goal for wireless communication systems providing high spectral efficiency (SE). Both massive multi-input multi-output (MIMO) and small cell network (SCN) are expected to achieve high EE for high throughput cellular networks, though using different mechanisms. Massive MIMO improves EE by exploiting a large array gain, while SCN improves EE by deploying a large number of low-power base stations (BSs) to reduce the propagation loss and increase the opportunity of BS sleep. In this paper, we compare the EEs as well as the SEs of Massive MIMO and SCN. For a fair comparison, we consider a multi-cell network with the same user density, antenna density and average cell-edge signal-to-noise-ratio (SNR). Perfect channel information is assumed, and three BS sleep strategies are considered. Our analysis shows that the EE of SCN increases with the cell size shrinking, and the achievable SEs of SCN and Massive MIMO increase with the cell-edge SNR. When the number of cells is large, SCN is always more energy efficient than Massive MIMO. On the other hand, when the number of cells is small, Massive MIMO achieves higher EE than SCN when the circuit power consumptions of Massive MIMO are much lower than SCN.", "The Wyner model has been widely used to model and analyze cellular networks due to its simplicity and analytical tractability. Its key aspects include fixed user locations and the deterministic and homogeneous interference intensity. While clearly a significant simplification of a real cellular system, which has random user locations and interference levels that vary by several orders of magnitude over a cell, a common presumption by theorists is that the Wyner model nevertheless captures the essential aspects of cellular interactions. But is this true? 
To answer this question, we compare the Wyner model to a model that includes random user locations and fading. We consider both uplink and downlink transmissions and both outage-based and average-based metrics. For the uplink, for both metrics, we conclude that the Wyner model is in fact quite accurate for systems with a sufficient number of simultaneous users, e.g., a CDMA system. Conversely, it is broadly inaccurate otherwise. Turning to the downlink, the Wyner model becomes inaccurate even for systems with a large number of simultaneous users. In addition, we derive an approximation for the main parameter in the Wyner model - the interference intensity term, which depends on the path loss exponent.", "The energy consumption of different cellular network architectures are analyzed. In particular, a comparison of the transmit energy consumption between a single large cell with multiple co-located antennas, multiple micro-cells with a single antenna at each cell, and a large cell with a distributed antenna system are presented. The influence of different system parameters such as cell size, spatial distribution of the users, and the availability of channel state information (CSI) toward the total required transmit energy are analyzed. It is shown that the current macro-cellular architecture with co-located antennas has poor energy efficiency in the absence of CSI, but has better energy efficiency than small cells when perfect CSI is available. Moreover, macro-cells with distributed antennas have the best energy efficiency of all three architectures under perfect CSI. These results shed light on design guidelines to improve the energy efficiency of cellular network architectures.", "Assume that a multi-user multiple-input multiple-output (MIMO) system is designed from scratch to uniformly cover a given area with maximal energy efficiency (EE). What are the optimal number of antennas, active users, and transmit power? 
The aim of this paper is to answer this fundamental question. We consider jointly the uplink and downlink with different processing schemes at the base station and propose a new realistic power consumption model that reveals how the above parameters affect the EE. Closed-form expressions for the EE-optimal value of each parameter, when the other two are fixed, are provided for zero-forcing (ZF) processing in single-cell scenarios. These expressions prove how the parameters interact. For example, in sharp contrast to common belief, the transmit power is found to increase (not to decrease) with the number of antennas. This implies that energy-efficient systems can operate in high signal-to-noise ratio regimes in which interference-suppressing signal processing is mandatory. Numerical and analytical results show that the maximal EE is achieved by a massive MIMO setup wherein hundreds of antennas are deployed to serve a relatively large number of users using ZF processing. The numerical results show the same behavior under imperfect channel state information and in symmetric multi-cell scenarios.", "We obtain Shannon-theoretic limits for a very simple cellular multiple-access system. In our model the received signal at a given cell site is the sum of the signals transmitted from within that cell plus a factor α (0 ≤ α ≤ 1) times the sum of the signals transmitted from the adjacent cells plus ambient Gaussian noise. Although this simple model is scarcely realistic, it nevertheless has enough meat so that the results yield considerable insight into the workings of real systems. We consider both a one-dimensional linear cellular array and the familiar two-dimensional hexagonal cellular pattern. The discrete-time channel is memoryless. We assume that N contiguous cells have active transmitters in the one-dimensional case, and that N² contiguous cells have active transmitters in the two-dimensional case. There are K transmitters per cell. 
Most of our results are obtained for the limiting case as N → ∞. The results include the following. (1) We define C_N, Ĉ_N as the largest achievable rate per transmitter in the usual Shannon-theoretic sense in the one- and two-dimensional cases, respectively (assuming that all signals are jointly decoded). We find expressions for lim_{N→∞} C_N and lim_{N→∞} Ĉ_N. (2) As the interference parameter α increases from 0, C_N and Ĉ_N increase or decrease according to whether the signal-to-noise ratio is less than or greater than unity. (3) Optimal performance is attainable using TDMA within the cell, but using TDMA for adjacent cells is distinctly suboptimal. (4) We suggest a scheme which does not require joint decoding of all the users, and is, in many cases, close to optimal." ] }
1505.01173
1917455104
In this paper, we propose several novel deep learning methods for object saliency detection based on the powerful convolutional neural networks. In our approach, we use a gradient descent method to iteratively modify an input image based on the pixel-wise gradients to reduce a cost function measuring the class-specific objectness of the image. The pixel-wise gradients can be efficiently computed using the back-propagation algorithm. The discrepancy between the modified image and the original one may be used as a saliency map for the image. Moreover, we have further proposed several new training methods to learn saliency-specific convolutional nets for object saliency detection, in order to leverage the available pixel-wise segmentation information. Our methods are extremely computationally efficient (processing 20-40 images per second in one GPU). In this work, we use the computed saliency maps for image segmentation. Experimental results on two benchmark tasks, namely Microsoft COCO and Pascal VOC 2012, have shown that our proposed methods can generate high-quality salience maps, clearly outperforming many existing methods. In particular, our approaches excel in handling many difficult images, which contain complex background, highly-variable salient objects, multiple objects, and or very small salient objects.
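The core loop described above, gradient descent on the input pixels to reduce a class-specific objectness cost, with the discrepancy between the modified and original image serving as the saliency map, can be sketched with a hypothetical linear "network" standing in for the CNN. In the real system the pixel-wise gradients come from back-propagation through the convolutional net; here the gradient is available in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.random(64)                 # flattened toy "image" in [0, 1)
w = np.zeros(64)
w[20:30] = 1.0                      # objectness depends on pixels 20..29 only

def objectness_cost(x):
    # Hypothetical class-specific objectness cost C(x) = w . x.
    return float(w @ x)

x = x0.copy()
for _ in range(50):
    grad = w                        # dC/dx for the linear toy model
    x -= 0.1 * grad                 # modify the image to reduce the cost
    x = np.clip(x, 0.0, 1.0)        # keep pixel values in a valid range

saliency = np.abs(x - x0)           # discrepancy map = saliency map
```

Only the pixels the cost actually depends on get modified, so the discrepancy map highlights exactly the "object" region, which is the mechanism the proposed methods exploit at CNN scale.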
Recently, several deep learning techniques have been proposed for object detection and semantic image segmentation @cite_9 @cite_15 @cite_19 . These methods typically use DCNNs to examine a large number of region proposals generated by other algorithms, and use the features produced by the DCNNs, along with post-stage classifiers, to localize the target objects. Early methods rely on bounding boxes for object detection; more recently, an increasing number of methods directly generate pixel-wise image segmentations, e.g. @cite_19 . In this paper, instead of directly generating high-level semantic segmentations from DCNNs, we propose to use DCNNs to generate middle-level saliency maps in a very efficient way, which may then be fed to traditional computer vision algorithms for various vision tasks, such as semantic segmentation and video tracking.
{ "cite_N": [ "@cite_19", "@cite_9", "@cite_15" ], "mid": [ "", "1487583988", "2102605133" ], "abstract": [ "", "We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classification tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat.", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. 
We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn." ] }
1505.00946
1865017480
Anycast routing is an IP solution that allows packets to be routed to the topologically nearest server. Over the last years it has been commonly adopted to manage services running on top of UDP, e.g., public DNS resolvers, multicast rendez-vous points, etc. Recently, however, the Internet has witnessed the growth of new Anycast-enabled Content Delivery Networks (A-CDNs), such as CloudFlare and EdgeCast, which provide their web services (i.e., TCP traffic) entirely through anycast. To the best of our knowledge, little is known in the literature about the nature and the dynamics of such traffic. For instance, since anycast depends on routing, the question is how stable the paths toward the nearest server are. To shed some light on this question, in this work we provide a first look at A-CDN traffic by combining active and passive measurements. In particular, building upon our previous work, we use active measurements to identify and geolocate A-CDN caches starting from a large set of IP addresses related to the top-100k Alexa websites. We then look at the traffic of those caches in the wild, using a large passive dataset collected from a European ISP. We find that several A-CDN servers are encountered on a daily basis when browsing the Internet. Routes to A-CDN servers are very stable, with the few observed changes occurring on a monthly basis (in contrast to the more dynamic traffic policies of traditional CDNs). Overall, A-CDNs are a reality worth further investigation.
A large body of work in the literature investigates the impact of anycast usage on service performance by measuring server proximity @cite_37 @cite_1 @cite_6 @cite_15 @cite_30 , client-server affinity @cite_40 @cite_29 @cite_39 @cite_6 @cite_15 @cite_31 , server availability @cite_1 @cite_15 @cite_12 , and load-balancing @cite_7 @cite_41 . Several studies @cite_32 @cite_9 @cite_0 propose architectural improvements to address the performance shortcomings of IP anycast in terms of scalability and server selection. More recently, there has been a renewed interest in IP anycast and particularly in techniques to detect anycast usage @cite_24 , and to enumerate @cite_19 and geolocate @cite_26 anycast replicas. While in @cite_26 the focus is only on DNS servers, in this work, we apply the same anycast enumeration and geolocation technique to form an initial census of anycast IP addresses serving web traffic.
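One common idea behind the anycast detection techniques cited above is a speed-of-light violation test: if two geolocated vantage points both reach the same IP with RTTs too small to be compatible with any single physical server location, the address must be announced from at least two sites. The sketch below is illustrative only; the propagation-speed constant and the probe format are assumptions, not the exact method of @cite_24 or @cite_26.

```python
from math import radians, sin, cos, asin, sqrt

C_FIBER_KM_PER_MS = 100.0  # ~2/3 of c: assumed propagation speed in fiber

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(h))

def is_anycast(probes):
    """probes: list of (lat, lon, rtt_ms) from geolocated vantage points.

    Each probe confines the target to a disk of radius rtt/2 * speed
    around the vantage point; if two disks cannot overlap, no single
    location can explain both RTTs, so the IP is anycast.
    """
    for i, (la1, lo1, r1) in enumerate(probes):
        for la2, lo2, r2 in probes[i + 1:]:
            reach = (r1 + r2) / 2 * C_FIBER_KM_PER_MS
            if haversine_km(la1, lo1, la2, lo2) > reach:
                return True
    return False
```

For example, 5 ms RTTs from both Paris and Sydney are mutually inconsistent (the cities are ~17,000 km apart), whereas 5 ms from Paris and 10 ms from London are compatible with one server in between.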
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_31", "@cite_26", "@cite_7", "@cite_41", "@cite_29", "@cite_9", "@cite_1", "@cite_32", "@cite_6", "@cite_39", "@cite_0", "@cite_24", "@cite_40", "@cite_19", "@cite_15", "@cite_12" ], "mid": [ "", "", "", "", "2168836238", "", "", "", "1996319211", "1979869325", "", "", "2070125496", "", "", "2156127222", "", "" ], "abstract": [ "", "", "", "", "Despite its growing use in critical infrastructure services, the performance of IP(v4) Anycast and its interaction with IP routing practices is not well understood. In this paper, we present the results of a detailed measurement study of IP Anycast. Our study uses a two-pronged approach. First, using a variant of known latency estimation techniques, we measure the performance of current commercially operational IP Anycast deployments from a large number (>20,000) of vantage points. Second, we deploy our own small-scale anycast service that allows us to perform controlled tests under different deployment and failure scenarios. To the best of our knowledge, our study represents the first large-scale evaluation of existing anycast services and the first evaluation of the behavior of IP Anycast under failure.We find that: (1) IP Anycast, if deployed in an ad-hoc manner, does not offer good latency-based proximity, (2) IP Anycast, if deployed in an ad-hoc manner, does not provide fast failover to clients, (3) IP Anycast typically offers good affinity to all clients with the exception of those that explicitly load balance traffic across multiple providers, (4) IP Anycast, by itself, is not effective in balancing client load across multiple sites. We thus propose and evaluate practical means by which anycast deployments can achieve good proximity, fast failover and control over the distribution of client load. 
Overall, our results suggest that an IP Anycast service, if deployed carefully, can offer good proximity, load balance, and failover behavior.", "", "", "", "We present the initial results from our evaluation study on the performance implications of anycast in DNS, using four anycast servers deployed at top-level DNS zones. Our results show that 15% to 55% of the queries sent to an anycast group are answered by the topologically closest server and at least 10% of the queries experience an additional delay in the order of 100ms. While increased availability is one of the supposed advantages of anycast, we found that outages can last up to multiple minutes, mainly due to slow BGP convergence. On the other hand, the number of outages observed was fairly small, suggesting that anycast provides a generally stable service.", "IP anycast, with its innate ability to find nearby resources in a robust and efficient fashion, has long been considered an important means of service discovery. The growth of P2P applications presents appealing new uses for IP anycast. Unfortunately, IP anycast suffers from serious problems: it is very hard to deploy globally, it scales poorly by the number of anycast groups, and it lacks important features like load-balancing. As a result, its use is limited to a few critical infrastructure services such as DNS root servers. The primary contribution of this paper is a new IP anycast architecture, PIAS, that overcomes these problems while largely maintaining the strengths of IP anycast. PIAS makes use of a proxy overlay that advertises IP anycast addresses on behalf of group members and tunnels anycast packets to those members. The paper presents a detailed design of PIAS and evaluates its scalability and efficiency through simulation. We also present preliminary measurement results on anycasted DNS root servers that suggest that IP anycast provides good affinity. 
Finally, we describe how PIAS supports two important P2P and overlay applications.", "", "", "This paper proposes GIA, a scalable architecture for global IP-anycast. Existing designs for providing IP-anycast must either globally distribute routes to individual anycast groups, or confine each anycast group to a pre-configured topological region. The first approach does not scale because of excessive growth in the routing tables, whereas the second one severely limits the utility of the service. Our design scales by dividing inter-domain anycast routing into two components. The first component builds inexpensive default anycast routes that consume no bandwidth or storage space. The second component, controlled by the edge domains, generates enhanced anycast routes that are customized according to the beneficiary domain's interests. We evaluate the performance of our design using simulation, and prove its practicality by implementing it in the Multi-threaded Routing Toolkit.", "", "", "IP anycast is a central part of production DNS. While prior work has explored proximity, affinity and load balancing for some anycast services, there has been little attention to third-party discovery and enumeration of components of an anycast service. Enumeration can reveal abnormal service configurations, benign masquerading or hostile hijacking of anycast services, and help characterize anycast deployment. In this paper, we discuss two methods to identify and characterize anycast nodes. The first uses an existing anycast diagnosis method based on CHAOS-class DNS records but augments it with traceroute to resolve ambiguities. The second proposes Internet-class DNS records which permit accurate discovery through the use of existing recursive DNS infrastructure. We validate these two methods against three widely-used anycast DNS services, using a very large number (60k and 300k) of vantage points, and show that they can provide excellent precision and recall. 
Finally, we use these methods to evaluate anycast deployments in top-level domains (TLDs), and find one case where a third party operates a server masquerading as a root DNS anycast node as well as a noticeable proportion of unusual DNS proxies. We also show that, across all TLDs, up to 72% use anycast.", "", "" ] }
1505.00526
2951577062
In this paper, we consider the problem of column subset selection. We present a novel analysis of the spectral norm reconstruction for a simple randomized algorithm and establish a new bound that depends explicitly on the sampling probabilities. The sampling dependent error bound (i) allows us to better understand the tradeoff in the reconstruction error due to sampling probabilities, (ii) exhibits more insights than existing error bounds that exploit specific probability distributions, and (iii) implies better sampling distributions. In particular, we show that a sampling distribution with probabilities proportional to the square root of the statistical leverage scores is always better than uniform sampling and is better than leverage-based sampling when the statistical leverage scores are very nonuniform. And by solving a constrained optimization problem related to the error bound with an efficient bisection search we are able to achieve better performance than using either the leverage-based distribution or that proportional to the square root of the statistical leverage scores. Numerical simulations demonstrate the benefits of the new sampling distributions for low-rank matrix approximation and least squares approximation compared to state-of-the-art algorithms.
There is much more work studying the Frobenius norm reconstruction of CSS @cite_2 @cite_22 @cite_11 @cite_3 @cite_20 . For more references, we refer the reader to the survey @cite_6 . It remains an interesting question to establish sampling dependent error bounds for other randomized matrix algorithms.
{ "cite_N": [ "@cite_22", "@cite_3", "@cite_6", "@cite_2", "@cite_20", "@cite_11" ], "mid": [ "", "2120872934", "2107411554", "2899347127", "2950958145", "2547648546" ], "abstract": [ "", "This paper provides the best bounds to date on the number of randomly sampled entries required to reconstruct an unknown low-rank matrix. These results improve on prior work by Candes and Recht (2009), Candes and Tao (2009), and (2009). The reconstruction is accomplished by minimizing the nuclear norm, or sum of the singular values, of the hidden matrix subject to agreement with the provided entries. If the underlying matrix satisfies a certain incoherence condition, then the number of entries required is equal to a quadratic logarithmic factor times the number of parameters in the singular value decomposition. The proof of this assertion is short, self contained, and uses very elementary analysis. The novel techniques herein are based on recent work in quantum information theory.", "This paper presents new probability inequalities for sums of independent, random, self-adjoint matrices. These results place simple and easily verifiable hypotheses on the summands, and they deliver strong conclusions about the large-deviation behavior of the maximum eigenvalue of the sum. Tail bounds for the norm of a sum of random rectangular matrices follow as an immediate corollary. The proof techniques also yield some information about matrix-valued martingales. In other words, this paper provides noncommutative generalizations of the classical bounds associated with the names Azuma, Bennett, Bernstein, Chernoff, Hoeffding, and McDiarmid. 
The matrix inequalities promise the same diversity of application, ease of use, and strength of conclusion that have made the scalar inequalities so valuable.", "Given an m ×n matrix A and an integer k less than the rank of A, the “best” rank k approximation to A that minimizes the error with respect to the Frobenius norm is Ak, which is obtained by projecting A on the top k left singular vectors of A. While Ak is routinely used in data analysis, it is difficult to interpret and understand it in terms of the original data, namely the columns and rows of A. For example, these columns and rows often come from some application domain, whereas the singular vectors are linear combinations of (up to all) the columns or rows of A. We address the problem of obtaining low-rank approximations that are directly interpretable in terms of the original columns or rows of A. Our main results are two polynomial time randomized algorithms that take as input a matrix A and return as output a matrix C, consisting of a “small” (i.e., a low-degree polynomial in k, 1 e, and log(1 δ)) number of actual columns of A such that ||A–CC+A||F ≤(1+e) ||A–Ak||F with probability at least 1–δ. Our algorithms are simple, and they take time of the order of the time needed to compute the top k right singular vectors of A. In addition, they sample the columns of A via the method of “subspace sampling,” so-named since the sampling probabilities depend on the lengths of the rows of the top singular vectors and since they ensure that we capture entirely a certain subspace of interest.", "We consider the problem of selecting the best subset of exactly @math columns from an @math matrix @math . We present and analyze a novel two-stage algorithm that runs in @math time and returns as output an @math matrix @math consisting of exactly @math columns of @math . 
In the first (randomized) stage, the algorithm randomly selects @math columns according to a judiciously-chosen probability distribution that depends on information in the top- @math right singular subspace of @math . In the second (deterministic) stage, the algorithm applies a deterministic column-selection procedure to select and return exactly @math columns from the set of columns selected in the first stage. Let @math be the @math matrix containing those @math columns, let @math denote the projection matrix onto the span of those columns, and let @math denote the best rank- @math approximation to the matrix @math . Then, we prove that, with probability at least 0.8, @math This Frobenius norm bound is only a factor of @math worse than the best previously existing existential result and is roughly @math better than the best previous algorithmic result for the Frobenius norm version of this Column Subset Selection Problem (CSSP). We also prove that, with probability at least 0.8, @math This spectral norm bound is not directly comparable to the best previously existing bounds for the spectral norm version of this CSSP. Our bound depends on @math , whereas previous results depend on @math ; if these two quantities are comparable, then our bound is asymptotically worse by a @math factor.", "We consider low-rank reconstruction of a matrix using a subset of its columns and present asymptotically optimal algorithms for both spectral norm and Frobenius norm reconstruction. The main tools we introduce to obtain our results are (i) the use of fast approximate SVD-like decompositions for column-based matrix reconstruction, and (ii) two deterministic algorithms for selecting rows from matrices with orthonormal columns, building upon the sparse representation theorem for decompositions of the identity that appeared in [J. D. Batson, D. A. Spielman, and N. 
Srivastava, Twice-Ramanujan sparsifiers, in Proceedings of the 41st Annual ACM Symposium on Theory of Computing (STOC), 2009, pp. 255--262]." ] }
1505.00687
2953259386
Is strong supervision necessary for learning a good visual representation? Do we really need millions of semantically-labeled images to train a Convolutional Neural Network (CNN)? In this paper, we present a simple yet surprisingly powerful approach for unsupervised learning of CNN. Specifically, we use hundreds of thousands of unlabeled videos from the web to learn visual representations. Our key idea is that visual tracking provides the supervision. That is, two patches connected by a track should have similar visual representation in deep feature space since they probably belong to the same object or object part. We design a Siamese-triplet network with a ranking loss function to train this CNN representation. Without using a single image from ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train an ensemble of unsupervised networks that achieves 52% mAP (no bounding box regression). This performance comes tantalizingly close to its ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4%. We also show that our unsupervised network can perform competitively in other tasks such as surface-normal estimation.
However, it is not clear if static images are the right way to learn visual representations. Therefore, researchers have started focusing on learning feature representations using videos @cite_15 @cite_9 @cite_52 @cite_39 @cite_21 @cite_3 @cite_29 @cite_44 @cite_14 . Early work such as @cite_1 focused on the inclusion of constraints via video in an autoencoder framework. The most common constraint is enforcing learned representations to be temporally smooth. Similar to this, @cite_3 proposed to learn auto-encoders based on the slowness prior. Other approaches such as @cite_29 trained convolutional gated RBMs to learn latent representations from pairs of successive images. This was extended in recent work by @cite_16 , where they proposed to learn an LSTM model in an unsupervised manner to predict future frames.
{ "cite_N": [ "@cite_14", "@cite_9", "@cite_21", "@cite_29", "@cite_52", "@cite_3", "@cite_39", "@cite_44", "@cite_1", "@cite_15", "@cite_16" ], "mid": [ "2122543342", "", "", "1586730761", "", "2950760473", "", "2145038566", "", "2096388912", "2952453038" ], "abstract": [ "We present an algorithm that learns invariant features from real data in an entirely unsupervised fashion. The principal benefit of our method is that it can be applied without human intervention to a particular application or data set, learning the specific invariances necessary for excellent feature performance on that data. Our algorithm relies on the ability to track image patches over time using optical flow. With the wide availability of high frame rate video (eg: on the web, from a robot), good tracking is straightforward to achieve. The algorithm then optimizes feature parameters such that patches corresponding to the same physical location have feature descriptors that are as similar as possible while simultaneously maximizing the distinctness of descriptors for different locations. Thus, our method captures data or application specific invariances yet does not require any manual supervision. We apply our algorithm to learn domain-optimized versions of SIFT and HOG. SIFT and HOG features are excellent and widely used. However, they are general and by definition not tailored to a specific domain. Our domain-optimized versions offer a substantial performance increase for classification and correspondence tasks we consider. Furthermore, we show that the features our method learns are near the optimal that would be achieved by directly optimizing the test set performance of a classifier. Finally, we demonstrate that the learning often allows fewer features to be used for some tasks, which has the potential to dramatically improve computational concerns for very large data sets.", "", "", "We address the problem of learning good features for understanding video data. 
We introduce a model that learns latent representations of image sequences from pairs of successive images. The convolutional architecture of our model allows it to scale to realistic image sizes whilst using a compact parametrization. In experiments on the NORB dataset, we show our model extracts latent \"flow fields\" which correspond to the transformation between the pair of input frames. We also use our model to extract low-level motion features in a multi-stage architecture for action recognition, demonstrating competitive performance on both the KTH and Hollywood2 datasets.", "", "Current state-of-the-art classification and detection algorithms rely on supervised training. In this work we study unsupervised feature learning in the context of temporally coherent video data. We focus on feature learning from unlabeled video data, using the assumption that adjacent video frames contain semantically similar information. This assumption is exploited to train a convolutional pooling auto-encoder regularized by slowness and sparsity. We establish a connection between slow feature learning to metric learning and show that the trained encoder can be used to define a more temporally and semantically coherent metric.", "", "This work proposes a learning method for deep architectures that takes advantage of sequential data, in particular from the temporal coherence that naturally exists in unlabeled video recordings. That is, two successive frames are likely to contain the same object or objects. This coherence is used as a supervisory signal over the unlabeled data, and is used to improve the performance on a supervised task of interest. We demonstrate the effectiveness of this method on some pose invariant object and face recognition tasks.", "", "The visual system can reliably identify objects even when the retinal image is transformed considerably by commonly occurring changes in the environment. 
A local learning rule is proposed, which allows a network to learn to generalize across such transformations. During the learning phase, the network is exposed to temporal sequences of patterns undergoing the transformation. An application of the algorithm is presented in which the network learns invariance to shift in retinal position. Such a principle may be involved in the development of the characteristic shift invariance property of complex cells in the primary visual cortex, and also in the development of more complicated invariance properties of neurons in higher visual areas.", "We use multilayer Long Short Term Memory (LSTM) networks to learn representations of video sequences. Our model uses an encoder LSTM to map an input sequence into a fixed length representation. This representation is decoded using single or multiple decoder LSTMs to perform different tasks, such as reconstructing the input sequence, or predicting the future sequence. We experiment with two kinds of input sequences - patches of image pixels and high-level representations (\"percepts\") of video frames extracted using a pretrained convolutional net. We explore different design choices such as whether the decoder LSTMs should condition on the generated output. We analyze the outputs of the model qualitatively to see how well the model can extrapolate the learned video representation into the future and into the past. We try to visualize and interpret the learned features. We stress test the model by running it on longer time scales and on out-of-domain data. We further evaluate the representations by finetuning them for a supervised learning problem - human action recognition on the UCF-101 and HMDB-51 datasets. We show that the representations help improve classification accuracy, especially when there are only a few training examples. Even models pretrained on unrelated datasets (300 hours of YouTube videos) can help action recognition performance." ] }
1505.00687
2953259386
Is strong supervision necessary for learning a good visual representation? Do we really need millions of semantically-labeled images to train a Convolutional Neural Network (CNN)? In this paper, we present a simple yet surprisingly powerful approach for unsupervised learning of CNN. Specifically, we use hundreds of thousands of unlabeled videos from the web to learn visual representations. Our key idea is that visual tracking provides the supervision. That is, two patches connected by a track should have similar visual representation in deep feature space since they probably belong to the same object or object part. We design a Siamese-triplet network with a ranking loss function to train this CNN representation. Without using a single image from ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train an ensemble of unsupervised networks that achieves 52% mAP (no bounding box regression). This performance comes tantalizingly close to its ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4%. We also show that our unsupervised network can perform competitively in other tasks such as surface-normal estimation.
Finally, our work is also related to metric learning via deep networks @cite_43 @cite_48 @cite_18 @cite_17 @cite_12 @cite_35 @cite_20 . For example, @cite_18 proposed to learn convolutional networks in a siamese architecture for face verification. @cite_43 introduced a deep triplet ranking network to learn fine-grained image similarity. @cite_5 optimized the max-margin loss on triplet units to learn a deep hashing function for image retrieval. However, all these methods required labeled data. Our work is also related to @cite_38 , which used a CNN pre-trained on ImageNet classification and detection datasets as initialization, and performed semi-supervised learning in videos to tackle object detection in the target domain. However, in our work, we propose an unsupervised approach instead of a semi-supervised algorithm.
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_38", "@cite_48", "@cite_43", "@cite_5", "@cite_20", "@cite_12", "@cite_17" ], "mid": [ "", "1839408879", "1659581753", "", "1975517671", "1951304353", "1909903157", "", "" ], "abstract": [ "", "Deep learning has proven itself as a successful set of models for learning useful semantic representations of data. These, however, are mostly implicitly learned as part of a classification task. In this paper we propose the triplet network model, which aims to learn useful representations by distance comparisons. A similar model was defined by (2014), tailor made for learning a ranking for image information retrieval. Here we demonstrate using various datasets that our model learns a better representation than that of its immediate competitor, the Siamese network. We also discuss future possible usage as a framework for unsupervised learning.", "Intuitive observations show that a baby may inherently possess the capability of recognizing a new visual concept (e.g., chair, dog) by learning from only very few positive instances taught by parent(s) or others, and this recognition capability can be gradually further improved by exploring and or interacting with the real instances in the physical world. Inspired by these observations, we propose a computational model for slightly-supervised object detection, based on prior knowledge modelling, exemplar learning and learning with video contexts. The prior knowledge is modeled with a pre-trained Convolutional Neural Network (CNN). When very few instances of a new concept are given, an initial concept detector is built by exemplar learning over the deep features from the pre-trained CNN. Simulating the baby's interaction with physical world, the well-designed tracking solution is then used to discover more diverse instances from the massive online unlabeled videos. 
Once a positive instance is detected/identified with high score in each video, more variable instances possibly from different view-angles and/or different distances are tracked and accumulated. Then the concept detector can be fine-tuned based on these new instances. This process can be repeated again and again till we obtain a very mature concept detector. Extensive experiments on Pascal VOC-07/10/12 object detection datasets well demonstrate the effectiveness of our framework. It can beat the state-of-the-art full-training based performances by learning from very few samples for each object category, along with about 20,000 unlabeled videos.", "", "Learning fine-grained image similarity is a challenging task. It needs to capture between-class and within-class image differences. This paper proposes a deep ranking model that employs deep learning techniques to learn similarity metric directly from images. It has higher learning capability than models based on hand-crafted features. A novel multiscale network structure has been developed to describe the images effectively. An efficient triplet sampling algorithm is also proposed to learn the model with distributed asynchronized stochastic gradient. Extensive experiments show that the proposed algorithm outperforms models based on hand-crafted visual features and deep classification models.", "Extracting informative image features and learning effective approximate hashing functions are two crucial steps in image retrieval. Conventional methods often study these two steps separately, e.g., learning hash functions from a predefined hand-crafted feature space. Meanwhile, the bit lengths of output hashing codes are preset in the most previous methods, neglecting the significance level of different bits and restricting their practical flexibility. To address these issues, we propose a supervised learning framework to generate compact and bit-scalable hashing codes directly from raw images. 
We pose hashing learning as a problem of regularized similarity learning. In particular, we organize the training images into a batch of triplet samples, each sample containing two images with the same label and one with a different label. With these triplet samples, we maximize the margin between the matched pairs and the mismatched pairs in the Hamming space. In addition, a regularization term is introduced to enforce the adjacency consistency, i.e., images of similar appearances should have similar codes. The deep convolutional neural network is utilized to train the model in an end-to-end fashion, where discriminative image features and hash functions are simultaneously optimized. Furthermore, each bit of our hashing codes is unequally weighted, so that we can manipulate the code lengths by truncating the insignificant bits. Our framework outperforms state-of-the-arts on public benchmarks of similar image search and also achieves promising results in the application of person re-identification in surveillance. It is also shown that the generated bit-scalable hashing codes well preserve the discriminative powers with shorter code lengths.", "Detecting poorly textured objects and estimating their 3D pose reliably is still a very challenging problem. We introduce a simple but powerful approach to computing descriptors for object views that efficiently capture both the object identity and 3D pose. By contrast with previous manifold-based approaches, we can rely on the Euclidean distance to evaluate the similarity between descriptors, and therefore use scalable Nearest Neighbor search methods to efficiently handle a large number of objects under a large range of poses. To achieve this, we train a Convolutional Neural Network to compute these descriptors by enforcing simple similarity and dissimilarity constraints between the descriptors. 
We show that our constraints nicely untangle the images from different objects and different views into clusters that are not only well-separated but also structured as the corresponding sets of poses: The Euclidean distance between descriptors is large when the descriptors are from different objects, and directly related to the distance between the poses when the descriptors are from the same object. These important properties allow us to outperform state-of-the-art object views representations on challenging RGB and RGB-D data.", "", "" ] }
1505.00810
2952266820
Machine-to-machine (M2M) communication's severe power limitations challenge the interconnectivity, access management, and reliable communication of data. In densely deployed M2M networks, controlling and aggregating the generated data is critical. We propose an energy efficient data aggregation scheme for a hierarchical M2M network. We develop a coverage probability-based optimal data aggregation scheme for M2M devices to minimize the average total energy expenditure per unit area per unit time or simply the energy density of an M2M communication network. Our analysis exposes the key tradeoffs between the energy density of the M2M network and the coverage characteristics for successive and parallel transmission schemes that can be either half-duplex or full-duplex. Comparing the rate and energy performances of the transmission models, we observe that successive mode and half-duplex parallel mode have better coverage characteristics compared to full-duplex parallel scheme. Simulation results show that the uplink coverage characteristics dominate the trend of the energy consumption for both successive and parallel schemes.
Unlike most human generated or consumed traffic, M2M as defined in this paper is characterized by a very large number of small transactions, often from battery powered devices. The power and energy optimal uplink design for various access strategies is studied in @cite_22 , while an optimal uncoordinated strategy to maximize the average throughput for a time-slotted RACH is developed in @cite_6 . For the small payload sizes relevant for M2M, a strategy that transmits both identity and data over the RACH is shown to support significantly more devices compared to the conventional approach, where transmissions are scheduled after an initial random-access stage.
{ "cite_N": [ "@cite_22", "@cite_6" ], "mid": [ "2132247362", "2962894043" ], "abstract": [ "To enable full mechanical automation where each smart device can play multiple roles among sensor, decision maker, and action executor, it is essential to construct scrupulous connections among all devices. Machine-to-machine communications thus emerge to achieve ubiquitous communications among all devices. With the merit of providing higher-layer connections, scenarios of 3GPP have been regarded as the promising solution facilitating M2M communications, which is being standardized as an emphatic application to be supported by LTE-Advanced. However, distinct features in M2M communications create diverse challenges from those in human-to-human communications. To deeply understand M2M communications in 3GPP, in this article, we provide an overview of the network architecture and features of M2M communications in 3GPP, and identify potential issues on the air interface, including physical layer transmissions, the random access procedure, and radio resources allocation supporting the most critical QoS provisioning. An effective solution is further proposed to provide QoS guarantees to facilitate M2M applications with inviolable hard timing constraints.", "For wireless systems in which randomly arriving devices attempt to transmit a fixed payload to a central receiver, we develop a framework to characterize the system throughput as a function of arrival rate and per-device data rate. The framework considers both coordinated transmission (where devices are scheduled) and uncoordinated transmission (where devices communicate on a random access channel and a provision is made for retransmissions). Our main contribution is a novel characterization of the optimal throughput for the case of uncoordinated transmission and a strategy for achieving this throughput that relies on overlapping transmissions and joint decoding. 
Simulations for a noise-limited cellular network show that the optimal strategy provides a factor of four improvement in throughput compared with slotted ALOHA. We apply our framework to evaluate more general system-level designs that account for overhead signaling. We demonstrate that, for small payload sizes relevant for machine-to-machine (M2M) communications (200 bits or less), a one-stage strategy, where identity and data are transmitted optimally over the random access channel, can support at least twice the number of devices compared with a conventional strategy, where identity is established over an initial random-access stage and data transmission is scheduled." ] }
1505.00810
2952266820
Machine-to-machine (M2M) communication's severe power limitations challenge the interconnectivity, access management, and reliable communication of data. In densely deployed M2M networks, controlling and aggregating the generated data is critical. We propose an energy efficient data aggregation scheme for a hierarchical M2M network. We develop a coverage probability-based optimal data aggregation scheme for M2M devices to minimize the average total energy expenditure per unit area per unit time or simply the energy density of an M2M communication network. Our analysis exposes the key tradeoffs between the energy density of the M2M network and the coverage characteristics for successive and parallel transmission schemes that can be either half-duplex or full-duplex. Comparing the rate and energy performances of the transmission models, we observe that successive mode and half-duplex parallel mode have better coverage characteristics compared to full-duplex parallel scheme. Simulation results show that the uplink coverage characteristics dominate the trend of the energy consumption for both successive and parallel schemes.
Different control mechanisms to avoid congestion caused by random channel access of M2M devices are reviewed @cite_21 . An adaptive slotted ALOHA scheme for random access control of M2M systems with bursty traffic that achieves near-optimal access delay performance is proposed @cite_13 . A comprehensive performance evaluation of the energy efficiency of the random access mechanism of LTE and its role for M2M is provided @cite_24 . An energy-efficient uplink design for LTE networks in M2M and human-to-human coexistence scenarios that satisfies quality-of-service (QoS) requirements is developed @cite_10 . Similar to @cite_24 @cite_10 , we study an energy-efficient design for M2M uplink where devices perform multi-hop transmissions. We also incorporate the coverage characteristics for different transmission modes using stochastic geometry.
{ "cite_N": [ "@cite_24", "@cite_10", "@cite_21", "@cite_13" ], "mid": [ "2141804605", "2093917343", "2110767187", "1966552195" ], "abstract": [ "The 3GPP has raised the need to revisit the design of next generations of cellular networks in order to make them capable and efficient to provide M2M services. One of the key challenges that has been identified is the need to enhance the operation of the random access channel of LTE and LTE-A. The current mechanism to request access to the system is known to suffer from congestion and overloading in the presence of a huge number of devices. For this reason, different research groups around the globe are working towards the design of more efficient ways of managing the access to these networks in such circumstances. This paper aims to provide a survey of the alternatives that have been proposed over the last years to improve the operation of the random access channel of LTE and LTE-A. A comprehensive discussion of the different alternatives is provided, identifying strengths and weaknesses of each one of them, while drawing future trends to steer the efforts over the same shooting line. In addition, while existing literature has been focused on the performance in terms of delay, the energy efficiency of the access mechanism of LTE will play a key role in the deployment of M2M networks. For this reason, a comprehensive performance evaluation of the energy efficiency of the random access mechanism of LTE is provided in this paper. The aim of this computer-based simulation study is to set a baseline performance upon which new and more energy-efficient mechanisms can be designed in the near future.", "Recently, energy efficiency in wireless networks has become an important objective. 
Aside from the growing proliferation of smartphones and other high-end devices in conventional human-to-human (H2H) communication, the introduction of machine-to-machine (M2M) communication or machine-type communication into cellular networks is another contributing factor. In this paper, we investigate quality-of-service (QoS)-driven energy-efficient design for the uplink of long term evolution (LTE) networks in M2M H2H co-existence scenarios. We formulate the resource allocation problem as a maximization of effective capacity-based bits-per-joule capacity under statistical QoS provisioning. The specific constraints of single carrier frequency division multiple access (uplink air interface in LTE networks) pertaining to power and resource block allocation not only complicate the resource allocation problem, but also render the standard Lagrangian duality techniques inapplicable. We overcome the analytical and computational intractability by first transforming the original problem into a mixed integer programming (MIP) problem and then formulating its dual problem using the canonical duality theory. The proposed energy-efficient design is compared with the spectral efficient design along with round robin (RR) and best channel quality indicator (BCQI) algorithms. Numerical results, which are obtained using the invasive weed optimization (IWO) algorithm, show that the proposed energy-efficient uplink design not only outperforms other algorithms in terms of energy efficiency while satisfying the QoS requirements, but also performs closer to the optimal design.", "Machine-to-machine communication, a promising technology for the smart city concept, enables ubiquitous connectivity between one or more autonomous devices without or with minimal human interaction. 
M2M communication is the key technology to support data transfer among sensors and actuators to facilitate various smart city applications (e.g., smart metering, surveillance and security, infrastructure management, city automation, and eHealth). To support massive numbers of machine type communication (MTC) devices, one of the challenging issues is to provide an efficient way for multiple access in the network and to minimize network overload. In this article, we review the M2M communication techniques in Long Term Evolution-Advanced cellular networks and outline the major research issues. Also, we review the different random access overload control mechanisms to avoid congestion caused by random channel access of MTC devices. To this end, we propose a reinforcement learning-based eNB selection algorithm that allows the MTC devices to choose the eNBs (or base stations) to transmit packets in a self-organizing fashion.", "Supporting massive device transmission is challenging in machine-to-machine (M2M) communications. Particularly, in event-driven M2M communications, a large number of devices become activated within a short period of time, which in turn causes high radio congestions and severe access delay. To address this issue, we propose a Fast Adaptive S-ALOHA (FASA) scheme for random access control of M2M communication systems with bursty traffic. Instead of the observation in a single slot, the statistics of consecutive idle and collision slots are used in FASA to accelerate the tracking process of network status that is critical for optimizing S-ALOHA systems. With a design based on drift analysis, the estimate of the number of the active devices under FASA converges fast to the true value. Furthermore, by examining the T-slot drifts, we prove that the proposed FASA scheme is stable as long as the average arrival rate is smaller than e^-1, in the sense that the Markov chain derived from the scheme is geometrically ergodic.
Simulation results demonstrate that under highly bursty traffic, the proposed FASA scheme outperforms traditional additive schemes such as PB-ALOHA and achieves near-optimal performance in reducing access delays. Moreover, compared to multiplicative schemes, FASA shows its robustness under heavy traffic load in addition to better delay performance." ] }
1505.00810
2952266820
Machine-to-machine (M2M) communication's severe power limitations challenge the interconnectivity, access management, and reliable communication of data. In densely deployed M2M networks, controlling and aggregating the generated data is critical. We propose an energy efficient data aggregation scheme for a hierarchical M2M network. We develop a coverage probability-based optimal data aggregation scheme for M2M devices to minimize the average total energy expenditure per unit area per unit time or simply the energy density of an M2M communication network. Our analysis exposes the key tradeoffs between the energy density of the M2M network and the coverage characteristics for successive and parallel transmission schemes that can be either half-duplex or full-duplex. Comparing the rate and energy performances of the transmission models, we observe that successive mode and half-duplex parallel mode have better coverage characteristics compared to full-duplex parallel scheme. Simulation results show that the uplink coverage characteristics dominate the trend of the energy consumption for both successive and parallel schemes.
Because low-power M2M devices may not be able to communicate with the BS directly, hierarchical architectures may be necessary. Hence, critical design issues also include optimizing hierarchical organization of the devices and energy efficient data aggregation. Although these issues have not been studied in the context of M2M, there is prior work on distributed networks in the context of wired communications. In @cite_25 , energy consumption is optimized by studying a distributed protocol for stationary ad hoc networks. In @cite_20 , a distribution problem which consists of subscribers, distribution and concentration points for a wired network model is studied to minimize the cost by optimizing the density of distribution points. In @cite_11 , a hierarchical network including sinks, aggregators and sensors is proposed, which yields significant energy savings.
{ "cite_N": [ "@cite_25", "@cite_20", "@cite_11" ], "mid": [ "2170469173", "2293042113", "2112322703" ], "abstract": [ "We describe a distributed position-based network protocol optimized for minimum energy consumption in mobile wireless networks that support peer-to-peer communications. Given any number of randomly deployed nodes over an area, we illustrate that a simple local optimization scheme executed at each node guarantees strong connectivity of the entire network and attains the global minimum energy solution for stationary networks. Due to its localized nature, this protocol proves to be self-reconfiguring and stays close to the minimum energy solution when applied to mobile networks. Simulation results are used to verify the performance of the protocol.", "", "In this paper, we study how to reduce energy consumption in large-scale sensor networks, which systematically sample a spatio-temporal field. We begin by formulating a distributed compression problem subject to aggregation (energy) costs to a single sink. We show that the optimal solution is greedy and based on ordering sensors according to their aggregation costs-typically related to proximity-and, perhaps surprisingly, it is independent of the distribution of data sources. Next, we consider a simplified hierarchical model for a sensor network including multiple sinks, compressors/aggregation nodes, and sensors. Using a reasonable metric for energy cost, we show that the optimal organization of devices is associated with a Johnson-Mehl tessellation induced by their locations. Drawing on techniques from stochastic geometry, we analyze the energy savings that optimal hierarchies provide relative to previously proposed organizations based on proximity, i.e., associated Voronoi tessellations. Our analysis and simulations show that an optimal organization of aggregation/compression can yield 8%-28% energy savings depending on the compression ratio." ] }
1505.00810
2952266820
Machine-to-machine (M2M) communication's severe power limitations challenge the interconnectivity, access management, and reliable communication of data. In densely deployed M2M networks, controlling and aggregating the generated data is critical. We propose an energy efficient data aggregation scheme for a hierarchical M2M network. We develop a coverage probability-based optimal data aggregation scheme for M2M devices to minimize the average total energy expenditure per unit area per unit time or simply the energy density of an M2M communication network. Our analysis exposes the key tradeoffs between the energy density of the M2M network and the coverage characteristics for successive and parallel transmission schemes that can be either half-duplex or full-duplex. Comparing the rate and energy performances of the transmission models, we observe that successive mode and half-duplex parallel mode have better coverage characteristics compared to full-duplex parallel scheme. Simulation results show that the uplink coverage characteristics dominate the trend of the energy consumption for both successive and parallel schemes.
Hierarchical networks can provide efficient data aggregation in M2M or other power-limited systems to enable successful end-to-end transmission. Despite previous research efforts, e.g., @cite_15 , @cite_1 and @cite_18 , to the best of our knowledge, there has been no study focusing on data aggregation schemes for M2M networks together with the rate coverage characteristics, especially from an energy optimal design perspective. Providing such a study is the main contribution of this paper.
{ "cite_N": [ "@cite_15", "@cite_1", "@cite_18" ], "mid": [ "", "2167465093", "2069417310" ], "abstract": [ "", "A wireless network consisting of a large number of small sensors with low-power transceivers can be an effective tool for gathering data in a variety of environments. The data collected by each sensor is communicated through the network to a single processing center that uses all reported data to determine characteristics of the environment or detect an event. The communication or message passing process must be designed to conserve the limited energy resources of the sensors. Clustering sensors into groups, so that sensors communicate information only to clusterheads and then the clusterheads communicate the aggregated information to the processing center, may save energy. In this paper, we propose a distributed, randomized clustering algorithm to organize the sensors in a wireless sensor network into clusters. We then extend this algorithm to generate a hierarchy of clusterheads and observe that the energy savings increase with the number of levels in the hierarchy. Results in stochastic geometry are used to derive solutions for the values of parameters of our algorithm that minimize the total energy spent in the network when all sensors report data through the clusterheads to the processing center.", "As wireless sensor networks utilize battery-operated nodes, energy efficiency is of paramount importance at all levels of system design. In order to save energy in the transfer of data from the sensor nodes to one or more sinks, the data may be routed through other nodes rather than transmitting it directly to the sink(s). In this article, we investigate the problem of energy-efficient transmission of data over a noisy channel, focusing on the setting of physical-layer parameters. 
We derive a metric called the energy per successfully received bit, which specifies the expected energy required to transmit a bit successfully over a particular distance given a channel noise model. By minimizing this metric, we can find, for different modulation schemes, the energy-optimal relay distance and the optimal transmit energy as a function of channel noise level and path loss exponent. These results enable network designers to select the hop distance, transmit power, and or modulation scheme that maximize network lifetime." ] }
1505.00315
2950091256
In this paper, we propose to learn temporal embeddings of video frames for complex video analysis. Large quantities of unlabeled video data can be easily obtained from the Internet. These videos possess the implicit weak label that they are sequences of temporally and semantically coherent images. We leverage this information to learn temporal embeddings for video frames by associating frames with the temporal context that they appear in. To do this, we propose a scheme for incorporating temporal context based on past and future frames in videos, and compare this to other contextual representations. In addition, we show how data augmentation using multi-resolution samples and hard negatives helps to significantly improve the quality of the learned embeddings. We evaluate various design decisions for learning temporal embeddings, and show that our embeddings can improve performance for multiple video tasks such as retrieval, classification, and temporal order recovery in unconstrained Internet video.
Standard tasks in video such as classification and retrieval require a well-engineered feature representation, with many proposed in the literature @cite_21 @cite_0 @cite_15 @cite_2 @cite_27 @cite_7 @cite_42 @cite_32 @cite_19 @cite_29 @cite_22 . Deep network features learned from spatial data @cite_38 @cite_39 @cite_17 and temporal flow @cite_17 have also shown comparable results. However, recent works in complex event recognition @cite_5 @cite_30 have shown that spatial Convolutional Neural Network (CNN) features learned from ImageNet @cite_11 without fine-tuning on video, accompanied by suitable pooling and encoding strategies, achieve state-of-the-art performance. In contrast to these methods, which either propose handcrafted features or learn feature representations with a fully supervised objective from images or videos, we try to learn an embedding in an unsupervised fashion. Moreover, our learned features can be extended to other tasks beyond classification and retrieval.
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_11", "@cite_22", "@cite_7", "@cite_29", "@cite_21", "@cite_42", "@cite_32", "@cite_39", "@cite_0", "@cite_19", "@cite_27", "@cite_2", "@cite_5", "@cite_15", "@cite_17" ], "mid": [ "1670387562", "1983364832", "2108598243", "2126574503", "2064458142", "1993229407", "", "", "", "2308045930", "1996904744", "2063153269", "", "2142194269", "2950076437", "1211924006", "1686810756" ], "abstract": [ "We conduct an in-depth exploration of different strategies for doing event detection in videos using convolutional neural networks (CNNs) trained for image classification. We study different ways of performing spatial and temporal pooling, feature normalization, choice of CNN layers as well as choice of classifiers. Making judicious choices along these dimensions led to a very significant increase in performance over more naive approaches that have been used till now. We evaluate our approach on the challenging TRECVID MED'14 dataset with two popular CNN architectures pretrained on ImageNet. On this MED'14 dataset, our methods, based entirely on image-trained CNN features, can outperform several state-of-the-art non-CNN models. Our proposed late fusion of CNN- and motion-based features can further increase the mean average precision (mAP) on MED'14 from 34.95% to 38.74%. The fusion approach achieves the state-of-the-art classification performance on the challenging UCF-101 dataset.", "We consider the automated recognition of human actions in surveillance videos. Most current methods build classifiers based on complex handcrafted features computed from the raw inputs. Convolutional neural networks (CNNs) are a type of deep model that can act directly on the raw inputs. However, such models are currently limited to handling 2D inputs. In this paper, we develop a novel 3D CNN model for action recognition. 
This model extracts features from both the spatial and the temporal dimensions by performing 3D convolutions, thereby capturing the motion information encoded in multiple adjacent frames. The developed model generates multiple channels of information from the input frames, and the final feature representation combines information from all channels. To further boost the performance, we propose regularizing the outputs with high-level features and combining the predictions of a variety of different models. We apply the developed models to recognize human actions in the real-world environment of airport surveillance videos, and they achieve superior performance in comparison to baseline methods.", "The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. 
We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.", "Feature trajectories have shown to be efficient for representing videos. Typically, they are extracted using the KLT tracker or matching SIFT descriptors between frames. However, the quality as well as quantity of these trajectories is often not sufficient. Inspired by the recent success of dense sampling in image classification, we propose an approach to describe videos by dense trajectories. We sample dense points from each frame and track them based on displacement information from a dense optical flow field. Given a state-of-the-art optical flow algorithm, our trajectories are robust to fast irregular motions as well as shot boundaries. Additionally, dense trajectories cover the motion information in videos well. We, also, investigate how to design descriptors to encode the trajectory information. We introduce a novel descriptor based on motion boundary histograms, which is robust to camera motion. This descriptor consistently outperforms other state-of-the-art descriptors, in particular in uncontrolled realistic videos. We evaluate our video description in the context of action classification with a bag-of-features approach. Experimental results show a significant improvement over the state of the art on four datasets of varying difficulty, i.e. KTH, YouTube, Hollywood2 and UCF sports.", "We present a system for multimedia event detection. The developed system characterizes complex multimedia events based on a large array of multimodal features, and classifies unseen videos by effectively fusing diverse responses. We present three major technical innovations. 
First, we explore novel visual and audio features across multiple semantic granularities, including building, often in an unsupervised manner, mid-level and high-level features upon low-level features to enable semantic understanding. Second, we show a novel Latent SVM model which learns and localizes discriminative high-level concepts in cluttered video sequences. In addition to improving detection accuracy beyond existing approaches, it enables a unique summary for every retrieval by its use of high-level concepts and temporal evidence localization. The resulting summary provides some transparency into why the system classified the video as it did. Finally, we present novel fusion learning algorithms and our methodology to improve fusion learning under limited training data condition. Thorough evaluation on a large TRECVID MED 2011 dataset showcases the benefits of the presented system.", "Local space-time features have recently become a popular video representation for action recognition. Several methods for feature localization and description have been proposed in the literature and promising recognition results were demonstrated for a number of action classes. The comparison of existing methods, however, is often limited given the different experimental settings used. The purpose of this paper is to evaluate and compare previously proposed space-time features in a common experimental setup. In particular, we consider four different feature detectors and six local feature descriptors and use a standard bag-of-features SVM approach for action recognition. We investigate the performance of these methods on a total of 25 action classes distributed over three datasets with varying difficulty. Among interesting conclusions, we demonstrate that regular sampling of space-time features consistently outperforms all tested space-time interest point detectors for human actions in realistic settings. 
We also demonstrate a consistent ranking for the majority of methods over different datasets and discuss their advantages and limitations.", "", "", "", "", "Several recent works on action recognition have attested the importance of explicitly integrating motion characteristics in the video description. This paper establishes that adequately decomposing visual motion into dominant and residual motions, both in the extraction of the space-time trajectories and for the computation of descriptors, significantly improves action recognition algorithms. Then, we design a new motion descriptor, the DCS descriptor, based on differential motion scalar quantities, divergence, curl and shear features. It captures additional information on the local motion patterns enhancing results. Finally, applying the recent VLAD coding technique proposed in image retrieval provides a substantial improvement for action recognition. Our three contributions are complementary and lead to outperform all reported results by a significant margin on three challenging datasets, namely Hollywood 2, HMDB51 and Olympic Sports.", "Activity recognition in video is dominated by low- and mid-level features, and while demonstrably capable, by nature, these features carry little semantic meaning. Inspired by the recent object bank approach to image representation, we present Action Bank, a new high-level representation of video. Action bank is comprised of many individual action detectors sampled broadly in semantic space as well as viewpoint space. Our representation is constructed to be semantically rich and even when paired with simple linear SVM classifiers is capable of highly discriminative performance. We have tested action bank on four major activity recognition benchmarks. In all cases, our performance is better than the state of the art, namely 98.2% on KTH (better by 3.3%), 95.0% on UCF Sports (better by 3.7%), 57.9% on UCF50 (baseline is 47.9%), and 26.9% on HMDB51 (baseline is 23.2%). 
Furthermore, when we analyze the classifiers, we find strong transfer of semantics from the constituent action detectors to the bank classifier.", "", "The aim of this paper is to address recognition of natural human actions in diverse and realistic video settings. This challenging but important subject has mostly been ignored in the past due to several problems one of which is the lack of realistic and annotated video datasets. Our first contribution is to address this limitation and to investigate the use of movie scripts for automatic annotation of human actions in videos. We evaluate alternative methods for action retrieval from scripts and show benefits of a text-based classifier. Using the retrieved action samples for visual learning, we next turn to the problem of action classification in video. We present a new method for video classification that builds upon and extends several recent ideas including local space-time features, space-time pyramids and multi-channel non-linear SVMs. The method is shown to improve state-of-the-art results on the standard KTH action dataset by achieving 91.8% accuracy. Given the inherent problem of noisy labels in automatic annotation, we particularly investigate and show high tolerance of our method to annotation errors in the training set. We finally apply the method to learning and classifying challenging action classes in movies and show promising results.", "In this paper, we propose a discriminative video representation for event detection over a large scale video dataset when only limited hardware resources are available. The focus of this paper is to effectively leverage deep Convolutional Neural Networks (CNNs) to advance event detection, where only frame level static descriptors can be extracted by the existing CNN toolkit. This paper makes two contributions to the inference of CNN video representation. 
First, while average pooling and max pooling have long been the standard approaches to aggregating frame level static features, we show that performance can be significantly improved by taking advantage of an appropriate encoding method. Second, we propose using a set of latent concept descriptors as the frame descriptor, which enriches visual information while keeping it computationally affordable. The integration of the two contributions results in a new state-of-the-art performance in event detection over the largest video datasets. Compared to improved Dense Trajectories, which has been recognized as the best video representation for event detection, our new representation improves the Mean Average Precision (mAP) from 27.6% to 36.8% for the TRECVID MEDTest 14 dataset and from 34.0% to 44.6% for the TRECVID MEDTest 13 dataset. This work is the core part of the winning solution of our CMU-Informedia team in TRECVID MED 2014 competition.", "Human action recognition in videos is a challenging problem with wide applications. State-of-the-art approaches often adopt the popular bag-of-features representation based on isolated local patches or temporal patch trajectories, where motion patterns like object relationships are mostly discarded. This paper proposes a simple representation specifically aimed at the modeling of such motion relationships. We adopt global and local reference points to characterize motion information, so that the final representation can be robust to camera movement. Our approach operates on top of visual codewords derived from local patch trajectories, and therefore does not require accurate foreground-background separation, which is typically a necessary step to model object relationships. Through an extensive experimental evaluation, we show that the proposed representation offers very competitive performance on challenging benchmark datasets, and combining it with the bag-of-features representation leads to substantial improvement. 
On Hollywood2, Olympic Sports, and HMDB51 datasets, we obtain 59.5%, 80.6%, and 40.7%, respectively, which are the best reported results to date.", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision." ] }
1505.00315
2950091256
In this paper, we propose to learn temporal embeddings of video frames for complex video analysis. Large quantities of unlabeled video data can be easily obtained from the Internet. These videos possess the implicit weak label that they are sequences of temporally and semantically coherent images. We leverage this information to learn temporal embeddings for video frames by associating frames with the temporal context that they appear in. To do this, we propose a scheme for incorporating temporal context based on past and future frames in videos, and compare this to other contextual representations. In addition, we show how data augmentation using multi-resolution samples and hard negatives helps to significantly improve the quality of the learned embeddings. We evaluate various design decisions for learning temporal embeddings, and show that our embeddings can improve performance for multiple video tasks such as retrieval, classification, and temporal order recovery in unconstrained Internet video.
There are several works which improve complex event recognition by combining multiple feature modalities @cite_23 @cite_10 @cite_16 . Another related line of work is the use of sub-events defined manually @cite_12 , or clustered from data @cite_1 , to improve recognition. Similarly, low-dimensional features from deep belief nets and sparse coding have been used @cite_18 . While these methods are targeted towards building features specifically for classification in limited settings, we propose a generic video frame representation which can capture semantic and temporal structure in videos.
{ "cite_N": [ "@cite_18", "@cite_1", "@cite_23", "@cite_16", "@cite_10", "@cite_12" ], "mid": [ "", "243985932", "1979246310", "2067646051", "2141939040", "1758470730" ], "abstract": [ "", "Complex events consist of various human interactions with different objects in diverse environments. The evidences needed to recognize events may occur in short time periods with variable lengths and can happen anywhere in a video. This fact prevents conventional machine learning algorithms from effectively recognizing the events. In this paper, we propose a novel method that can automatically identify the key evidences in videos for detecting complex events. Both static instances (objects) and dynamic instances (actions) are considered by sampling frames and temporal segments respectively. To compare the characteristic power of heterogeneous instances, we embed static and dynamic instances into a multiple instance learning framework via instance similarity measures, and cast the problem as an Evidence Selective Ranking (ESR) process. We impose l1 norm to select key evidences while using the Infinite Push Loss Function to enforce positive videos to have higher detection scores than negative videos. The Alternating Direction Method of Multipliers (ADMM) algorithm is used to solve the optimization problem. Experiments on large-scale video datasets show that our method can improve the detection accuracy while providing the unique capability in discovering key evidences of each complex event.", "This paper addresses the challenge of Multimedia Event Detection by proposing a novel method for high-level and low-level features fusion based on collective classification. Generally, the method consists of three steps: training a classifier from low-level features; encoding high-level features into graphs; and diffusing the scores on the established graph to obtain the final prediction. The final prediction is derived from multiple graphs each of which corresponds to a high-level feature. 
The paper investigates two graph construction methods using logarithmic and exponential loss functions, respectively, and two collective classification algorithms, i.e. Gibbs sampling and Markov random walk. The theoretical analysis demonstrates that the proposed method converges and is computationally scalable and the empirical analysis on TRECVID 2011 Multimedia Event Detection dataset validates its outstanding performance compared to state-of-the-art methods, with an added benefit of interpretability.", "Low-level appearance as well as spatio-temporal features, appropriately quantized and aggregated into Bag-of-Words (BoW) descriptors, have been shown to be effective in many detection and recognition tasks. However, their efficacy for complex event recognition in unconstrained videos has not been systematically evaluated. In this paper, we use the NIST TRECVID Multimedia Event Detection (MED11 [1]) open source dataset, containing annotated data for 15 high-level events, as the standardized test bed for evaluating the low-level features. This dataset contains a large number of user-generated video clips. We consider 7 different low-level features, both static and dynamic, using BoW descriptors within an SVM approach for event detection. We present performance results on the 15 MED11 events for each of the features as well as their combinations using a number of early and late fusion strategies and discuss their strengths and limitations.", "Combining multiple low-level visual features is a proven and effective strategy for a range of computer vision tasks. However, limited attention has been paid to combining such features with information from other modalities, such as audio and videotext, for large scale analysis of web videos. In our work, we rigorously analyze and combine a large set of low-level features that capture appearance, color, motion, audio and audio-visual co-occurrence patterns in videos. 
We also evaluate the utility of high-level (i.e., semantic) visual information obtained from detecting scene, object, and action concepts. Further, we exploit multimodal information by analyzing available spoken and videotext content using state-of-the-art automatic speech recognition (ASR) and videotext recognition systems. We combine these diverse features using a two-step strategy employing multiple kernel learning (MKL) and late score level fusion methods. Based on the TRECVID MED 2011 evaluations for detecting 10 events in a large benchmark set of ∼45000 videos, our system showed the best performance among the 19 international teams.", "In this paper we address the challenging problem of complex event recognition by using low-level events. In this problem, each complex event is captured by a long video in which several low-level events happen. The dataset contains several videos and due to the large number of videos and complexity of the events, the available annotation for the low-level events is very noisy which makes the detection task even more challenging. To tackle these problems we model the joint relationship between the low-level events in a graph where we consider a node for each low-level event and whenever there is a correlation between two low-level events the graph has an edge between the corresponding nodes. In addition, for decreasing the effect of weak and or irrelevant low-level event detectors we consider the presence absence of low-level events as hidden variables and learn a discriminative model by using latent SVM formulation. Using our learned model for the complex event recognition, we can also apply it for improving the detection of the low-level events in video clips which enables us to discover a conceptual description of the video. Thus our model can do complex event recognition and explain a video in terms of low-level events in a single framework. 
We have evaluated our proposed method over the most challenging multimedia event detection dataset. The experimental results reveal that the proposed method performs well compared to the baseline method. Further, our results of conceptual description of video show that our model is learned quite well to handle the noisy annotation and surpass the low-level event detectors which are directly trained on the raw features." ] }
1505.00315
2950091256
In this paper, we propose to learn temporal embeddings of video frames for complex video analysis. Large quantities of unlabeled video data can be easily obtained from the Internet. These videos possess the implicit weak label that they are sequences of temporally and semantically coherent images. We leverage this information to learn temporal embeddings for video frames by associating frames with the temporal context that they appear in. To do this, we propose a scheme for incorporating temporal context based on past and future frames in videos, and compare this to other contextual representations. In addition, we show how data augmentation using multi-resolution samples and hard negatives helps to significantly improve the quality of the learned embeddings. We evaluate various design decisions for learning temporal embeddings, and show that our embeddings can improve performance for multiple video tasks such as retrieval, classification, and temporal order recovery in unconstrained Internet video.
Learning features with unsupervised objectives has been a challenging task in the image and video domain @cite_36 @cite_35 @cite_13 . Notably, @cite_35 develops an Independent Subspace Analysis (ISA) model for feature learning using unlabeled video. Recent work from @cite_31 also hints at a similar approach to exploit the slowness prior in videos. Also, recent attempts extend such autoencoder techniques for next frame prediction in videos @cite_37 @cite_40 . These methods try to capitalize on the temporal continuity in videos to learn an LSTM @cite_6 representation for frame prediction. In contrast to these methods which aim to provide a unified representation for a complete temporal sequence, our work provides a simple yet powerful representation for independent video frames and images.
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_36", "@cite_6", "@cite_40", "@cite_31", "@cite_13" ], "mid": [ "1999192586", "1568514080", "2096691069", "1591801644", "2952453038", "1699156674", "1586730761" ], "abstract": [ "Previous work on action recognition has focused on adapting hand-designed local features, such as SIFT or HOG, from static images to the video domain. In this paper, we propose using unsupervised feature learning as a way to learn features directly from video data. More specifically, we present an extension of the Independent Subspace Analysis algorithm to learn invariant spatio-temporal features from unlabeled video data. We discovered that, despite its simplicity, this method performs surprisingly well when combined with deep learning techniques such as stacking and convolution to learn hierarchical representations. By replacing hand-designed features with our learned features, we achieve classification results superior to all previous published results on the Hollywood2, UCF, KTH and YouTube action recognition datasets. On the challenging Hollywood2 and YouTube action datasets we obtain 53.3 and 75.8 respectively, which are approximately 5 better than the current best published results. Further benefits of this method, such as the ease of training and the efficiency of training and prediction, will also be discussed. You can download our code and learned spatio-temporal features here: http: ai.stanford.edu ∼wzou", "We propose a strong baseline model for unsupervised feature learning using video data. By learning to predict missing frames or extrapolate future frames from an input video sequence, the model discovers both spatial and temporal correlations which are useful to represent complex deformations and motion patterns. The models we propose are largely borrowed from the language modeling literature, and adapted to the vision domain by quantizing the space of image patches into a large dictionary. 
We demonstrate the approach on both a filling and a generation task. For the first time, we show that, after training on natural videos, such a model can predict non-trivial motions over short video sequences.", "We present a biologically-motivated system for the recognition of actions from video sequences. The approach builds on recent work on object recognition based on hierarchical feedforward architectures [25, 16, 20] and extends a neurobiological model of motion processing in the visual cortex [10]. The system consists of a hierarchy of spatio-temporal feature detectors of increasing complexity: an input sequence is first analyzed by an array of motion- direction sensitive units which, through a hierarchy of processing stages, lead to position-invariant spatio-temporal feature detectors. We experiment with different types of motion-direction sensitive units as well as different system architectures. As in [16], we find that sparse features in intermediate stages outperform dense ones and that using a simple feature selection approach leads to an efficient system that performs better with far fewer features. We test the approach on different publicly available action datasets, in all cases achieving the highest results reported to date.", "We present a simple regularization technique for Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units. Dropout, the most successful technique for regularizing neural networks, does not work well with RNNs and LSTMs. In this paper, we show how to correctly apply dropout to LSTMs, and show that it substantially reduces overfitting on a variety of tasks. These tasks include language modeling, speech recognition, image caption generation, and machine translation.", "We use multilayer Long Short Term Memory (LSTM) networks to learn representations of video sequences. Our model uses an encoder LSTM to map an input sequence into a fixed length representation. 
This representation is decoded using single or multiple decoder LSTMs to perform different tasks, such as reconstructing the input sequence, or predicting the future sequence. We experiment with two kinds of input sequences - patches of image pixels and high-level representations (\"percepts\") of video frames extracted using a pretrained convolutional net. We explore different design choices such as whether the decoder LSTMs should condition on the generated output. We analyze the outputs of the model qualitatively to see how well the model can extrapolate the learned video representation into the future and into the past. We try to visualize and interpret the learned features. We stress test the model by running it on longer time scales and on out-of-domain data. We further evaluate the representations by finetuning them for a supervised learning problem - human action recognition on the UCF-101 and HMDB-51 datasets. We show that the representations help improve classification accuracy, especially when there are only a few training examples. Even models pretrained on unrelated datasets (300 hours of YouTube videos) can help action recognition performance.", "Current state-of-the-art classification and detection algorithms rely on supervised training. In this work we study unsupervised feature learning in the context of temporally coherent video data. We focus on feature learning from unlabeled video data, using the assumption that adjacent video frames contain semantically similar information. This assumption is exploited to train a convolutional pooling auto-encoder regularized by slowness and sparsity. We establish a connection between slow feature learning to metric learning and show that the trained encoder can be used to define a more temporally and semantically coherent metric.", "We address the problem of learning good features for understanding video data. 
We introduce a model that learns latent representations of image sequences from pairs of successive images. The convolutional architecture of our model allows it to scale to realistic image sizes whilst using a compact parametrization. In experiments on the NORB dataset, we show our model extracts latent \"flow fields\" which correspond to the transformation between the pair of input frames. We also use our model to extract low-level motion features in a multi-stage architecture for action recognition, demonstrating competitive performance on both the KTH and Hollywood2 datasets." ] }
1505.00315
2950091256
In this paper, we propose to learn temporal embeddings of video frames for complex video analysis. Large quantities of unlabeled video data can be easily obtained from the Internet. These videos possess the implicit weak label that they are sequences of temporally and semantically coherent images. We leverage this information to learn temporal embeddings for video frames by associating frames with the temporal context that they appear in. To do this, we propose a scheme for incorporating temporal context based on past and future frames in videos, and compare this to other contextual representations. In addition, we show how data augmentation using multi-resolution samples and hard negatives helps to significantly improve the quality of the learned embeddings. We evaluate various design decisions for learning temporal embeddings, and show that our embeddings can improve performance for multiple video tasks such as retrieval, classification, and temporal order recovery in unconstrained Internet video.
The idea of embedding words to a dense lower dimension vector space has been prevalent in the NLP community. The word2vec model @cite_8 tries to learn embeddings such that words with similar contexts in sentences are closer to each other. A related idea in computer vision is the embedding of text in the semantic visual space attempted by @cite_28 @cite_41 based on large image datasets labeled with captions or class names. While these methods focus on different scenarios for embedding text, the aim of our work is to generate an embedding for video frames.
{ "cite_N": [ "@cite_28", "@cite_41", "@cite_8" ], "mid": [ "2123024445", "1527575280", "1614298861" ], "abstract": [ "Modern visual recognition systems are often limited in their ability to scale to large numbers of object categories. This limitation is in part due to the increasing difficulty of acquiring sufficient training data in the form of labeled images as the number of object categories grows. One remedy is to leverage data from other sources - such as text data - both to train visual models and to constrain their predictions. In this paper we present a new deep visual-semantic embedding model trained to identify visual objects using both labeled image data as well as semantic information gleaned from unannotated text. We demonstrate that this model matches state-of-the-art performance on the 1000-class ImageNet object recognition challenge while making more semantically reasonable errors, and also show that the semantic information can be exploited to make predictions about tens of thousands of image labels not observed during training. Semantic knowledge improves such zero-shot predictions achieving hit rates of up to 18 across thousands of novel labels never seen by the visual model.", "Inspired by recent advances in multimodal learning and machine translation, we introduce an encoder-decoder pipeline that learns (a): a multimodal joint embedding space with images and text and (b): a novel language model for decoding distributed representations from our space. Our pipeline effectively unifies joint image-text embedding models with multimodal neural language models. We introduce the structure-content neural language model that disentangles the structure of a sentence to its content, conditioned on representations produced by the encoder. The encoder allows one to rank images and sentences while the decoder can generate novel descriptions from scratch. 
Using LSTM to encode sentences, we match the state-of-the-art performance on Flickr8K and Flickr30K without using object detections. We also set new best results when using the 19-layer Oxford convolutional network. Furthermore we show that with linear encoders, the learned embedding space captures multimodal regularities in terms of vector space arithmetic e.g. *image of a blue car* - \"blue\" + \"red\" is near images of red cars. Sample captions generated for 800 images are made available for comparison.", "" ] }
1505.00315
2950091256
In this paper, we propose to learn temporal embeddings of video frames for complex video analysis. Large quantities of unlabeled video data can be easily obtained from the Internet. These videos possess the implicit weak label that they are sequences of temporally and semantically coherent images. We leverage this information to learn temporal embeddings for video frames by associating frames with the temporal context that they appear in. To do this, we propose a scheme for incorporating temporal context based on past and future frames in videos, and compare this to other contextual representations. In addition, we show how data augmentation using multi-resolution samples and hard negatives helps to significantly improve the quality of the learned embeddings. We evaluate various design decisions for learning temporal embeddings, and show that our embeddings can improve performance for multiple video tasks such as retrieval, classification, and temporal order recovery in unconstrained Internet video.
In the previous sections, we introduced a model for representing context, and now move on to discuss the embedding function @math . In practice, the embedding function can be a CNN built from the frame pixels, or any underlying image or video representation. However, following the recent success of ImageNet trained CNN features for complex event videos @cite_5 @cite_30 , we choose to learn an embedding on top of the fully connected fc6 layer feature representation obtained by passing the frame through a standard CNN @cite_33 architecture. We use a simple model with a fully connected layer followed by a rectified linear unit (ReLU) and local response normalization (LRN) layer, with dropout regularization. In this architecture, the learned model parameters @math correspond to the weights and bias of our affine layer.
{ "cite_N": [ "@cite_30", "@cite_5", "@cite_33" ], "mid": [ "1670387562", "2950076437", "" ], "abstract": [ "We conduct an in-depth exploration of different strategies for doing event detection in videos using convolutional neural networks (CNNs) trained for image classification. We study different ways of performing spatial and temporal pooling, feature normalization, choice of CNN layers as well as choice of classifiers. Making judicious choices along these dimensions led to a very significant increase in performance over more naive approaches that have been used till now. We evaluate our approach on the challenging TRECVID MED'14 dataset with two popular CNN architectures pretrained on ImageNet. On this MED'14 dataset, our methods, based entirely on image-trained CNN features, can outperform several state-of-the-art non-CNN models. Our proposed late fusion of CNN- and motion-based features can further increase the mean average precision (mAP) on MED'14 from 34.95 to 38.74 . The fusion approach achieves the state-of-the-art classification performance on the challenging UCF-101 dataset.", "In this paper, we propose a discriminative video representation for event detection over a large scale video dataset when only limited hardware resources are available. The focus of this paper is to effectively leverage deep Convolutional Neural Networks (CNNs) to advance event detection, where only frame level static descriptors can be extracted by the existing CNN toolkit. This paper makes two contributions to the inference of CNN video representation. First, while average pooling and max pooling have long been the standard approaches to aggregating frame level static features, we show that performance can be significantly improved by taking advantage of an appropriate encoding method. Second, we propose using a set of latent concept descriptors as the frame descriptor, which enriches visual information while keeping it computationally affordable. 
The integration of the two contributions results in a new state-of-the-art performance in event detection over the largest video datasets. Compared to improved Dense Trajectories, which has been recognized as the best video representation for event detection, our new representation improves the Mean Average Precision (mAP) from 27.6 to 36.8 for the TRECVID MEDTest 14 dataset and from 34.0 to 44.6 for the TRECVID MEDTest 13 dataset. This work is the core part of the winning solution of our CMU-Informedia team in TRECVID MED 2014 competition.", "" ] }
1505.00315
2950091256
In this paper, we propose to learn temporal embeddings of video frames for complex video analysis. Large quantities of unlabeled video data can be easily obtained from the Internet. These videos possess the implicit weak label that they are sequences of temporally and semantically coherent images. We leverage this information to learn temporal embeddings for video frames by associating frames with the temporal context that they appear in. To do this, we propose a scheme for incorporating temporal context based on past and future frames in videos, and compare this to other contextual representations. In addition, we show how data augmentation using multi-resolution samples and hard negatives helps to significantly improve the quality of the learned embeddings. We evaluate various design decisions for learning temporal embeddings, and show that our embeddings can improve performance for multiple video tasks such as retrieval, classification, and temporal order recovery in unconstrained Internet video.
The context window size was set to @math , and the embedding dimension to @math . The learning rate was set to @math and gradually annealed in steps of @math . The training is typically completed within a day on 1 GPU with Caffe @cite_9 for a dataset of approximately @math videos. All videos were first down-sampled to @math fps before training. The embedding code as well as the learned models and video embeddings will be made publicly available upon publication.
{ "cite_N": [ "@cite_9" ], "mid": [ "2950094539" ], "abstract": [ "Caffe provides multimedia scientists and practitioners with a clean and modifiable framework for state-of-the-art deep learning algorithms and a collection of reference models. The framework is a BSD-licensed C++ library with Python and MATLAB bindings for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures. Caffe fits industry and internet-scale media needs by CUDA GPU computation, processing over 40 million images a day on a single K40 or Titan GPU ( @math 2.5 ms per image). By separating model representation from actual implementation, Caffe allows experimentation and seamless switching among platforms for ease of development and deployment from prototyping machines to cloud environments. Caffe is maintained and developed by the Berkeley Vision and Learning Center (BVLC) with the help of an active community of contributors on GitHub. It powers ongoing research projects, large-scale industrial applications, and startup prototypes in vision, speech, and multimedia." ] }
1505.00107
2953120548
Randomness extractors and error correcting codes are fundamental objects in computer science. Recently, there have been several natural generalizations of these objects, in the context and study of tamper resilient cryptography. These are seeded non-malleable extractors, introduced in [DW09]; seedless non-malleable extractors, introduced in [CG14b]; and non-malleable codes, introduced in [DPW10]. However, explicit constructions of non-malleable extractors appear to be hard, and the known constructions are far behind their non-tampered counterparts. In this paper we make progress towards solving the above problems. Our contributions are as follows. (1) We construct an explicit seeded non-malleable extractor for min-entropy @math . This dramatically improves all previous results and gives a simpler 2-round privacy amplification protocol with optimal entropy loss, matching the best known result in [Li15b]. (2) We construct the first explicit non-malleable two-source extractor for min-entropy @math , with output size @math and error @math . (3) We initiate the study of two natural generalizations of seedless non-malleable extractors and non-malleable codes, where the sources or the codeword may be tampered many times. We construct the first explicit non-malleable two-source extractor with tampering degree @math up to @math , which works for min-entropy @math , with output size @math and error @math . We show that we can efficiently sample uniformly from any pre-image. By the connection in [CG14b], we also obtain the first explicit non-malleable codes with tampering degree @math up to @math , relative rate @math , and error @math .
As mentioned above, seeded non-malleable extractors were introduced by Dodis and Wichs in @cite_12 , to study the problem of privacy amplification with an active adversary.
{ "cite_N": [ "@cite_12" ], "mid": [ "2140805804" ], "abstract": [ "We study the question of basing symmetric key cryptography on weak secrets. In this setting, Alice and Bob share an n-bit secret W, which might not be uniformly random, but the adversary has at least k bits of uncertainty about it (formalized using conditional min-entropy). Since standard symmetric-key primitives require uniformly random secret keys, we would like to construct an authenticated key agreement protocol in which Alice and Bob use W to agree on a nearly uniform key R, by communicating over a public channel controlled by an active adversary Eve. We study this question in the information theoretic setting where the attacker is computationally unbounded. We show that single-round (i.e. one message) protocols do not work when k ≤ n/2, and require poor parameters even when k > n/2. On the other hand, for arbitrary values of k, we design a communication efficient two-round (challenge-response) protocol extracting nearly k random bits. This dramatically improves the previous construction of Renner and Wolf [32], which requires Θ(λ + log(n)) rounds where λ is the security parameter. Our solution takes a new approach by studying and constructing \"non-malleable\" seeded randomness extractors -- if an attacker sees a random seed X and comes up with an arbitrarily related seed X', then we bound the relationship between R= Ext(W;X) and R' = Ext(W;X'). We also extend our two-round key agreement protocol to the \"fuzzy\" setting, where Alice and Bob share \"close\" (but not equal) secrets WA and WB, and to the Bounded Retrieval Model (BRM) where the size of the secret W is huge." ] }
1505.00107
2953120548
Randomness extractors and error correcting codes are fundamental objects in computer science. Recently, there have been several natural generalizations of these objects, in the context and study of tamper resilient cryptography. These are seeded non-malleable extractors, introduced in [DW09]; seedless non-malleable extractors, introduced in [CG14b]; and non-malleable codes, introduced in [DPW10]. However, explicit constructions of non-malleable extractors appear to be hard, and the known constructions are far behind their non-tampered counterparts. In this paper we make progress towards solving the above problems. Our contributions are as follows. (1) We construct an explicit seeded non-malleable extractor for min-entropy @math . This dramatically improves all previous results and gives a simpler 2-round privacy amplification protocol with optimal entropy loss, matching the best known result in [Li15b]. (2) We construct the first explicit non-malleable two-source extractor for min-entropy @math , with output size @math and error @math . (3) We initiate the study of two natural generalizations of seedless non-malleable extractors and non-malleable codes, where the sources or the codeword may be tampered many times. We construct the first explicit non-malleable two-source extractor with tampering degree @math up to @math , which works for min-entropy @math , with output size @math and error @math . We show that we can efficiently sample uniformly from any pre-image. By the connection in [CG14b], we also obtain the first explicit non-malleable codes with tampering degree @math up to @math , relative rate @math , and error @math .
Here, while one can still design protocols for an active adversary, the major goal is to design a protocol that uses as few rounds of interaction as possible, and outputs a uniform random string @math whose length is as close to @math as possible (the difference is called the entropy loss). When the entropy rate of @math is large, i.e., bigger than @math , there exist protocols that take only one round (e.g., @cite_39 , @cite_36 ). However, these protocols all have very large entropy loss. On the other hand, @cite_12 showed that when the entropy rate of @math is smaller than @math , no one-round protocol exists; furthermore, the length of @math has to be at least @math smaller than @math . Thus, the natural goal is to design a two-round protocol with such optimal entropy loss. There has been a lot of effort along this line @cite_39 , @cite_36 , @cite_12 , @cite_42 , @cite_40 , @cite_5 , @cite_28 , @cite_38 , @cite_52 , @cite_27 . However, all protocols before the work of @cite_28 either need to use @math rounds, or need to incur an entropy loss of @math .
{ "cite_N": [ "@cite_38", "@cite_36", "@cite_28", "@cite_42", "@cite_52", "@cite_39", "@cite_40", "@cite_27", "@cite_5", "@cite_12" ], "mid": [ "", "2070616660", "", "2163122958", "", "2167765756", "2097079400", "", "", "2140805804" ], "abstract": [ "", "Consider two parties holding samples from correlated distributions @math and @math , respectively, where these samples are within distance @math of each other in some metric space. The parties wish to agree on a close-to-uniformly distributed secret key @math by sending a single message over an insecure channel controlled by an all-powerful adversary who may read and modify anything sent over the channel. We consider both the keyless case, where the parties share no additional secret information, and the keyed case, where the parties share a long-term secret @math that they can use to generate a sequence of session keys @math using multiple pairs @math . The former has applications to, e.g., biometric authentication, while the latter arises in, e.g., the bounded-storage model with errors. We show solutions that improve upon previous work in several respects. The best prior solution for the keyless case with no errors (i.e., @math ) requires the min-entropy of @math to exceed @math , where @math is the bit length of @math . Our solution applies whenever the min-entropy of @math exceeds the minimal threshold @math , and yields a longer key.", "", "Unconditional cryptographic security cannot be generated simply from scratch, but must be based on some given primitive to start with (such as, most typically, a private key). Whether or not this implies that such a high level of security is necessarily impractical depends on how weak these basic primitives can be, and how realistic it is therefore to realize or find them in|classical or quantum|reality. A natural way of minimizing the required resources for information-theoretic security is to reduce the length of the private key. 
In this paper, we focus on the level of its secrecy instead and show that even if the communication channel is completely insecure, a shared string of which an arbitrarily large fraction is known to the adversary can be used for achieving fundamental cryptographic goals such as message authentication and encryption. More precisely, we give protocols, using such a weakly secret key, allowing for both the exchange of authenticated messages and the extraction of the key’s entire amount of privacy into a shorter virtually secret key. Our schemes, which are highly interactive, show the power of two-way communication in this context: Under the given conditions, the same objectives cannot be achieved by one-way communication only.", "", "Privacy amplification allows two parties Alice and Bob knowing a partially secret string S to extract, by communication over a public channel, a shorter, highly secret string S'. Bennett, Brassard, Crepeau, and Maurer showed that the length of S' can be almost equal to the conditional Renyi entropy of S given an opponent Eve's knowledge. All previous results on privacy amplification assumed that Eve has access to the public channel but is passive or, equivalently, that messages inserted by Eve can be detected by Alice and Bob. In this paper we consider privacy amplification secure even against active opponents. First it is analyzed under what conditions information-theoretically secure authentication is possible even though the common key is only partially secret. This result is used to prove that privacy amplification can be secure against an active opponent and that the size of S' can be almost equal to Eve's min-entropy about S minus 2n/3 if S is an n-bit string.
Moreover, it is shown that for sufficiently large n privacy amplification is possible when Eve's min-entropy about S exceeds only n/2 rather than 2n/3.", "We consider information-theoretic key agreement between two parties sharing somewhat different versions of a secret w that has relatively little entropy. Such key agreement, also known as information reconciliation and privacy amplification over unsecured channels, was shown to be theoretically feasible by Renner and Wolf (Eurocrypt 2004), although no protocol that runs in polynomial time was described. We propose a protocol that is not only polynomial-time, but actually practical, requiring only a few seconds on consumer-grade computers. Our protocol can be seen as an interactive version of robust fuzzy extractors (, Crypto 2006). While robust fuzzy extractors, due to their noninteractive nature, require w to have entropy at least half its length, we have no such constraint. In fact, unlike in prior solutions, in our solution the entropy loss is essentially unrelated to the length or the entropy of w , and depends only on the security parameter.", "", "", "We study the question of basing symmetric key cryptography on weak secrets. In this setting, Alice and Bob share an n-bit secret W, which might not be uniformly random, but the adversary has at least k bits of uncertainty about it (formalized using conditional min-entropy). Since standard symmetric-key primitives require uniformly random secret keys, we would like to construct an authenticated key agreement protocol in which Alice and Bob use W to agree on a nearly uniform key R, by communicating over a public channel controlled by an active adversary Eve. We study this question in the information theoretic setting where the attacker is computationally unbounded. We show that single-round (i.e.
one message) protocols do not work when k ≤ n/2, and require poor parameters even when n/2 < k ≪ n. On the other hand, for arbitrary values of k, we design a communication efficient two-round (challenge-response) protocol extracting nearly k random bits. This dramatically improves the previous construction of Renner and Wolf [32], which requires Θ(λ + log(n)) rounds where λ is the security parameter. Our solution takes a new approach by studying and constructing \"non-malleable\" seeded randomness extractors -- if an attacker sees a random seed X and comes up with an arbitrarily related seed X', then we bound the relationship between R= Ext(W;X) and R' = Ext(W;X'). We also extend our two-round key agreement protocol to the \"fuzzy\" setting, where Alice and Bob share \"close\" (but not equal) secrets WA and WB, and to the Bounded Retrieval Model (BRM) where the size of the secret W is huge." ] }
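The privacy-amplification step summarized in the abstracts above (distilling a short, nearly uniform key from a weak shared secret using a public random seed) can be sketched as follows. This is a toy illustration only: real constructions use 2-universal hashing, and non-malleable extractors against active attackers; the seeded SHA-256, the sample secret, and the key length here are all illustrative assumptions, not a construction from the cited papers.

```python
import hashlib
import os

def privacy_amplify(weak_secret: bytes, seed: bytes, out_len: int) -> bytes:
    """Toy seeded extractor: hash the weak secret together with a public
    random seed to produce a shorter, nearly uniform key. SHA-256 stands
    in for the 2-universal hash family used in the actual constructions."""
    return hashlib.sha256(seed + weak_secret).digest()[:out_len]

# Alice and Bob hold the same weak secret; Eve may know a large fraction
# of it and also sees the seed, which is sent over the public channel.
weak_secret = b"correct horse battery staple"  # hypothetical shared string
seed = os.urandom(16)                          # public randomness

key_alice = privacy_amplify(weak_secret, seed, 16)
key_bob = privacy_amplify(weak_secret, seed, 16)
```

Both parties derive the same short key from the same (secret, seed) pair, which is the property the interactive protocols build on; active-attacker security requires the additional authentication machinery the abstracts describe.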
1505.00274
2951006487
Expectation maximization (EM) has recently been shown to be an efficient algorithm for learning finite-state controllers (FSCs) in large decentralized POMDPs (Dec-POMDPs). However, current methods use fixed-size FSCs and often converge to maxima that are far from optimal. This paper considers a variable-size FSC to represent the local policy of each agent. These variable-size FSCs are constructed using a stick-breaking prior, leading to a new framework called decentralized stick-breaking policy representation (Dec-SBPR). This approach learns the controller parameters with a variational Bayesian algorithm without having to assume that the Dec-POMDP model is available. The performance of Dec-SBPR is demonstrated on several benchmark problems, showing that the algorithm scales to large problems while outperforming other state-of-the-art methods.
The proof of Theorem in the single-agent case ( @math ) has been given in @cite_14 , where the corresponding result for the empirical value function is established. The proof for @math is similar and is omitted here.
{ "cite_N": [ "@cite_14" ], "mid": [ "2114235770" ], "abstract": [ "We consider the problem of multi-task reinforcement learning (MTRL) in multiple partially observable stochastic environments. We introduce the regionalized policy representation (RPR) to characterize the agent's behavior in each environment. The RPR is a parametric model of the conditional distribution over current actions given the history of past actions and observations; the agent's choice of actions is directly based on this conditional distribution, without an intervening model to characterize the environment itself. We propose off-policy batch algorithms to learn the parameters of the RPRs, using episodic data collected when following a behavior policy, and show their linkage to policy iteration. We employ the Dirichlet process as a nonparametric prior over the RPRs across multiple environments. The intrinsic clustering property of the Dirichlet process imposes sharing of episodes among similar environments, which effectively reduces the number of episodes required for learning a good policy in each environment, when data sharing is appropriate. The number of distinct RPRs and the associated clusters (the sharing patterns) are automatically discovered by exploiting the episodic data as well as the nonparametric nature of the Dirichlet process. We demonstrate the effectiveness of the proposed RPR as well as the RPR-based MTRL framework on various problems, including grid-world navigation and multi-aspect target classification. The experimental results show that the RPR is a competitive reinforcement learning algorithm in partially observable domains, and the MTRL consistently achieves better performance than single task reinforcement learning." ] }
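The stick-breaking prior underlying Dec-SBPR in the record above draws a countable set of weights by repeatedly breaking off a Beta-distributed fraction of a unit stick. The truncated sampler below is a generic sketch of that construction (not the paper's variational algorithm); the parameter values are illustrative assumptions.

```python
import random

def stick_breaking_weights(alpha: float, truncation: int, rng: random.Random):
    """Sample mixture weights from a truncated stick-breaking prior:
    repeatedly break off a Beta(1, alpha)-distributed fraction of the
    remaining stick. Smaller alpha concentrates mass on the first few
    pieces, which in a controller-size prior favors compact controllers."""
    weights, remaining = [], 1.0
    for _ in range(truncation):
        frac = rng.betavariate(1.0, alpha)  # fraction of the stick to break off
        weights.append(remaining * frac)
        remaining *= 1.0 - frac
    weights.append(remaining)  # leftover mass assigned to the final piece
    return weights

w = stick_breaking_weights(alpha=1.0, truncation=10, rng=random.Random(0))
```

The weights sum to one by construction, so they can serve directly as a prior over the (effective) number of controller nodes.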
1505.00277
1803270570
We present an approach for the detection of coordinate-term relationships between entities from the software domain that refer to Java classes. Usually, relations are found by examining corpus statistics associated with text entities. In some technical domains, however, we have access to additional information about the real-world objects named by the entities, suggesting that coupling information about the "grounded" entities with corpus statistics might lead to improved methods for relation discovery. To this end, we develop a similarity measure for Java classes using distributional information about how they are used in software, which we combine with corpus statistics on the distribution of contexts in which the classes appear in text. Using our approach, cross-validation accuracy on this dataset can be improved dramatically, from around 60% to 88%. Human labeling results show that our classifier has an F1 score of 86% over the top 1000 predicted pairs.
Previous work on semantic relation discovery, in particular coordinate term discovery, has used two main approaches. The first is based on the insight that certain lexical patterns indicate a semantic relationship with high precision, as initially observed by Hearst @cite_40 . For example, the conjunction pattern ``X and Y'' indicates that @math and @math are coordinate terms. Other pattern-based classifiers have been introduced for meronyms @cite_39 , synonyms @cite_25 , and general analogy relations @cite_18 . The second approach relies on the notion that words that appear in a similar context are likely to be semantically similar. In contrast to pattern-based classifiers, context distributional similarity approaches are normally higher in recall @cite_22 @cite_38 @cite_6 @cite_30 . In this work we attempt to label samples extracted with high-precision Hearst patterns, using information from higher-recall methods.
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_18", "@cite_22", "@cite_6", "@cite_39", "@cite_40", "@cite_25" ], "mid": [ "2142086811", "", "2102515914", "2950928021", "2107229268", "2167061159", "2068737686", "1554237613" ], "abstract": [ "Semantic taxonomies such as WordNet provide a rich source of knowledge for natural language processing applications, but are expensive to build, maintain, and extend. Motivated by the problem of automatically constructing and extending such taxonomies, in this paper we present a new algorithm for automatically learning hypernym (is-a) relations from text. Our method generalizes earlier work that had relied on using small numbers of hand-crafted regular expression patterns to identify hypernym pairs. Using \"dependency path\" features extracted from parse trees, we introduce a general-purpose formalization and generalization of these patterns. Given a training set of text containing known hypernym pairs, our algorithm automatically extracts useful dependency paths and applies them to new corpora to identify novel pairs. On our evaluation task (determining whether two nouns in a news article participate in a hypernym relationship), our automatically extracted database of hypernyms attains both higher precision and higher recall than WordNet.", "", "Existing statistical approaches to natural language problems are very coarse approximations to the true complexity of language processing. As such, no single technique will be best for all problem instances. Many researchers are examining ensemble methods that combine the output of successful, separately developed modules to create more accurate solutions. This paper examines three merging rules for combining probability distributions: the well known mixture rule, the logarithmic rule, and a novel product rule. These rules were applied with state-of-the-art results to two problems commonly used to assess human mastery of lexical semantics: synonym questions and analogy questions. 
All three merging rules result in ensembles that are more accurate than any of their component modules. The differences among the three rules are not statistically significant, but it is suggestive that the popular mixture rule is not the best rule for either of the two problems.", "We describe and experimentally evaluate a method for automatically clustering words according to their distribution in particular syntactic contexts. Deterministic annealing is used to find lowest distortion sets of clusters. As the annealing parameter increases, existing clusters become unstable and subdivide, yielding a hierarchical ``soft'' clustering of the data. Clusters are used as the basis for class models of word cooccurrence, and the models evaluated with respect to held-out test data.", "Lexical-semantic resources, including thesauri and WORDNET, have been successfully incorporated into a wide range of applications in Natural Language Processing. However they are very difficult and expensive to create and maintain, and their usefulness has been severely hampered by their limited coverage, bias and inconsistency. Automated and semi-automated methods for developing such resources are therefore crucial for further resource development and improved application performance. Systems that extract thesauri often identify similar words using the distributional hypothesis that similar words appear in similar contexts. This approach involves using corpora to examine the contexts each word appears in and then calculating the similarity between context distributions. Different definitions of context can be used, and I begin by examining how different types of extracted context influence similarity. To be of most benefit these systems must be capable of finding synonyms for rare words. Reliable context counts for rare events can only be extracted from vast collections of text. In this dissertation I describe how to extract contexts from a corpus of over 2 billion words. 
I describe techniques for processing text on this scale and examine the trade-off between context accuracy, information content and quantity of text analysed. Distributional similarity is at best an approximation to semantic similarity. I develop improved approximations motivated by the intuition that some events in the context distribution are more indicative of meaning than others. For instance, the object-of-verb context wear is far more indicative of a clothing noun than get. However, existing distributional techniques do not effectively utilise this information. The new context-weighted similarity metric I propose in this dissertation significantly outperforms every distributional similarity metric described in the literature. Nearest-neighbour similarity algorithms scale poorly with vocabulary and context vector size. To overcome this problem I introduce a new context-weighted approximation algorithm with bounded complexity in context vector size that significantly reduces the system runtime with only a minor performance penalty. I also describe a parallelized version of the system that runs on a Beowulf cluster for the 2 billion word experiments. To evaluate the context-weighted similarity measure I compare ranked similarity lists against gold-standard resources using precision and recall-based measures from Information Retrieval,", "The discovery of semantic relations from text becomes increasingly important for applications such as Question Answering, Information Extraction, Text Summarization, Text Understanding, and others. The semantic relations are detected by checking selectional constraints. This paper presents a method and its results for learning semantic constraints to detect part-whole relations. Twenty constraints were found. 
Their validity was tested on a 10,000 sentence corpus, and the targeted part-whole relations were detected with an accuracy of 83%.", "We describe a method for the automatic acquisition of the hyponymy lexical relation from unrestricted text. Two goals motivate the approach: (i) avoidance of the need for pre-encoded knowledge and (ii) applicability across a wide range of text. We identify a set of lexico-syntactic patterns that are easily recognizable, that occur frequently and across text genre boundaries, and that indisputably indicate the lexical relation of interest. We describe a method for discovering these patterns and suggest that other lexical relations will also be acquirable in this way. A subset of the acquisition algorithm is implemented and the results are used to augment and critique the structure of a large hand-built thesaurus. Extensions and applications to areas such as information retrieval are suggested.", "There have been many proposals to compute similarities between words based on their distributions in contexts. However, these approaches do not distinguish between synonyms and antonyms. We present two methods for identifying synonyms among distributionally similar words." ] }
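The high-precision conjunction cue described in the record above (the pattern ``X and Y'' signaling that X and Y are coordinate terms) can be sketched with a regular expression over class-like tokens. The token shape, the regex, and the sample sentences are made-up illustrations, not the paper's actual corpus or extractor.

```python
import re
from collections import Counter

# "X and Y" over capitalized (class-like) tokens: a high-precision cue
# that X and Y are coordinate terms. Case-sensitive by design.
PATTERN = re.compile(r"\b([A-Z]\w+) and ([A-Z]\w+)\b")

def coordinate_pairs(sentences):
    """Count candidate coordinate-term pairs extracted by the pattern,
    normalizing each pair to a sorted tuple so (X, Y) == (Y, X)."""
    counts = Counter()
    for s in sentences:
        for x, y in PATTERN.findall(s):
            counts[tuple(sorted((x, y)))] += 1
    return counts

corpus = [
    "Use ArrayList and LinkedList when order matters.",
    "Both HashMap and TreeMap implement the Map interface.",
    "ArrayList and LinkedList differ in random-access cost.",
]
pairs = coordinate_pairs(corpus)
```

Counting occurrences across a corpus, as here, gives the high-precision (but low-recall) candidate list that the paper then combines with higher-recall distributional evidence.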
1505.00277
1803270570
We present an approach for the detection of coordinate-term relationships between entities from the software domain that refer to Java classes. Usually, relations are found by examining corpus statistics associated with text entities. In some technical domains, however, we have access to additional information about the real-world objects named by the entities, suggesting that coupling information about the "grounded" entities with corpus statistics might lead to improved methods for relation discovery. To this end, we develop a similarity measure for Java classes using distributional information about how they are used in software, which we combine with corpus statistics on the distribution of contexts in which the classes appear in text. Using our approach, cross-validation accuracy on this dataset can be improved dramatically, from around 60% to 88%. Human labeling results show that our classifier has an F1 score of 86% over the top 1000 predicted pairs.
The aim of grounded language learning methods is to learn a mapping between natural language (words and sentences) and the observed world @cite_7 @cite_43 @cite_5 , where more recent work includes grounding language to the physical world @cite_28 , and grounding of entire discourses @cite_26 . Early work in this field relied on supervised aligned sentence-to-meaning data @cite_23 @cite_19 . However, in later work the supervision constraint has been gradually relaxed @cite_8 @cite_3 . Relative to prior work on grounded language acquisition, we use a very rich and complex representation of entities and their relationships (through software code). However, we consider a very constrained language task, namely coordinate term discovery.
{ "cite_N": [ "@cite_26", "@cite_7", "@cite_8", "@cite_28", "@cite_3", "@cite_43", "@cite_19", "@cite_23", "@cite_5" ], "mid": [ "2187430030", "2160783091", "2112999531", "2186283982", "2116716943", "1481820510", "2148833958", "1496189301", "" ], "abstract": [ "Grounded language learning, the task of mapping from natural language to a representation of meaning, has attracted more and more interest in recent years. In most work on this topic, however, utterances in a conversation are treated independently and discourse structure information is largely ignored. In the context of language acquisition, this independence assumption discards cues that are important to the learner, e.g., the fact that consecutive utterances are likely to share the same referent (, 2013). The current paper describes an approach to the problem of simultaneously modeling grounded language at the sentence and discourse levels. We combine ideas from parsing and grammar induction to produce a parser that can handle long input strings with thousands of tokens, creating parse trees that represent full discourses. By casting grounded language learning as a grammatical inference task, we use our parser to extend the work of (2012), investigating the importance of discourse continuity in children’s language acquisition and its interaction with social cues. Our model boosts performance in a language acquisition task and yields good discourse segmentations compared with human annotators.", "This paper presents a computational study of part of the lexical-acquisition task faced by children, namely the acquisition of word-to-meaning mappings. It first approximates this task as a formal mathematical problem. It then presents an implemented algorithm for solving this problem, illustrating its operation on a small example. This algorithm offers one precise interpretation of the intuitive notions of cross-situational learning and the principle of contrast applied between words in an utterance. 
It robustly learns a homonymous lexicon despite noisy multi-word input, in the presence of referential uncertainty, with no prior knowledge that is specific to the language being learned. Computational simulations demonstrate the robustness of this algorithm and illustrate how algorithms based on cross-situational learning and the principle of contrast might be able to solve lexical-acquisition problems of the size faced by children, under weak, worst-case assumptions about the type and quantity of data available.", "This paper presents a method for learning a semantic parser from ambiguous supervision. Training data consists of natural language sentences annotated with multiple potential meaning representations, only one of which is correct. Such ambiguous supervision models the type of supervision that can be more naturally available to language-learning systems. Given such weak supervision, our approach produces a semantic parser that maps sentences into meaning representations. An existing semantic parsing learning system that can only learn from unambiguous supervision is augmented to handle ambiguous supervision. Experimental results show that the resulting system is able to cope with ambiguities and learn accurate semantic parsers.", "This paper introduces Logical Semantics with Perception (LSP), a model for grounded language acquisition that learns to map natural language statements to their referents in a physical environment. For example, given an image, LSP can map the statement “blue mug on the table” to the set of image segments showing blue mugs on tables. LSP learns physical representations for both categorical (“blue,” “mug”) and relational (“on”) language, and also learns to compose these representations to produce the referents of entire statements. 
We further introduce a weakly supervised training procedure that estimates LSP’s parameters using annotated referents for entire statements, without annotated referents for individual words or the parse structure of the statement. We perform experiments on two applications: scene understanding and geographical question answering. We find that LSP outperforms existing, less expressive models that cannot represent relational language. We further find that weakly supervised training is competitive with fully supervised training while requiring significantly less annotation effort.", "A central problem in grounded language acquisition is learning the correspondences between a rich world state and a stream of text which references that world state. To deal with the high degree of ambiguity present in this setting, we present a generative model that simultaneously segments the text into utterances and maps each utterance to a meaning representation grounded in the world state. We show that our model generalizes across three domains of increasing difficulty---Robocup sportscasting, weather forecasts (a new domain), and NFL recaps.", "This paper presents a multimodal learning system that can ground spoken names of objects in their physical referents and learn to recognize those objects simultaneously from naturally co-occurring multisensory input. There are two technical problems involved: (1) the correspondence problem in symbol grounding - how to associate words (symbols) with their perceptually grounded meanings from multiple cooccurrences between words and objects in the physical environment. (2) object learning - how to recognize and categorize visual objects. We argue that those two problems can be fundamentally simplified by considering them in a general system and incorporating the spatio-temporal and cross-modal constraints of multimodal data. The system collects egocentric data including image sequences as well as speech while users perform natural tasks. 
It is able to automatically infer the meanings of object names from vision, and categorize objects based on teaching signals potentially encoded in speech. The experimental results reported in this paper reveal the effectiveness of using multimodal data and integrating heterogeneous techniques in machine learning, natural language processing and computer vision.", "We introduce a learning semantic parser, Scissor, that maps natural-language sentences to a detailed, formal, meaning-representation language. It first uses an integrated statistical parser to produce a semantically augmented parse tree, in which each non-terminal node has both a syntactic and a semantic label. A compositional-semantics procedure is then used to map the augmented parse tree into a final meaning representation. We evaluate the system in two domains, a natural-language database interface and an interpreter for coaching instructions in robotic soccer. We present experimental results demonstrating that Scissor produces more accurate semantic representations than several previous approaches.", "This paper addresses the problem of mapping natural language sentences to lambda–calculus encodings of their meaning. We describe a learning algorithm that takes as input a training set of sentences labeled with expressions in the lambda calculus. The algorithm induces a grammar for the problem, along with a log-linear model that represents a distribution over syntactic and semantic analyses conditioned on the input sentence. We apply the method to the task of learning natural language interfaces to databases and show that the learned parsers outperform previous methods in two benchmark database domains.", "" ] }
1505.00277
1803270570
We present an approach for the detection of coordinate-term relationships between entities from the software domain that refer to Java classes. Usually, relations are found by examining corpus statistics associated with text entities. In some technical domains, however, we have access to additional information about the real-world objects named by the entities, suggesting that coupling information about the "grounded" entities with corpus statistics might lead to improved methods for relation discovery. To this end, we develop a similarity measure for Java classes using distributional information about how they are used in software, which we combine with corpus statistics on the distribution of contexts in which the classes appear in text. Using our approach, cross-validation accuracy on this dataset can be improved dramatically, from around 60% to 88%. Human labeling results show that our classifier has an F1 score of 86% over the top 1000 predicted pairs.
In recent work by NLP and software engineering researchers, statistical language models have been adapted for modeling software code. NLP models have been used to enhance a variety of software development tasks such as code and comment token completion @cite_36 @cite_24 @cite_35 @cite_17 , analysis of code variable names @cite_44 @cite_27 , and mining software repositories @cite_15 . This has been complemented by work from the programming language research community for structured prediction of code syntax trees @cite_12 . To the best of our knowledge, there is no prior work on discovering semantic relations for software entities.
{ "cite_N": [ "@cite_35", "@cite_36", "@cite_44", "@cite_24", "@cite_27", "@cite_15", "@cite_12", "@cite_17" ], "mid": [ "2126793110", "2122076271", "", "2062068644", "", "2093938715", "2020193886", "" ], "abstract": [ "Statistical language models have successfully been used to describe and analyze natural language documents. Recent work applying language models to programming languages is focused on the task of predicting code, while mainly ignoring the prediction of programmer comments. In this work, we predict comments from JAVA source files of open source projects, using topic models and n-grams, and we analyze the performance of the models given varying amounts of background data on the project being predicted. We evaluate models on their comment-completion capability in a setting similar to code-completion tools built into standard code editors, and show that using a comment completion tool can save up to 47% of the comment typing.", "Abbreviation Completion is a novel technique to improve the efficiency of code-writing by supporting code completion of multiple keywords based on non-predefined abbreviated input -- a different approach from conventional code completion that finds one keyword at a time based on an exact character match. Abbreviated input is expanded into keywords by a Hidden Markov Model learned from a corpus of existing code. The technique does not require the user to memorize abbreviations and provides incremental feedback of the most likely completions. This paper presents the algorithm for abbreviation completion, integrated with a new user interface for multiple-keyword completion. We tested the system by sampling 3000 code lines from open source projects and found that more than 98% of the code lines could be resolved from acronym-like abbreviations. 
A user study found a 30% reduction in time usage and a 41% reduction of keystrokes over conventional code completion.", "", "This paper investigates the use of a natural language processing technique that automatically detects project-specific code templates (i.e., frequently used code blocks), which can be made available to software developers within an integrated development environment. During software development, programmers often and in some cases unknowingly rewrite the same code block that represents some functionality. These frequently used code blocks can inform the existence and possible use of code templates. Many existing code editors support code templates, but programmers are expected to manually define these templates and subsequently add them as templates in the editor. Furthermore, the support of editors to provide templates based on the editing context is still limited. The use of n-gram language models within the context of software development is described and evaluated to overcome these restrictions. The technique can search for project-specific code templates and present these templates to the programmer based on the current editing context.", "", "Program specifications are important for many tasks during software design, development, and maintenance. Among these, temporal specifications are particularly useful. They express formal correctness requirements of an application's ordering of specific actions and events during execution, such as the strict alternation of acquisition and release of locks. Despite their importance, temporal specifications are often missing, incomplete, or described only informally. Many techniques have been proposed that mine such specifications from execution traces or program source code. However, existing techniques mine only simple patterns, or they mine a single complex pattern that is restricted to a particular set of manually selected events. 
There is no practical, automatic technique that can mine general temporal properties from execution traces. In this paper, we present Javert, the first general specification mining framework that can learn, fully automatically, complex temporal properties from execution traces. The key insight behind Javert is that real, complex specifications can be formed by composing instances of small generic patterns, such as the alternating pattern (ab)* and the resource usage pattern (ab*c)*. In particular, Javert learns simple generic patterns and composes them using sound rules to construct large, complex specifications. We have implemented the algorithm in a practical tool and conducted an extensive empirical evaluation on several open source software projects. Our results are promising; they show that Javert is scalable, general, and precise. It discovered many interesting, nontrivial specifications in real-world code that are beyond the reach of existing automatic techniques.", "Statistical models of source code can be used to improve code completion systems, assistive interfaces, and code compression engines. We are developing a statistical model where programs are represented as syntax trees, rather than simply a stream of tokens. Our model, initially for the Java language, combines corpus data with information about syntax, types and the program context. We tested this model using open source code corpuses and find that our model is significantly more accurate than the current state of the art, providing initial evidence for our claim that combining structural and statistical information is a fruitful strategy.", "" ] }
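The n-gram code-completion idea surveyed in the record above reduces, in its simplest form, to counting token bigrams over a code corpus and proposing the most frequent successor of the previous token. The mini-corpus of Java-like token lines below is invented for illustration; this is a toy sketch, not any of the cited systems.

```python
from collections import Counter, defaultdict

def train_bigram(token_lines):
    """Count successor frequencies for each token over whitespace-split
    code lines; "<s>" marks the start of a line."""
    successors = defaultdict(Counter)
    for line in token_lines:
        tokens = ["<s>"] + line.split()
        for prev, nxt in zip(tokens, tokens[1:]):
            successors[prev][nxt] += 1
    return successors

def complete(successors, prev_token, k=3):
    """Propose the k most frequent tokens seen after prev_token."""
    return [tok for tok, _ in successors[prev_token].most_common(k)]

# Tiny illustrative "corpus" of tokenized Java-like lines (made up).
corpus = [
    "for ( int i = 0 ;",
    "for ( int j = 0 ;",
    "if ( x == 0 )",
]
model = train_bigram(corpus)
```

Real systems use higher-order n-grams with smoothing (or syntax-tree models, as in the last abstract), but the counting core is the same.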
1505.00076
1566257247
A recent approach in modeling and analysis of the supply and demand in heterogeneous wireless cellular networks has been the use of two independent Poisson point processes (PPPs) for the locations of base stations (BSs) and user equipments (UEs). This popular approach has two major shortcomings. First, although the PPP model may be a fitting one for the BS locations, it is less adequate for the UE locations mainly due to the fact that the model is not adjustable (tunable) to represent the severity of the heterogeneity (non-uniformity) in the UE locations. Besides, the independence assumption between the two PPPs does not capture the often-observed correlation between the UE and BS locations. This paper presents a novel heterogeneous spatial traffic modeling which allows statistical adjustment. Simple and non-parameterized, yet sufficiently accurate, measures for capturing the traffic characteristics in space are introduced. Only two statistical parameters related to the UE distribution, namely, the coefficient of variation (the normalized second-moment) of an appropriately defined inter-UE distance measure, and the correlation coefficient (the normalized cross-moment) between UE and BS locations, are adjusted to control the degree of heterogeneity and the bias towards the BS locations, respectively. This model is used in heterogeneous wireless cellular networks (HetNets) to demonstrate the impact of heterogeneous and BS-correlated traffic on the network performance. This network is called HetHetNet since it has two types of heterogeneity: heterogeneity in the infrastructure (supply), and heterogeneity in the spatial traffic distribution (demand).
In @cite_51 , the authors presented an algorithm to create a random inhomogeneous node distribution based on a neighborhood-dependent thinning approach in a homogeneous PPP. The model, however, cannot be used for generating BS-correlated UE patterns, as this is beyond the scope of that model.
{ "cite_N": [ "@cite_51" ], "mid": [ "2157612079" ], "abstract": [ "Most analysis and simulation of wireless systems assumes that the nodes are randomly located, sampled from a uniform distribution. Although in many real-world scenarios the nodes are non-uniformly distributed, the research community lacks a common approach to generate such inhomogeneities. This paper intends to go a step in this direction. We present an algorithm to create a random inhomogeneous node distribution based on a simple neighborhood-dependent thinning of a homogeneous Poisson process. We derive some useful stochastic properties of the resulting distribution (in particular the probability density of the nearest neighbor distance) and offer a reference implementation. Our goal is to enable fellow researchers to easily use inhomogeneous distributions with well-defined stochastic properties." ] }
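The neighborhood-dependent thinning idea from the record above can be sketched as: sample a homogeneous PPP, then retain each point with a probability that depends on how many neighbors it has nearby, so clusters survive and isolated points tend to be removed. The linear retention rule and all parameter values below are simplifying assumptions for illustration, not the cited paper's exact construction.

```python
import math
import random

def sample_poisson(lam: float, rng: random.Random) -> int:
    # Knuth's product-of-uniforms Poisson sampler (fine for modest lam).
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        p *= rng.random()
        k += 1
    return k - 1

def neighborhood_thinning(intensity, side, radius, keep_per_neighbor, seed=0):
    """Homogeneous PPP on [0, side]^2, then keep each point with a
    probability that grows with its neighbor count within `radius`."""
    rng = random.Random(seed)
    n = sample_poisson(intensity * side * side, rng)
    points = [(rng.uniform(0, side), rng.uniform(0, side)) for _ in range(n)]
    kept = []
    for i, (x, y) in enumerate(points):
        neighbors = sum(1 for j, (u, v) in enumerate(points)
                        if j != i and math.hypot(x - u, y - v) < radius)
        # Retention probability increases with local crowding (clamped at 1).
        if rng.random() < min(1.0, keep_per_neighbor * neighbors):
            kept.append((x, y))
    return points, kept

points, kept = neighborhood_thinning(0.5, 10.0, 1.5, 0.5)
```

Because retention depends only on the local neighborhood, the surviving pattern is clustered but carries no information about BS positions, which is exactly the limitation noted in the paragraph above.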
1505.00076
1566257247
A recent approach in modeling and analysis of the supply and demand in heterogeneous wireless cellular networks has been the use of two independent Poisson point processes (PPPs) for the locations of base stations (BSs) and user equipments (UEs). This popular approach has two major shortcomings. First, although the PPP model may be a fitting one for the BS locations, it is less adequate for the UE locations mainly due to the fact that the model is not adjustable (tunable) to represent the severity of the heterogeneity (non-uniformity) in the UE locations. Besides, the independence assumption between the two PPPs does not capture the often-observed correlation between the UE and BS locations. This paper presents a novel heterogeneous spatial traffic modeling which allows statistical adjustment. Simple and non-parameterized, yet sufficiently accurate, measures for capturing the traffic characteristics in space are introduced. Only two statistical parameters related to the UE distribution, namely, the coefficient of variation (the normalized second-moment) of an appropriately defined inter-UE distance measure, and the correlation coefficient (the normalized cross-moment) between UE and BS locations, are adjusted to control the degree of heterogeneity and the bias towards the BS locations, respectively. This model is used in heterogeneous wireless cellular networks (HetNets) to demonstrate the impact of heterogeneous and BS-correlated traffic on the network performance. This network is called HetHetNet since it has two types of heterogeneity: heterogeneity in the infrastructure (supply), and heterogeneity in the spatial traffic distribution (demand).
The authors of @cite_20 proposed a non-uniform UE distribution model. They start with a higher density of BSs and consider a typical UE located at the origin. After selecting the serving BS, they condition on this active link and independently thin the rest of the BS point process so that the resulting density matches the desired density of the actual BSs. This method, however, does not capture situations in which UEs are clustered somewhere other than around the BSs.
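The conditional-thinning step described above can be sketched as follows. This is a toy illustration only: the unit-square window and all densities are made-up values, not the cited paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parameters (illustrative assumptions, not from the cited paper).
lam_initial = 200.0  # density of the initial, denser candidate BS process
lam_target = 50.0    # desired density of the actual BSs

# 1. Draw the dense candidate BS process on the unit square.
n = rng.poisson(lam_initial)
bs = rng.uniform(0.0, 1.0, size=(n, 2))

# 2. The typical UE (placed at the window centre here) associates
#    with its nearest candidate BS.
ue = np.array([0.5, 0.5])
serving = np.argmin(np.linalg.norm(bs - ue, axis=1))

# 3. Condition on this active link: keep the serving BS, and
#    independently thin the remaining BSs so that the retained
#    process has the target density.
keep = rng.uniform(size=n) < lam_target / lam_initial
keep[serving] = True
thinned_bs = bs[keep]
```

Because the serving BS is always retained, the typical UE ends up statistically closer to its serving BS than it would under an independent PPP of the target density.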
{ "cite_N": [ "@cite_20" ], "mid": [ "2042916860" ], "abstract": [ "A recent way to model and analyze downlink cellular networks is by using random spatial models. Assuming user equipment (UE) distribution to be uniform, the analysis is performed at a typical UE located at the origin. At least one shortcoming of this approach is its inability to model non-uniform UE distributions, especially when there is dependence in the UE and the base station (BS) locations. To facilitate analysis in such cases, we propose a new tractable method of sampling UEs by conditionally thinning the BS point process and show that the resulting framework can be used as a tractable generative model to study current capacity-centric deployments, where the UEs are more likely to lie closer to the BSs." ] }
1505.00076
1566257247
A recent approach in modeling and analysis of the supply and demand in heterogeneous wireless cellular networks has been the use of two independent Poisson point processes (PPPs) for the locations of base stations (BSs) and user equipments (UEs). This popular approach has two major shortcomings. First, although the PPP model may be a fitting one for the BS locations, it is less adequate for the UE locations mainly due to the fact that the model is not adjustable (tunable) to represent the severity of the heterogeneity (non-uniformity) in the UE locations. Besides, the independence assumption between the two PPPs does not capture the often-observed correlation between the UE and BS locations. This paper presents a novel heterogeneous spatial traffic modeling which allows statistical adjustment. Simple and non-parameterized, yet sufficiently accurate, measures for capturing the traffic characteristics in space are introduced. Only two statistical parameters related to the UE distribution, namely, the coefficient of variation (the normalized second-moment), of an appropriately defined inter-UE distance measure, and correlation coefficient (the normalized cross-moment) between UE and BS locations, are adjusted to control the degree of heterogeneity and the bias towards the BS locations, respectively. This model is used in heterogeneous wireless cellular networks (HetNets) to demonstrate the impact of heterogeneous and BS-correlated traffic on the network performance. This network is called HetHetNet since it has two types of heterogeneity: heterogeneity in the infrastructure (supply), and heterogeneity in the spatial traffic distribution (demand).
In @cite_13 we proposed new measures for capturing traffic characteristics in the space domain. These measures can be viewed as spatial analogues of well-established time-domain measures such as the inter-arrival time. A Thomas point process was used to generate spatial traffic patterns with the desired characteristics. HetNet scenarios, however, were not investigated in @cite_13 .
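As a rough illustration of such a measure, the CoV of nearest-neighbour distances is one simple inter-point distance statistic that distinguishes a clustered (Thomas) pattern from a uniform one. The exact measure of @cite_13 may differ, and all parameters below are made up:

```python
import numpy as np

rng = np.random.default_rng(1)

def cov_nearest_neighbour(points):
    """CoV (std/mean) of nearest-neighbour distances: one simple
    heterogeneity statistic; larger values indicate more clustering."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = d.min(axis=1)
    return nn.std() / nn.mean()

# Homogeneous PPP on the unit square.
uniform_pts = rng.uniform(size=(rng.poisson(400), 2))

# Thomas process: Poisson parents with Gaussian-scattered daughters.
parents = rng.uniform(size=(rng.poisson(20), 2))
daughters = [p + rng.normal(scale=0.02, size=(rng.poisson(20), 2))
             for p in parents]
thomas_pts = np.concatenate(daughters)
```

For a homogeneous PPP the CoV of nearest-neighbour distances is a fixed constant (about 0.52, ignoring edge effects), while clustered patterns typically yield larger values; this is the sense in which such a CoV can serve as a tunable heterogeneity knob.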
{ "cite_N": [ "@cite_13" ], "mid": [ "2094331586" ], "abstract": [ "Understanding and solving performance-related issues of current and future (5G+) networks requires the availability of realistic, yet simple and manageable, traffic models which capture and regenerate various properties of real traffic with sufficient accuracy and minimum number of parameters. Traffic in wireless cellular networks must be modeled in the space domain as well as the time domain. Modeling traffic in the time domain has been investigated well. However, for modeling the User Equipment (UE) distribution in the space domain, either the unrealistic uniform Poisson model, or some non-adjustable model, or specifc data from operators, is commonly used. In this paper, stochastic geometry is used to explain the similarities of traffic modeling in the time domain and the space domain. It is shown that traffic modeling in the time domain is a special one-dimensional case of traffic modeling in the space domain. Unified and non-parameterized metrics for characterizing the heterogeneity of traffic in the time domain and the space domain are proposed and their equivalence to the inter-arrival time, a well accepted metric in the time domain, is demonstrated. Coefficient of Variation (CoV), the normalized second-order statistic, is suggested as an appropriate statistical property of traffic to be measured. Simulation results show that the proposed metrics capture the properties of traffic more accurately than the existing metrics. Finally, the performance of LTE networks under modeled traffic using the new metrics is illustrated." ] }
1505.00076
1566257247
A recent approach in modeling and analysis of the supply and demand in heterogeneous wireless cellular networks has been the use of two independent Poisson point processes (PPPs) for the locations of base stations (BSs) and user equipments (UEs). This popular approach has two major shortcomings. First, although the PPP model may be a fitting one for the BS locations, it is less adequate for the UE locations mainly due to the fact that the model is not adjustable (tunable) to represent the severity of the heterogeneity (non-uniformity) in the UE locations. Besides, the independence assumption between the two PPPs does not capture the often-observed correlation between the UE and BS locations. This paper presents a novel heterogeneous spatial traffic modeling which allows statistical adjustment. Simple and non-parameterized, yet sufficiently accurate, measures for capturing the traffic characteristics in space are introduced. Only two statistical parameters related to the UE distribution, namely, the coefficient of variation (the normalized second-moment), of an appropriately defined inter-UE distance measure, and correlation coefficient (the normalized cross-moment) between UE and BS locations, are adjusted to control the degree of heterogeneity and the bias towards the BS locations, respectively. This model is used in heterogeneous wireless cellular networks (HetNets) to demonstrate the impact of heterogeneous and BS-correlated traffic on the network performance. This network is called HetHetNet since it has two types of heterogeneity: heterogeneity in the infrastructure (supply), and heterogeneity in the spatial traffic distribution (demand).
In @cite_8 we proposed a novel methodology for the statistical modeling of spatial traffic in wireless cellular networks. The proposed model captures both the cross-correlation between the UE and BS locations and the CoV as defined in @cite_13 . Traffic is generated by a density-based method with two phases: first, a BS-biased non-uniform density function is generated over the entire field; then the desired point pattern is drawn from that density function. It should be noted, however, that generating the density function for all points in the field (as required by the method of @cite_8 ) is computationally intensive. Moreover, the model of @cite_8 is not directly applicable to HetNets because the density function is calculated for a homogeneous macro-only scenario.
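A minimal sketch of the two-phase, density-based idea: first build a BS-biased density, then draw UE locations from it by rejection sampling. The density form (uniform floor plus Gaussian bumps) and all parameters are illustrative assumptions, not the construction of @cite_8 :

```python
import numpy as np

rng = np.random.default_rng(2)

bs = rng.uniform(size=(10, 2))  # hypothetical macro BS locations
sigma = 0.05  # spread of the traffic hotspot around each BS (made up)
rho = 0.7     # weight of the BS-correlated part vs. the uniform part

def ue_density(xy):
    """Unnormalised UE density: uniform floor plus Gaussian bumps at BSs."""
    d2 = ((xy[:, None, :] - bs[None, :, :]) ** 2).sum(axis=-1)
    bumps = np.exp(-d2 / (2 * sigma ** 2)).sum(axis=1)
    return (1 - rho) + rho * bumps

# Phase 2: draw UE locations from the density by rejection sampling.
m_bound = (1 - rho) + rho * len(bs)  # valid upper bound on ue_density
n_target = 500
ues = []
while len(ues) < n_target:
    cand = rng.uniform(size=(1000, 2))
    p = ue_density(cand)
    ues.extend(cand[rng.uniform(size=1000) * m_bound < p])
ues = np.array(ues[:n_target])
```

Raising `rho` concentrates UEs around BSs (more BS-correlation); raising `sigma` flattens the hotspots. The cost of this approach is visible in the acceptance loop: the density must be evaluated at many candidate points, which is the computational burden the text notes.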
{ "cite_N": [ "@cite_13", "@cite_8" ], "mid": [ "2094331586", "1966517505" ], "abstract": [ "Understanding and solving performance-related issues of current and future (5G+) networks requires the availability of realistic, yet simple and manageable, traffic models which capture and regenerate various properties of real traffic with sufficient accuracy and minimum number of parameters. Traffic in wireless cellular networks must be modeled in the space domain as well as the time domain. Modeling traffic in the time domain has been investigated well. However, for modeling the User Equipment (UE) distribution in the space domain, either the unrealistic uniform Poisson model, or some non-adjustable model, or specifc data from operators, is commonly used. In this paper, stochastic geometry is used to explain the similarities of traffic modeling in the time domain and the space domain. It is shown that traffic modeling in the time domain is a special one-dimensional case of traffic modeling in the space domain. Unified and non-parameterized metrics for characterizing the heterogeneity of traffic in the time domain and the space domain are proposed and their equivalence to the inter-arrival time, a well accepted metric in the time domain, is demonstrated. Coefficient of Variation (CoV), the normalized second-order statistic, is suggested as an appropriate statistical property of traffic to be measured. Simulation results show that the proposed metrics capture the properties of traffic more accurately than the existing metrics. Finally, the performance of LTE networks under modeled traffic using the new metrics is illustrated.", "Future generation (5G and beyond) cellular networks have to deal not only with an extreme traffic demand increase, but also an extreme level of heterogeneity in the distribution of that demand in both space and time. Traffic modeling in the time domain has been investigated well in the literature. 
In the space domain, however, there is a lack of statistical models for the heterogeneous User Equipment (UE) distribution beyond the classical Poisson Point Process (PPP) model. In this paper, we introduce a methodology for the generation and analysis of spatial traffic which allows statistical adjustments. Only two parameters, namely, Coefficient of Variation (CoV) and Correlation Coefficient, are adjusted to control the UE distribution heterogeneity and correlation with Base Stations (BSs). The methodology is applied to cellular networks to show the impact of heterogeneous network geometry on network performance." ] }
1505.00076
1566257247
A recent approach in modeling and analysis of the supply and demand in heterogeneous wireless cellular networks has been the use of two independent Poisson point processes (PPPs) for the locations of base stations (BSs) and user equipments (UEs). This popular approach has two major shortcomings. First, although the PPP model may be a fitting one for the BS locations, it is less adequate for the UE locations mainly due to the fact that the model is not adjustable (tunable) to represent the severity of the heterogeneity (non-uniformity) in the UE locations. Besides, the independence assumption between the two PPPs does not capture the often-observed correlation between the UE and BS locations. This paper presents a novel heterogeneous spatial traffic modeling which allows statistical adjustment. Simple and non-parameterized, yet sufficiently accurate, measures for capturing the traffic characteristics in space are introduced. Only two statistical parameters related to the UE distribution, namely, the coefficient of variation (the normalized second-moment), of an appropriately defined inter-UE distance measure, and correlation coefficient (the normalized cross-moment) between UE and BS locations, are adjusted to control the degree of heterogeneity and the bias towards the BS locations, respectively. This model is used in heterogeneous wireless cellular networks (HetNets) to demonstrate the impact of heterogeneous and BS-correlated traffic on the network performance. This network is called HetHetNet since it has two types of heterogeneity: heterogeneity in the infrastructure (supply), and heterogeneity in the spatial traffic distribution (demand).
In this paper, we propose a traffic generation method that improves on that of @cite_8 and is also applicable to HetNet scenarios, and we study the impact of spatially heterogeneous traffic on heterogeneous infrastructure. The proposed method is computationally efficient since it does not require the generation of a density function. Some of the key results of our previous works @cite_13 and @cite_8 are also presented here in a comprehensive and coherent way, with more analytical detail.
{ "cite_N": [ "@cite_13", "@cite_8" ], "mid": [ "2094331586", "1966517505" ], "abstract": [ "Understanding and solving performance-related issues of current and future (5G+) networks requires the availability of realistic, yet simple and manageable, traffic models which capture and regenerate various properties of real traffic with sufficient accuracy and minimum number of parameters. Traffic in wireless cellular networks must be modeled in the space domain as well as the time domain. Modeling traffic in the time domain has been investigated well. However, for modeling the User Equipment (UE) distribution in the space domain, either the unrealistic uniform Poisson model, or some non-adjustable model, or specifc data from operators, is commonly used. In this paper, stochastic geometry is used to explain the similarities of traffic modeling in the time domain and the space domain. It is shown that traffic modeling in the time domain is a special one-dimensional case of traffic modeling in the space domain. Unified and non-parameterized metrics for characterizing the heterogeneity of traffic in the time domain and the space domain are proposed and their equivalence to the inter-arrival time, a well accepted metric in the time domain, is demonstrated. Coefficient of Variation (CoV), the normalized second-order statistic, is suggested as an appropriate statistical property of traffic to be measured. Simulation results show that the proposed metrics capture the properties of traffic more accurately than the existing metrics. Finally, the performance of LTE networks under modeled traffic using the new metrics is illustrated.", "Future generation (5G and beyond) cellular networks have to deal not only with an extreme traffic demand increase, but also an extreme level of heterogeneity in the distribution of that demand in both space and time. Traffic modeling in the time domain has been investigated well in the literature. 
In the space domain, however, there is a lack of statistical models for the heterogeneous User Equipment (UE) distribution beyond the classical Poisson Point Process (PPP) model. In this paper, we introduce a methodology for the generation and analysis of spatial traffic which allows statistical adjustments. Only two parameters, namely, Coefficient of Variation (CoV) and Correlation Coefficient, are adjusted to control the UE distribution heterogeneity and correlation with Base Stations (BSs). The methodology is applied to cellular networks to show the impact of heterogeneous network geometry on network performance." ] }
1505.00297
765524997
In a pursuit-evasion game, a team of pursuers attempt to capture an evader. The players alternate turns, move with equal speed, and have full information about the state of the game. We consider the most restrictive capture condition: a pursuer must become colocated with the evader to win the game. We prove two general results about pursuit-evasion games in topological spaces. First, we show that one pursuer has a winning strategy in any CAT(0) space under this restrictive capture criterion. This complements a result of Alexander, Bishop and Ghrist, who provide a winning strategy for a game with positive capture radius. Second, we consider the game played in a compact domain in Euclidean two-space with piecewise analytic boundary and arbitrary Euler characteristic. We show that three pursuers always have a winning strategy by extending recent work of Bhadauria, Klein, Isler and Suri from polygonal environments to our more general setting.
Pursuit-evasion games have enjoyed a long research history. In the 1930s, Rado posed the Lion and Man game, in which a lion hunts a man in a circular arena. The players move with equal speeds, and the lion wins if it achieves colocation. At first blush, it seems that the lion should be able to capture the man regardless of his evasive strategy. However, Besicovitch showed that when the game is played in continuous time, the man can follow a spiraling path so that the lion gets arbitrarily close but never achieves colocation @cite_7 . When lion and man move in discrete time steps, on the other hand, our intuition prevails: the lion does have a winning strategy @cite_15 .
{ "cite_N": [ "@cite_15", "@cite_7" ], "mid": [ "2062468924", "2022967387" ], "abstract": [ "A pursue-and-evasion game is analyzed, including almost optimal bounds on the number of moves needed to win.", "Frontispiece Preface Foreword 1. Introduction to A Mathematician's Miscellany 2. Mathematics with minimum 'Raw Material' 3. From the mathematical tripos 4. Cross-purposes, unconscious assumptions, howlers, misprints, etc 5. The zoo 6. Ballistics 7. The dilemma of probability theory 8. From Fermat's last theorem to the abolition of capital punishment 9. A mathematical education 10. Review of Ramanujan's collected papers 11. Large numbers 12. Lion and man 13. People 14. Academic life 15. Odds and ends 16. Newton and the attraction of a sphere 17. The discovery of Neptune 18. The Adams-Airy affair 19. The mathematician's art of work." ] }
1505.00297
765524997
In a pursuit-evasion game, a team of pursuers attempt to capture an evader. The players alternate turns, move with equal speed, and have full information about the state of the game. We consider the most restrictive capture condition: a pursuer must become colocated with the evader to win the game. We prove two general results about pursuit-evasion games in topological spaces. First, we show that one pursuer has a winning strategy in any CAT(0) space under this restrictive capture criterion. This complements a result of Alexander, Bishop and Ghrist, who provide a winning strategy for a game with positive capture radius. Second, we consider the game played in a compact domain in Euclidean two-space with piecewise analytic boundary and arbitrary Euler characteristic. We show that three pursuers always have a winning strategy by extending recent work of Bhadauria, Klein, Isler and Suri from polygonal environments to our more general setting.
The interdisciplinary literature on pursuit-evasion games spans a range of settings and variations. Pursuit games have been studied in many environments, including graphs, polygonal environments, and topological spaces. Researchers have considered motion constraints such as speed differentials between the players, constraints on acceleration, and energy budgets. As for sensing models, the players may have full information about the positions of the other players, or their information may be incomplete or imperfect. Typically, the capture condition requires colocation, a proximity threshold, or sensory visibility (such as an unobstructed view of the evader). For an overview of pursuit-evasion on graphs, see the monograph @cite_12 . The papers @cite_2 and @cite_22 provide a nice introduction to pursuit in the polygonal setting.
{ "cite_N": [ "@cite_22", "@cite_12", "@cite_2" ], "mid": [ "2066580552", "280278083", "40650588" ], "abstract": [ "We present a framework for solving pursuit evasion games in Rn for the case of N pursuers and a single evader. We give two algorithms that capture the evader in a number of steps linear in the original pursuer-evader distances. We also show how to generalize our results to a convex playing field with finitely many hyperplane boundaries that serve as obstacles.", "This book is the first and only one of its kind on the topic of Cops and Robbers games, and more generally, on the field of vertex pursuit games on graphs. The book is written in a lively and highly readable fashion, which should appeal to both senior undergraduates and experts in the field (and everyone in between). One of the main goals of the book is to bring together the key results in the field; as such, it presents structural, probabilistic, and algorithmic results on Cops and Robbers games. Several recent and new results are discussed, along with a comprehensive set of references. The book is suitable for self-study or as a textbook, owing in part to the over 200 exercises. The reader will gain insight into all the main directions of research in the field and will be exposed to a number of open problems.", "This paper surveys recent results in pursuit-evasion and autonomous search relevant to applications in mobile robotics. We provide a taxonomy of search problems that highlights the differences resulting from varying assumptions on the searchers, targets, and the environment. We then list a number of fundamental results in the areas of pursuit-evasion and probabilistic search, and we discuss field implementations on mobile robotic systems. In addition, we highlight current open problems in the area and explore avenues for future work." ] }
1505.00297
765524997
In a pursuit-evasion game, a team of pursuers attempt to capture an evader. The players alternate turns, move with equal speed, and have full information about the state of the game. We consider the most restrictive capture condition: a pursuer must become colocated with the evader to win the game. We prove two general results about pursuit-evasion games in topological spaces. First, we show that one pursuer has a winning strategy in any CAT(0) space under this restrictive capture criterion. This complements a result of Alexander, Bishop and Ghrist, who provide a winning strategy for a game with positive capture radius. Second, we consider the game played in a compact domain in Euclidean two-space with piecewise analytic boundary and arbitrary Euler characteristic. We show that three pursuers always have a winning strategy by extending recent work of Bhadauria, Klein, Isler and Suri from polygonal environments to our more general setting.
The classic paper of Aigner and Fromme @cite_1 initiated the study of multiple pursuers versus a single evader on a graph. In this turn-based game, agents can move to adjacent vertices, and the cops win if one of them becomes colocated with the robber. This paper introduced the cop number of a graph: the minimum number of pursuers (cops) needed to catch the evader (robber). Aigner and Fromme proved that the cop number of any planar graph is at most 3. This bound is tight, as the dodecahedron graph requires three cops. At a high level, their winning pursuer strategy proceeds as follows. Two cops guard distinct @math -paths, where @math are vertices of the graph @math . This restricts the robber's movement to a subgraph of @math . The third pursuer then guards another @math -path, chosen so that (1) the robber's movement is further restricted, and (2) one of the other cops no longer needs to guard its path. This frees that cop to continue the pursuit. The process repeats until the evader is caught.
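The cited abstract also mentions the algorithmic characterization of Nowakowski and Winkler for the one-cop game: a finite graph is cop-win exactly when it is dismantlable, i.e. it can be reduced to a single vertex by repeatedly deleting a dominated vertex ("corner"). Unlike the three-cop planar strategy above, this simpler check is easy to implement; the example graphs below are our own illustrations:

```python
def is_cop_win(adj):
    """Nowakowski-Winkler test: a finite graph is cop-win iff it can be
    reduced to one vertex by repeatedly deleting a vertex u whose closed
    neighbourhood is contained in that of some other vertex v."""
    g = {u: set(nbrs) | {u} for u, nbrs in adj.items()}  # closed nbhds
    while len(g) > 1:
        corner = next((u for u in g for v in g
                       if u != v and g[u] <= g[v]), None)
        if corner is None:
            return False  # no corner left: not dismantlable
        del g[corner]
        for nbhd in g.values():
            nbhd.discard(corner)
    return True

# A path is cop-win; the 4-cycle (the smallest graph needing two cops)
# is not, since no vertex dominates another.
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
```

This one-cop test only illustrates the flavour of such characterizations; deciding whether three cops suffice on a given graph is a much harder problem than the planar guarantee discussed above.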
{ "cite_N": [ "@cite_1" ], "mid": [ "2046635105" ], "abstract": [ "Let G be a finite connected graph. Two players, called cop C and robber R, play a game on G according to the following rules. First C then R occupy some vertex of G. After that they move alternately along edges of G. The cop C wins if he succeeds in putting himself on top of the robber R, otherwise R wins. We review an algorithmic characterization and structural description due to Nowakowski and Winkler. Then we consider the general situation where n cops chase the robber. It is shown that there are graphs on which arbitrarily many cops are needed to catch the robber. In contrast to this result, we prove that for planar graphs 3 cops always suffice to win." ] }
1505.00297
765524997
In a pursuit-evasion game, a team of pursuers attempt to capture an evader. The players alternate turns, move with equal speed, and have full information about the state of the game. We consider the most restrictive capture condition: a pursuer must become colocated with the evader to win the game. We prove two general results about pursuit-evasion games in topological spaces. First, we show that one pursuer has a winning strategy in any CAT(0) space under this restrictive capture criterion. This complements a result of Alexander, Bishop and Ghrist, who provide a winning strategy for a game with positive capture radius. Second, we consider the game played in a compact domain in Euclidean two-space with piecewise analytic boundary and arbitrary Euler characteristic. We show that three pursuers always have a winning strategy by extending recent work of Bhadauria, Klein, Isler and Suri from polygonal environments to our more general setting.
More recently, an analogous result was proven by Bhadauria, Klein, Isler and Suri @cite_9 for pursuit-evasion games in a two-dimensional polygonal environment with polygonal holes. In this turn-based game, an agent can move to any point within unit distance of its current location. Like Aigner and Fromme, they use colocation as their capture criterion, and they prove that three pursuers are sufficient for pursuit-evasion in this setting and that this bound is tight. Their pursuer strategy is inspired by the Aigner and Fromme strategy for planar graphs: two pursuers guard paths that confine the evader while the third pursuer takes control of another path that further restricts the evader's movement. Of course, the details of the pursuit and the technical proofs are quite different from the graph case. Their proofs make heavy use of the polygonal nature of the environment, both to find the paths to guard and to guarantee that the pursuit finishes in finite time.
{ "cite_N": [ "@cite_9" ], "mid": [ "2029756211" ], "abstract": [ "Suppose an unpredictable evader is free to move around in a polygonal environment of arbitrary complexity that is under full camera surveillance. How many pursuers, each with the same maximum speed as the evader, are necessary and sufficient to guarantee a successful capture of the evader? The pursuers always know the evader's current position through a camera network, but need to physically reach the evader to capture it. We allow the evader knowledge of the current positions of all the pursuers as well-this accords with the standard worst-case analysis model, but also models a practical situation where the evader has 'hacked' into the surveillance system. Our main result is to prove that three pursuers are always sufficient and sometimes necessary to capture the evader. The bound is independent of the number of vertices or holes in the polygonal environment." ] }
1505.00297
765524997
In a pursuit-evasion game, a team of pursuers attempt to capture an evader. The players alternate turns, move with equal speed, and have full information about the state of the game. We consider the most restrictive capture condition: a pursuer must become colocated with the evader to win the game. We prove two general results about pursuit-evasion games in topological spaces. First, we show that one pursuer has a winning strategy in any CAT(0) space under this restrictive capture criterion. This complements a result of Alexander, Bishop and Ghrist, who provide a winning strategy for a game with positive capture radius. Second, we consider the game played in a compact domain in Euclidean two-space with piecewise analytic boundary and arbitrary Euler characteristic. We show that three pursuers always have a winning strategy by extending recent work of Bhadauria, Klein, Isler and Suri from polygonal environments to our more general setting.
Just as the proofs in @cite_9 were inspired by Aigner and Fromme, our proof of Theorem is inspired by those for the polygonal environment. The authors of @cite_9 actually give two different winning strategies for three pursuers. At a high level, these strategies progress in the same way, but the tactics for choosing paths and for guarding them are different. Herein, we adapt their shortest-path strategy to our more general setting. The topological environment introduces a distinctive set of challenges to overcome. In particular, we do not have a finite set of polygonal vertices to use as a backbone for our guarded paths; instead, we rely on homotopy classes to differentiate between the paths to guard. Looking beyond the high-level structure of our pursuer strategy, the arguments (and their technical details) in this paper are wholly distinct from those found in @cite_9 , and our result applies to a much broader class of environments.
{ "cite_N": [ "@cite_9" ], "mid": [ "2029756211" ], "abstract": [ "Suppose an unpredictable evader is free to move around in a polygonal environment of arbitrary complexity that is under full camera surveillance. How many pursuers, each with the same maximum speed as the evader, are necessary and sufficient to guarantee a successful capture of the evader? The pursuers always know the evader's current position through a camera network, but need to physically reach the evader to capture it. We allow the evader knowledge of the current positions of all the pursuers as well-this accords with the standard worst-case analysis model, but also models a practical situation where the evader has 'hacked' into the surveillance system. Our main result is to prove that three pursuers are always sufficient and sometimes necessary to capture the evader. The bound is independent of the number of vertices or holes in the polygonal environment." ] }
1505.00113
2135958048
The @math 'th frequency moment of a sequence of integers is defined as @math , where @math is the number of times that @math occurs in the sequence. Here we study the quantum complexity of approximately computing the frequency moments in two settings. In the query complexity setting, we wish to minimise the number of queries to the input used to approximate @math up to relative error @math . We give quantum algorithms which outperform the best possible classical algorithms up to quadratically. In the multiple-pass streaming setting, we see the elements of the input one at a time, and seek to minimise the amount of storage space, or passes over the data, used to approximate @math . We describe quantum algorithms for @math , @math and @math in this model which outperform the best possible classical algorithms almost quadratically.
( @math ) Flajolet and Martin gave a single-pass streaming algorithm which uses @math bits of space and computes @math up to a constant factor @cite_15 . Alon, Matias and Szegedy improved this by replacing the randomness used in the Flajolet-Martin algorithm with a family of simple hash functions @cite_35 . Bar-Yossef et al. gave several different algorithms for approximating @math up to a @math factor, using as little as @math space @cite_42 . Kane, Nelson and Woodruff completed this line of research by giving a single-pass streaming algorithm which approximates @math using @math space @cite_17 . This is optimal for single-pass streaming algorithms: a space lower bound of @math was shown by Alon, Matias and Szegedy @cite_35 , and a lower bound of @math was shown by Woodruff @cite_37 . The latter was generalised to an @math lower bound for @math -pass streaming algorithms by Chakrabarti and Regev @cite_39 .
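The Flajolet-Martin idea behind this line of work is easy to sketch: hash each stream element and record the maximum number of trailing zero bits seen, since roughly 2^z distinct elements are needed before a hash value with z trailing zeros appears. The toy version below averages over several hash functions; the hash family and constants are illustrative choices, far from the optimal algorithm of @cite_17 :

```python
import random

def fm_estimate(stream, num_hashes=64, seed=0):
    """Toy Flajolet-Martin F_0 estimator: for each of several random
    affine hash functions, track the maximum number of trailing zero
    bits among the hashed elements, then average the exponents."""
    rng = random.Random(seed)
    p = (1 << 61) - 1  # a Mersenne prime for the affine hashes
    hashes = [(rng.randrange(1, p), rng.randrange(p))
              for _ in range(num_hashes)]
    maxz = [0] * num_hashes
    for x in stream:
        for i, (a, b) in enumerate(hashes):
            h = (a * hash(x) + b) % p
            z = (h & -h).bit_length() - 1 if h else 61  # trailing zeros
            if z > maxz[i]:
                maxz[i] = z
    # 2^(average max trailing zeros) estimates F_0 up to a constant.
    return 2 ** (sum(maxz) / num_hashes)
```

On a stream with 1000 distinct elements, this typically lands within a small constant factor of the truth; the real contributions surveyed above are about achieving (1 ± epsilon) accuracy in provably optimal space.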
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_42", "@cite_39", "@cite_15", "@cite_17" ], "mid": [ "2064379477", "2160681854", "1785933978", "1985526537", "2025051251", "2103126020" ], "abstract": [ "The frequency moments of a sequence containing mi elements of type i, for 1 i n, are the numbers Fk = P n=1 m k . We consider the space complexity of randomized algorithms that approximate the numbers Fk, when the elements of the sequence are given one by one and cannot be stored. Surprisingly, it turns out that the numbers F0;F1 and F2 can be approximated in logarithmic space, whereas the approximation of Fk for k 6 requires n (1) space. Applications to data bases are mentioned as well.", "We prove that any one-pass streaming algorithm which (e, Δ)-approximates the kth frequency moment F k , for any real k ≠ 1 and any e = Ω(1 √m), must use Ω(1 e²) bits of space, where m is the size of the universe. This is optimal in terms of e, resolves the open questions of Bar- in [3, 4], and extends the Ω(1 e²) lower bound for F 0 in [11] to much smaller e by applying novel techniques. Along the way we lower bound the one-way communication complexity of approximating the Hamming distance and the number of bipartite graphs with minimum maximum degree constraints.", "We present three algorithms to count the number of distinct elements in a data stream to within a factor of 1 ± ?. Our algorithms improve upon known algorithms for this problem, and offer a spectrum of time space tradeoffs.", "We prove an optimal @math lower bound on the randomized communication complexity of the much-studied gap-hamming-distance problem. As a consequence, we obtain essentially optimal multipass space lower bounds in the data stream model for a number of fundamental problems, including the estimation of frequency moments. The gap-hamming-distance problem is a communication problem, wherein Alice and Bob receive @math -bit strings @math and @math , respectively. 
They are promised that the Hamming distance between @math and @math is either at least @math or at most @math , and their goal is to decide which of these is the case. Since the formal presentation of the problem by Indyk and Woodruff [Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science, 2003, pp. 283--289], it had been conjectured that the naive protocol, which uses @math bits of communication, is asymptotically optimal. The conjecture was shown to be true in several special cases, e.g., when the communication is de...", "This paper introduces a class of probabilistic counting algorithms with which one can estimate the number of distinct elements in a large collection of data (typically a large file stored on disk) in a single pass using only a small additional storage (typically less than a hundred binary words) and only a few operations per element scanned. The algorithms are based on statistical observations made on bits of hashed values of records. They are by construction totally insensitive to the replicative structure of elements in the file; they can be used in the context of distributed systems without any degradation of performances and prove especially useful in the context of data bases query optimisation.", "We give the first optimal algorithm for estimating the number of distinct elements in a data stream, closing a long line of theoretical research on this problem begun by Flajolet and Martin in their seminal paper in FOCS 1983. This problem has applications to query optimization, Internet routing, network topology, and data mining. For a stream of indices in 1,...,n , our algorithm computes a (1 ± e)-approximation using an optimal O(1 e-2 + log(n)) bits of space with 2 3 success probability, where 0 We also give an algorithm to estimate the Hamming norm of a stream, a generalization of the number of distinct elements, which is useful in data cleaning, packet tracing, and database auditing. 
Our algorithm uses nearly optimal space, and has optimal O(1) update and reporting times." ] }
1505.00113
2135958048
The @math 'th frequency moment of a sequence of integers is defined as @math , where @math is the number of times that @math occurs in the sequence. Here we study the quantum complexity of approximately computing the frequency moments in two settings. In the query complexity setting, we wish to minimise the number of queries to the input used to approximate @math up to relative error @math . We give quantum algorithms which outperform the best possible classical algorithms up to quadratically. In the multiple-pass streaming setting, we see the elements of the input one at a time, and seek to minimise the amount of storage space, or passes over the data, used to approximate @math . We describe quantum algorithms for @math , @math and @math in this model which outperform the best possible classical algorithms almost quadratically.
( @math ) Alon, Matias and Szegedy gave an @math single-pass streaming algorithm @cite_35 , and also showed an @math lower bound. An @math lower bound for single-pass streaming algorithms was proven by Woodruff @cite_37 , which was similarly extended to an @math lower bound for @math -pass streaming algorithms by Chakrabarti and Regev @cite_39 .
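The @math algorithm of @cite_35 admits a very short sketch. The version below (function name `ams_f2` is illustrative) draws fully independent random signs per universe element for clarity, whereas the paper only needs 4-wise independent hash functions to stay within @math space; it then averages the repetitions instead of taking the paper's median-of-means.

```python
import random

def ams_f2(stream, universe, reps=200, seed=1):
    """AMS sketch for the second frequency moment F_2 = sum_i m_i^2:
    for each repetition draw a random sign s(i) in {-1, +1} for every
    universe element, maintain Z = sum_j s(a_j) over the stream, and use
    Z^2 as an unbiased estimate of F_2; averaging repetitions cuts the
    variance."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(reps):
        signs = {i: rng.choice((-1, 1)) for i in universe}
        z = sum(signs[x] for x in stream)
        estimates.append(z * z)
    return sum(estimates) / reps
```

Unbiasedness is immediate: the cross terms @math vanish in expectation, leaving exactly the sum of squared frequencies.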
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_39" ], "mid": [ "2064379477", "2160681854", "1985526537" ], "abstract": [ "The frequency moments of a sequence containing mi elements of type i, for 1 i n, are the numbers Fk = P n=1 m k . We consider the space complexity of randomized algorithms that approximate the numbers Fk, when the elements of the sequence are given one by one and cannot be stored. Surprisingly, it turns out that the numbers F0;F1 and F2 can be approximated in logarithmic space, whereas the approximation of Fk for k 6 requires n (1) space. Applications to data bases are mentioned as well.", "We prove that any one-pass streaming algorithm which (e, Δ)-approximates the kth frequency moment F k , for any real k ≠ 1 and any e = Ω(1 √m), must use Ω(1 e²) bits of space, where m is the size of the universe. This is optimal in terms of e, resolves the open questions of Bar- in [3, 4], and extends the Ω(1 e²) lower bound for F 0 in [11] to much smaller e by applying novel techniques. Along the way we lower bound the one-way communication complexity of approximating the Hamming distance and the number of bipartite graphs with minimum maximum degree constraints.", "We prove an optimal @math lower bound on the randomized communication complexity of the much-studied gap-hamming-distance problem. As a consequence, we obtain essentially optimal multipass space lower bounds in the data stream model for a number of fundamental problems, including the estimation of frequency moments. The gap-hamming-distance problem is a communication problem, wherein Alice and Bob receive @math -bit strings @math and @math , respectively. They are promised that the Hamming distance between @math and @math is either at least @math or at most @math , and their goal is to decide which of these is the case. Since the formal presentation of the problem by Indyk and Woodruff [Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science, 2003, pp. 
283--289], it had been conjectured that the naive protocol, which uses @math bits of communication, is asymptotically optimal. The conjecture was shown to be true in several special cases, e.g., when the communication is de..." ] }
1505.00113
2135958048
The @math 'th frequency moment of a sequence of integers is defined as @math , where @math is the number of times that @math occurs in the sequence. Here we study the quantum complexity of approximately computing the frequency moments in two settings. In the query complexity setting, we wish to minimise the number of queries to the input used to approximate @math up to relative error @math . We give quantum algorithms which outperform the best possible classical algorithms up to quadratically. In the multiple-pass streaming setting, we see the elements of the input one at a time, and seek to minimise the amount of storage space, or passes over the data, used to approximate @math . We describe quantum algorithms for @math , @math and @math in this model which outperform the best possible classical algorithms almost quadratically.
( @math , @math ) Alon, Matias and Szegedy gave single-pass streaming algorithms using space @math @cite_35 . An almost-optimal @math algorithm was later given by Indyk and Woodruff for any @math @cite_11 . This was subsequently simplified, with an improved dependence on @math @cite_22 . Very recently, an @math algorithm for @math and @math was given @cite_18 . This effectively matches the tightest known general space lower bound on @math -pass streaming algorithms, @math shown by Woodruff and Zhang @cite_32 .
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_22", "@cite_32", "@cite_11" ], "mid": [ "2064379477", "", "2075567379", "2950318219", "2069414131" ], "abstract": [ "The frequency moments of a sequence containing mi elements of type i, for 1 i n, are the numbers Fk = P n=1 m k . We consider the space complexity of randomized algorithms that approximate the numbers Fk, when the elements of the sequence are given one by one and cannot be stored. Surprisingly, it turns out that the numbers F0;F1 and F2 can be approximated in logarithmic space, whereas the approximation of Fk for k 6 requires n (1) space. Applications to data bases are mentioned as well.", "", "The problem of estimating the kth frequency moment F k over a data stream by looking at the items exactly once as they arrive was posed in [1, 2]. A succession of algorithms have been proposed for this problem [1, 2, 6, 8, 7]. Recently, Indyk and Woodruff [11] have presented the first algorithm for estimating F k , for k > 2, using space O(n1-2 k), matching the space lower bound (up to poly-logarithmic factors) for this problem [1, 2, 3, 4, 13] (n is the number of distinct items occurring in the stream.) In this paper, we present a simpler 1-pass algorithm for estimating F k.", "We resolve several fundamental questions in the area of distributed functional monitoring, initiated by Cormode, Muthukrishnan, and Yi (SODA, 2008). In this model there are @math sites each tracking their input and communicating with a central coordinator that continuously maintain an approximate output to a function @math computed over the union of the inputs. The goal is to minimize the communication. We show the randomized communication complexity of estimating the number of distinct elements up to a @math factor is @math , improving the previous @math bound and matching known upper bounds up to a logarithmic factor. For the @math -th frequency moment @math , @math , we improve the previous @math communication bound to @math . 
We obtain similar improvements for heavy hitters, empirical entropy, and other problems. We also show that we can estimate @math , for any @math , using @math communication. This greatly improves upon the previous @math bound of Cormode, Muthukrishnan, and Yi for general @math , and their @math bound for @math . For @math , our bound resolves their main open question. Our lower bounds are based on new direct sum theorems for approximate majority, and yield significant improvements to problems in the data stream model, improving the bound for estimating @math in @math passes from @math to @math , giving the first bound for estimating @math in @math passes of @math bits of space that does not use the gap-hamming problem.", "We give a 1-pass O(m1-2⁄k)-space algorithm for computing the k-th frequency moment of a data stream for any real k > 2. Together with the lower bounds of [1, 2, 4], this resolves the main problem left open by in 1996 [1]. Our algorithm also works for streams with deletions and thus gives an O(m 1-2⁄p) space algorithm for the L p difference problem for any p > 2. This essentially matches the known Ω(m1-2⁄p-o(1)) lower bound of [12, 2]. Finally the update time of our algorithms is O(1)." ] }
1505.00113
2135958048
The @math 'th frequency moment of a sequence of integers is defined as @math , where @math is the number of times that @math occurs in the sequence. Here we study the quantum complexity of approximately computing the frequency moments in two settings. In the query complexity setting, we wish to minimise the number of queries to the input used to approximate @math up to relative error @math . We give quantum algorithms which outperform the best possible classical algorithms up to quadratically. In the multiple-pass streaming setting, we see the elements of the input one at a time, and seek to minimise the amount of storage space, or passes over the data, used to approximate @math . We describe quantum algorithms for @math , @math and @math in this model which outperform the best possible classical algorithms almost quadratically.
( @math ) Alon, Matias and Szegedy showed an @math space lower bound @cite_35 , even for multiple-pass streaming algorithms with constant @math , by a reduction from the communication complexity of Disjointness. Near-optimal time-space tradeoffs for the related problem of exactly computing frequency moments over sliding windows were proven by Beame, Clifford and Machmouchi @cite_34 .
{ "cite_N": [ "@cite_35", "@cite_34" ], "mid": [ "2064379477", "2952056844" ], "abstract": [ "The frequency moments of a sequence containing mi elements of type i, for 1 i n, are the numbers Fk = P n=1 m k . We consider the space complexity of randomized algorithms that approximate the numbers Fk, when the elements of the sequence are given one by one and cannot be stored. Surprisingly, it turns out that the numbers F0;F1 and F2 can be approximated in logarithmic space, whereas the approximation of Fk for k 6 requires n (1) space. Applications to data bases are mentioned as well.", "We derive new time-space tradeoff lower bounds and algorithms for exactly computing statistics of input data, including frequency moments, element distinctness, and order statistics, that are simple to calculate for sorted data. We develop a randomized algorithm for the element distinctness problem whose time T and space S satisfy T in O (n^ 3 2 S^ 1 2 ), smaller than previous lower bounds for comparison-based algorithms, showing that element distinctness is strictly easier than sorting for randomized branching programs. This algorithm is based on a new time and space efficient algorithm for finding all collisions of a function f from a finite set to itself that are reachable by iterating f from a given set of starting points. We further show that our element distinctness algorithm can be extended at only a polylogarithmic factor cost to solve the element distinctness problem over sliding windows, where the task is to take an input of length 2n-1 and produce an output for each window of length n, giving n outputs in total. In contrast, we show a time-space tradeoff lower bound of T in Omega(n^2 S) for randomized branching programs to compute the number of distinct elements over sliding windows. The same lower bound holds for computing the low-order bit of F_0 and computing any frequency moment F_k, k neq 1. 
This shows that those frequency moments and the decision problem F_0 mod 2 are strictly harder than element distinctness. We complement this lower bound with a T in O(n^2 S) comparison-based deterministic RAM algorithm for exactly computing F_k over sliding windows, nearly matching both our lower bound for the sliding-window version and the comparison-based lower bounds for the single-window version. We further exhibit a quantum algorithm for F_0 over sliding windows with T in O(n^ 3 2 S^ 1 2 ). Finally, we consider the computations of order statistics over sliding windows." ] }
1505.00113
2135958048
The @math 'th frequency moment of a sequence of integers is defined as @math , where @math is the number of times that @math occurs in the sequence. Here we study the quantum complexity of approximately computing the frequency moments in two settings. In the query complexity setting, we wish to minimise the number of queries to the input used to approximate @math up to relative error @math . We give quantum algorithms which outperform the best possible classical algorithms up to quadratically. In the multiple-pass streaming setting, we see the elements of the input one at a time, and seek to minimise the amount of storage space, or passes over the data, used to approximate @math . We describe quantum algorithms for @math , @math and @math in this model which outperform the best possible classical algorithms almost quadratically.
The classical query complexity of approximating the frequency moments has also been studied, under the name of sample complexity. @cite_29 gave a lower bound of @math queries for approximating @math . For any @math , Bar-Yossef @cite_41 showed a lower bound of @math , and a nearly matching upper bound (for @math , @math ) of @math . Very recently, the lower bound has been improved to a tight @math @cite_13 .
{ "cite_N": [ "@cite_41", "@cite_29", "@cite_13" ], "mid": [ "69989483", "2058991275", "2205170224" ], "abstract": [ "Numerous massive data sets, ranging from flows of Internet traffic to logs of supermarket transactions, have emerged during the past few years. Their overwhelming size and the typically restricted access to them call for new computational models. This thesis studies three such models: sampling computations, data stream computations, and sketch computations. While most of the previous work focused on designing algorithms in the new models, this thesis revolves around the limitations of the models. We develop a suite of lower bound techniques that characterize the complexity of functions in these models, indicating which problems can be solved efficiently in them. We derive specific bounds for a multitude of practical problems, arising from applications in database, networking, and information retrieval, such as frequency statistics, selection functions, statistical moments, and distance estimation. We present general, powerful, and easy to use lower bound techniques for the sampling model. The techniques apply to all functions and address both oblivious and adaptive sampling. They frequently produce optimal bounds for a wide range of functions. They are stated in terms of new combinatorial and statistical properties of functions, which are easy to calculate. We obtain lower bounds for the data stream and sketch models through one-way and simultaneous communication complexity. We develop lower bounds for the latter via a new information-theoretic view of communication complexity. A highlight of this work is an optimal simultaneous communication complexity lower bound for the important multi-party set-disjointness problem. Finally, we present a powerful method for proving lower bounds for general communication complexity. 
The method is based on a direct sum property of a new measure of complexity for communication complexity protocols and on a novel statistical view of communication complexity. We use the technique to obtain improved communication complexity and data stream lower bounds for several problems, including multi-party set-disjointness, frequency moments, and Lp distance estimation. These results solve open problems of Alon, Matias, and Szegedy and of Saks and Sun.", "We consider the problem of estimating the number of distinct values in a column of a table. For large tables without an index on the column, random sampling appears to be the only scalable approach for estimating the number of distinct values. We establish a powerful negative result stating that no estimator can guarantee small error across all input distributions, unless it examines a large fraction of the input data. In fact, any estimator must incur a significant error on at least some of a natural class of distributions. We then provide a new estimator which is provably optimal, in that its error is guaranteed to essentially match our negative result. A drawback of this estimator is that while its worst-case error is reasonable, it does not necessarily give the best possible error bound on any given distribution. Therefore, we develop heuristic estimators that are optimized for a class of typical input distributions. While these estimators lack strong guarantees on distribution-independent worst-case error, our extensive empirical comparison indicate their effectiveness both on real data sets and on synthetic data sets.", "It was recently shown that estimating the Shannon entropy @math of a discrete @math -symbol distribution @math requires @math samples, a number that grows near-linearly in the support size. In many applications @math can be replaced by the more general R 'enyi entropy of order @math , @math . 
We determine the number of samples needed to estimate @math for all @math , showing that @math requires a near-linear @math samples, but, perhaps surprisingly, integer @math requires only @math samples. Furthermore, developing on a recently established connection between polynomial approximation and estimation of additive functions of the form @math , we reduce the sample complexity for noninteger values of @math by a factor of @math compared to the empirical estimator. The estimators achieving these bounds are simple and run in time linear in the number of samples. Our lower bounds provide explicit constructions of distributions with different R 'enyi entropies that are hard to distinguish." ] }
1505.00113
2135958048
The @math 'th frequency moment of a sequence of integers is defined as @math , where @math is the number of times that @math occurs in the sequence. Here we study the quantum complexity of approximately computing the frequency moments in two settings. In the query complexity setting, we wish to minimise the number of queries to the input used to approximate @math up to relative error @math . We give quantum algorithms which outperform the best possible classical algorithms up to quadratically. In the multiple-pass streaming setting, we see the elements of the input one at a time, and seek to minimise the amount of storage space, or passes over the data, used to approximate @math . We describe quantum algorithms for @math , @math and @math in this model which outperform the best possible classical algorithms almost quadratically.
Very recent independent work @cite_7 has considered a problem related to approximating @math : testing the image size of a function. The quantum algorithm of @cite_7 is based in the setting of property testing, and has subtly different parameters to the algorithm for @math presented here. Given oracle access to a function @math , their algorithm distinguishes between two cases: a) the image of @math is of size at most @math ; b) an @math fraction of the output values of @math need to be changed to reduce its image to size at most @math . The algorithm uses @math quantum queries.
{ "cite_N": [ "@cite_7" ], "mid": [ "1465613978" ], "abstract": [ "In the k-junta testing problem, a tester has to efficiently decide whether a given function f: 0, 1 n → 0, 1 is a k-junta (i.e., depends on at most fc of its input bits) or is e-far from any k-junta. Our main result is a quantum algorithm for this problem with query complexity O([EQUATION]) and time complexity O(n[EQUATION]). This quadratically improves over the query complexity of the previous best quantum junta tester, due to Atici and Servedio. Our tester is based on a new quantum algorithm for a gapped version of the combinatorial group testing problem, with an up to quartic improvement over the query complexity of the best classical algorithm. For our upper bound on the time complexity we give a near-linear time implementation of a shallow variant of the quantum Fourier transform over the symmetric group, similar to the Schur-Weyl transform. We also prove a lower bound of Ω(k1 3) queries for junta-testing (for constant e)." ] }
1505.00161
2950199579
Learning representations for semantic relations is important for various tasks such as analogy detection, relational search, and relation classification. Although there have been several proposals for learning representations for individual words, learning word representations that explicitly capture the semantic relations between words remains under developed. We propose an unsupervised method for learning vector representations for words such that the learnt representations are sensitive to the semantic relations that exist between two words. First, we extract lexical patterns from the co-occurrence contexts of two words in a corpus to represent the semantic relations that exist between those two words. Second, we represent a lexical pattern as the weighted sum of the representations of the words that co-occur with that lexical pattern. Third, we train a binary classifier to detect relationally similar vs. non-similar lexical pattern pairs. The proposed method is unsupervised in the sense that the lexical pattern pairs we use as train data are automatically sampled from a corpus, without requiring any manual intervention. Our proposed method statistically significantly outperforms the current state-of-the-art word representations on three benchmark datasets for proportional analogy detection, demonstrating its ability to accurately capture the semantic relations among words.
Representing words using vectors (or tensors in general) is an essential task in text processing. For example, in distributional semantics @cite_20 , a word @math is represented by a vector that contains other words that co-occur with @math in a corpus. Numerous methods for selecting co-occurrence contexts (e.g. proximity-based windows, dependency relations), and word association measures (e.g. pointwise mutual information (PMI), log-likelihood ratio (LLR), local mutual information (LMI)) have been proposed @cite_3 . Despite the successful applications of co-occurrence counting-based distributional word representations, their high dimensionality and sparsity is often problematic when applied in NLP tasks. Consequently, further post-processing such as dimensionality reduction, and feature selection is often required when using distributional word representations.
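A minimal sketch of the counting-based pipeline just described, using a proximity window and PMI weighting (the function name `pmi_vectors` and the symmetric-window counting scheme are illustrative choices, not a specific method from @cite_3 ):

```python
import math
from collections import Counter

def pmi_vectors(sentences, window=2):
    """Build distributional word vectors: count (word, context) pairs within
    a fixed window, then weight each cell by pointwise mutual information,
    PMI(w, c) = log[ P(w, c) / (P(w) P(c)) ], with the probabilities
    estimated from the co-occurrence counts themselves."""
    cooc = Counter()
    for sent in sentences:
        for i, w in enumerate(sent):
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if j != i:
                    cooc[(w, sent[j])] += 1
    total = sum(cooc.values())
    # Marginal counts per target word (row) and per context word (column).
    row, col = Counter(), Counter()
    for (w, c), n in cooc.items():
        row[w] += n
        col[c] += n
    vectors = {}
    for (w, c), n in cooc.items():
        vectors.setdefault(w, {})[c] = math.log(n * total / (row[w] * col[c]))
    return vectors
```

The resulting vectors are as high-dimensional and sparse as the vocabulary of observed contexts, which is precisely the drawback that motivates the dimensionality-reduction step mentioned above.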
{ "cite_N": [ "@cite_3", "@cite_20" ], "mid": [ "1662133657", "2128870637" ], "abstract": [ "Computers understand very little of the meaning of human language. This profoundly limits our ability to give instructions to computers, the ability of computers to explain their actions to us, and the ability of computers to analyse and process text. Vector space models (VSMs) of semantics are beginning to address these limits. This paper surveys the use of VSMs for semantic processing of text. We organize the literature on VSMs according to the structure of the matrix in a VSM. There are currently three broad classes of VSMs, based on term-document, word-context, and pair-pattern matrices, yielding three classes of applications. We survey a broad range of applications in these three categories and we take a detailed look at a specific open source project in each category. Our goal in this survey is to show the breadth of applications of VSMs for semantics, to provide a new perspective on VSMs for those who are already familiar with the area, and to provide pointers into the literature for those who are less familiar with the field.", "Research into corpus-based semantics has focused on the development of ad hoc models that treat single tasks, or sets of closely related tasks, as unrelated challenges to be tackled by extracting different kinds of distributional information from the corpus. As an alternative to this \"one task, one model\" approach, the Distributional Memory framework extracts distributional information once and for all from the corpus, in the form of a set of weighted word-link-word tuples arranged into a third-order tensor. Different matrices are then generated from the tensor, and their rows and columns constitute natural spaces to deal with different semantic problems. 
In this way, the same distributional information can be shared across tasks such as modeling word similarity judgments, discovering synonyms, concept categorization, predicting selectional preferences of verbs, solving analogy problems, classifying relations between word pairs, harvesting qualia structures with patterns or example pairs, predicting the typical properties of concepts, and classifying verbs into alternation classes. Extensive empirical testing in all these domains shows that a Distributional Memory implementation performs competitively against task-specific algorithms recently reported in the literature for the same tasks, and against our implementations of several state-of-the-art methods. The Distributional Memory approach is thus shown to be tenable despite the constraints imposed by its multi-purpose nature." ] }
1505.00161
2950199579
Learning representations for semantic relations is important for various tasks such as analogy detection, relational search, and relation classification. Although there have been several proposals for learning representations for individual words, learning word representations that explicitly capture the semantic relations between words remains under developed. We propose an unsupervised method for learning vector representations for words such that the learnt representations are sensitive to the semantic relations that exist between two words. First, we extract lexical patterns from the co-occurrence contexts of two words in a corpus to represent the semantic relations that exist between those two words. Second, we represent a lexical pattern as the weighted sum of the representations of the words that co-occur with that lexical pattern. Third, we train a binary classifier to detect relationally similar vs. non-similar lexical pattern pairs. The proposed method is unsupervised in the sense that the lexical pattern pairs we use as train data are automatically sampled from a corpus, without requiring any manual intervention. Our proposed method statistically significantly outperforms the current state-of-the-art word representations on three benchmark datasets for proportional analogy detection, demonstrating its ability to accurately capture the semantic relations among words.
On the other hand, distributed word representation learning methods model words as @math -dimensional real vectors and learn those vector representations by applying them to solve an auxiliary task such as language modeling. The dimensionality @math is fixed for all the words in the vocabulary and, unlike distributional word representations, is much smaller (e.g. @math in practice) compared to the vocabulary size. A pioneering work on word representation learning is the neural network language model (NNLM) @cite_22 , where word representations are learnt such that we can accurately predict the next word in a sentence using the word representations for the previous words. Using backpropagation, word vectors are updated such that the prediction error is minimized.
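The training loop of such a model can be sketched in a few lines. The bigram setup below (function name `nnlm_step`, a single context word, and a plain linear output layer) is a heavily simplified stand-in for the architecture of @cite_22 , which conditions on several previous words and adds a hidden layer; it only illustrates how backpropagation pushes the prediction error into the embeddings themselves.

```python
import numpy as np

def nnlm_step(E, W, context_id, target_id, lr=0.1):
    """One SGD step of a minimal bigram neural language model: look up the
    context word's embedding, score every vocabulary word with a linear
    layer + softmax, and backpropagate the cross-entropy loss into both
    the output weights W (V x d) and the embedding table E (V x d)."""
    x = E[context_id]                    # (d,) embedding of the previous word
    logits = W @ x                       # (V,) one score per vocabulary word
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    loss = -np.log(probs[target_id])
    # Gradient of cross-entropy w.r.t. the logits, then w.r.t. W and x.
    dlogits = probs.copy()
    dlogits[target_id] -= 1.0
    dW = np.outer(dlogits, x)
    dx = W.T @ dlogits
    W -= lr * dW
    E[context_id] -= lr * dx             # the word vector itself is updated
    return loss
```

Repeating the step on observed (context, next-word) pairs drives the loss down, which is exactly the sense in which the word vectors are learnt to minimize prediction error.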
{ "cite_N": [ "@cite_22" ], "mid": [ "2132339004" ], "abstract": [ "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts." ] }
1504.08289
2949820118
Part models of object categories are essential for challenging recognition tasks, where differences in categories are subtle and only reflected in appearances of small parts of the object. We present an approach that is able to learn part models in a completely unsupervised manner, without part annotations and even without given bounding boxes during learning. The key idea is to find constellations of neural activation patterns computed using convolutional neural networks. In our experiments, we outperform existing approaches for fine-grained recognition on the CUB200-2011, NA birds, Oxford PETS, and Oxford Flowers dataset in case no part or bounding box annotations are available and achieve state-of-the-art performance for the Stanford Dog dataset. We also show the benefits of neural constellation models as a data augmentation technique for fine-tuning. Furthermore, our paper unites the areas of generic and fine-grained classification, since our approach is suitable for both scenarios. The source code of our method is available online at this http URL
The unsupervised scenario that we tackle has also been considered by Xiao et al. @cite_6 . They cluster the channels of the last convolutional layers of a CNN into groups. Patches for the object and each part are extracted based on the activation of each of these groups. The patches are used to classify the image. While their work requires a pre-trained classifier for the objects of interest, we only need a CNN that can be pre-trained on a weakly related object dataset.
{ "cite_N": [ "@cite_6" ], "mid": [ "1928906481" ], "abstract": [ "Fine-grained classification is challenging because categories can only be discriminated by subtle and local differences. Variances in the pose, scale or rotation usually make the problem more difficult. Most fine-grained classification systems follow the pipeline of finding foreground object or object parts (where) to extract discriminative features (what)." ] }
1504.08219
2953280125
To train good supervised and semi-supervised object classifiers, it is critical that we not waste the time of the human experts who are providing the training labels. Existing active learning strategies can have uneven performance, being efficient on some datasets but wasteful on others, or inconsistent just between runs on the same dataset. We propose perplexity based graph construction and a new hierarchical subquery evaluation algorithm to combat this variability, and to release the potential of Expected Error Reduction. Under some specific circumstances, Expected Error Reduction has been one of the strongest-performing informativeness criteria for active learning. Until now, it has also been prohibitively costly to compute for sizeable datasets. We demonstrate our highly practical algorithm, comparing it to other active learning measures on classification datasets that vary in sparsity, dimensionality, and size. Our algorithm is consistent over multiple runs and achieves high accuracy, while querying the human expert for labels at a frequency that matches their desired time budget.
To cope with larger datasets, different approaches have been proposed to reduce the number of subqueries that must be evaluated. Strategies include only considering a subsample of the full data @cite_26 , or using the inherent structure of the data to limit influence and selection of subqueries @cite_19 . Using the same manifold assumption as SSL, these methods cluster the data in its original feature space. Macskassy @cite_19 explores graph based metrics, commonly used in community detection, to identify cluster centers (each assumed to contain the same class) that are then evaluated using EER. This is related to the hierarchical clustering method for category discovery of Vatturi @cite_28 . However, by limiting subqueries to cluster centers, these clustering based approaches are unable to perform boundary refinement.
{ "cite_N": [ "@cite_28", "@cite_19", "@cite_26" ], "mid": [ "2162303794", "2114900710", "1484084878" ], "abstract": [ "Many applications in surveillance, monitoring, scientific discovery, and data cleaning require the identification of anomalies. Although many methods have been developed to identify statistically significant anomalies, a more difficult task is to identify anomalies that are both interesting and statistically significant. Category detection is an emerging area of machine learning that can help address this issue using a \"human-in-the-loop\" approach. In this interactive setting, the algorithm asks the user to label a query data point under an existing category or declare the query data point to belong to a previously undiscovered category. The goal of category detection is to bring to the user's attention a representative data point from each category in the data in as few queries as possible. In a data set with imbalanced categories, the main challenge is in identifying the rare categories or anomalies; hence, the task is often referred to as rare category detection. We present a new approach to rare category detection based on hierarchical mean shift. In our approach, a hierarchy is created by repeatedly applying mean shift with an increasing bandwidth on the data. This hierarchy allows us to identify anomalies in the data set at different scales, which are then posed as queries to the user. The main advantage of this methodology over existing approaches is that it does not require any knowledge of the dataset properties such as the total number of categories or the prior probabilities of the categories. Results on real-world data sets show that our hierarchical mean shift approach performs consistently better than previous techniques.", "Active and semi-supervised learning are important techniques when labeled data are scarce. Recently a method was suggested for combining active learning with a semi-supervised learning algorithm that uses Gaussian fields and harmonic functions. This classifier is relational in nature: it relies on having the data presented as a partially labeled graph (also known as a within-network learning problem). This work showed yet again that empirical risk minimization (ERM) was the best method to find the next instance to label and provided an efficient way to compute ERM with the semi-supervised classifier. The computational problem with ERM is that it relies on computing the risk for all possible instances. If we could limit the candidates that should be investigated, then we can speed up active learning considerably. In the case where the data is graphical in nature, we can leverage the graph structure to rapidly identify instances that are likely to be good candidates for labeling. This paper describes a novel hybrid approach of using community finding and social network analytic centrality measures to identify good candidates for labeling and then using ERM to find the best instance in this candidate set. We show on real-world data that we can limit the ERM computations to a fraction of instances with comparable performance.", "" ] }
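The Expected Error Reduction criterion discussed in this record can be sketched in a few lines. The toy classifier and all names below are illustrative stand-ins (real systems plug in a semi-supervised model such as Gaussian fields / harmonic functions), not the authors' implementation:

```python
import math

def predict_proba(labeled, point):
    """Toy soft 1-D classifier: distance-weighted vote over labeled points."""
    weights = {}
    for x, lbl in labeled:
        weights[lbl] = weights.get(lbl, 0.0) + math.exp(-abs(x - point))
    total = sum(weights.values())
    return {lbl: w / total for lbl, w in weights.items()}

def expected_error(labeled, unlabeled):
    """Average (1 - max class probability) over the unlabeled pool."""
    if not unlabeled:
        return 0.0
    return sum(1.0 - max(predict_proba(labeled, u).values())
               for u in unlabeled) / len(unlabeled)

def eer_query(labeled, unlabeled):
    """EER: query the point whose labeling minimizes the expected risk
    on the rest of the pool, averaged over its possible labels."""
    best, best_risk = None, float("inf")
    for u in unlabeled:
        probs = predict_proba(labeled, u)
        rest = [v for v in unlabeled if v != u]
        risk = sum(p * expected_error(labeled + [(u, lbl)], rest)
                   for lbl, p in probs.items())
        if risk < best_risk:
            best, best_risk = u, risk
    return best
```

On a pool with labeled anchors at 0 ("a") and 10 ("b"), EER picks the boundary point rather than points already explained by a nearby label. The sketch also makes the cost visible: every candidate requires a retrain per possible label, which is exactly the subquery budget the surveyed strategies try to reduce.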
1504.08219
2953280125
To train good supervised and semi-supervised object classifiers, it is critical that we not waste the time of the human experts who are providing the training labels. Existing active learning strategies can have uneven performance, being efficient on some datasets but wasteful on others, or inconsistent just between runs on the same dataset. We propose perplexity based graph construction and a new hierarchical subquery evaluation algorithm to combat this variability, and to release the potential of Expected Error Reduction. Under some specific circumstances, Expected Error Reduction has been one of the strongest-performing informativeness criteria for active learning. Until now, it has also been prohibitively costly to compute for sizeable datasets. We demonstrate our highly practical algorithm, comparing it to other active learning measures on classification datasets that vary in sparsity, dimensionality, and size. Our algorithm is consistent over multiple runs and achieves high accuracy, while querying the human expert for labels at a frequency that matches their desired time budget.
The hierarchical clustering in @cite_38 is used to define bounds on sampling statistics. Every one of their samples (a full query to the oracle) is randomly selected from a strict partition of a prespecified clustering (similar to a breadth first search) and only shares label information within its cluster. Our proposed method also uses a hierarchical representation, but differs as it uses the hierarchy for efficient sampling using EER, with the added advantages of graph based SSL, without sacrificing the ability to refine class boundaries.
{ "cite_N": [ "@cite_38" ], "mid": [ "2042932437" ], "abstract": [ "The invention is an improved apparatus for the removal of sodium carbonate from cyanide plating baths. The method involves the precipitation of sodium carbonate by the effect of cooling the cyanide plating bath liquid. A container is submerged in the solution with the container opening extending above the plating bath solution level. The container is filled with dry ice and water which produces a temperature of approximately zero degrees centigrade inside of the container. The solution adjacent to the exterior of the container is cooled so that excess sodium carbonate is precipitated as a crystalline deposit. After a desired interval, the container is removed with the encrustation of sodium carbonate for disposal." ] }
1504.07825
2039391021
Recent work has shown that adaptive CSMA algorithms can achieve throughput optimality. However, these adaptive CSMA algorithms assume a rather simplistic model for the wireless medium. Specifically, the interference is typically modelled by a conflict graph, and the channels are assumed to be static. In this work, we propose a distributed and adaptive CSMA algorithm under a more realistic signal-to-interference ratio (SIR) based interference model, with time-varying channels. We prove that our algorithm is throughput optimal under this generalized model. Further, we augment our proposed algorithm by using a parallel update technique. Numerical results show that our algorithm outperforms the conflict graph based algorithms, in terms of supportable throughput and the rate of convergence to steady-state.
- Instantaneous channel gains are assumed to be known at each time slot. - Channel statistics (such as average channel gains or distribution) are assumed to be known. A time-varying channel is considered between the transmitter of a link and its corresponding receiver in @cite_9 . However, the channel gains between the interfering links are assumed to be static. In @cite_12 , time-varying channels are considered among all the links, and the interference is modelled by a conflict graph. However, the algorithm in @cite_12 can support only a fraction of the achievable rate region. A SIR model is considered in @cite_6 to propose a conservative algorithm that is suboptimal. An adaptive Aloha based algorithm is proposed in @cite_14 under time-varying channels. However, the algorithm can only maximize some utility functions and is not throughput optimal.
{ "cite_N": [ "@cite_9", "@cite_14", "@cite_6", "@cite_12" ], "mid": [ "2133301570", "", "2039533850", "2038187727" ], "abstract": [ "Recent studies on MAC scheduling have shown that carrier sense multiple access (CSMA) algorithms can be throughput optimal for arbitrary wireless network topology. However, these results are highly sensitive to the underlying assumption on 'static' or 'fixed' system conditions. For example, if channel conditions are time-varying, it is unclear how each node can adjust its CSMA parameters, so-called backoff and channel holding times, using its local channel information for the desired high performance. In this paper, we study 'channel-aware' CSMA (A-CSMA) algorithms in time-varying channels, where they adjust their parameters as some function of the current channel capacity. First, we show that the achievable rate region of A-CSMA equals to the maximum rate region if and only if the function is exponential. Furthermore, given an exponential function in A-CSMA, we design updating rules for their parameters, which achieve throughput optimality for an arbitrary wireless network topology. They are the first CSMA algorithms in the literature which are proved to be throughput optimal under time-varying channels. Moreover, we also consider the case when back-off rates of A-CSMA are highly restricted compared to the speed of channel variations, and characterize the throughput performance of A-CSMA in terms of the underlying wireless network topology. Our results not only guide a high-performance design on MAC scheduling under highly time-varying scenarios, but also provide new insights on the performance of CSMA algorithms in relation to their backoff rates and underlying network topologies.", "", "We study the problem of distributed scheduling in multi-hop MIMO networks. We first develop a \"MIMO-pipe\" model that provides the upper layers a set of rates and SINR requirements, which capture the rate-reliability tradeoff in MIMO communications. The main thrust of this study is then dedicated to developing CSMA-based MIMO-pipe scheduling under the SINR model. We choose the SINR model over the extensively studied matching or protocol-based interference models because it more naturally captures the impact of interference in wireless networks. The coupling among the links caused by the interference makes the problem of devising distributed scheduling algorithms particularly challenging. To that end, we explore CSMA-based MIMO-pipe scheduling, from two perspectives. First, we consider an idealized continuous time CSMA network. We propose a dual-band approach in which control messages are exchanged instantaneously over a channel separate from the data channel, and show that CSMA-based scheduling can achieve throughput optimality under the SINR model. Next, we consider a discrete time CSMA network. To tackle the challenge due to the coupling caused by interference, we propose a \"conservative\" scheduling algorithm in which more stringent SINR constraints are imposed based on the MIMO-pipe model. We show that this suboptimal distributed scheduling can achieve an efficiency ratio bounded from below.", "Developing scheduling mechanisms that can simultaneously achieve throughput optimality and good delay performance often require solving the Maximum Independent Weighted Set (MWIS) problem. However, under most realistic network settings, the MWIS problem can be shown to be NP-hard. In non-fading environments, low-complexity scheduling algorithms have been provided that converge either to the MWIS solution in time or to a solution that achieves at least a provable fraction of the achievable throughput. However, in more practical systems the channel conditions can vary at faster time-scales than convergence occurs in these lower-complexity algorithms. Hence, these algorithms cannot take advantage of the opportunistic gain, and may no longer guarantee good performance. In this paper, we propose a low-complexity scheduling scheme that performs provably well under fading channels and is amenable to implement in a distributed manner. To the best of our knowledge, this is the first scheduling scheme under fading environments that requires only local information, has a low complexity that grows logarithmically with the network size, and achieves provable performance guarantees (which is arbitrarily close to that of the well-known centralized Greedy Maximal Scheduler). Through simulations we verify that both the throughput and the delay under our proposed distributed scheduling scheme are close to that of the optimal solution to MWIS. Further, we implement a preliminary version of our algorithm in a testbed by modifying the existing IEEE 802.11 DCF. The preliminary experiment results show that our implementation successfully accounts for wireless fading, and attains the opportunistic gains in practice, and hence substantially outperforms IEEE 802.11 DCF." ] }
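The adaptive-CSMA idea running through this record can be illustrated with a toy discrete-time Glauber-dynamics update on a conflict graph: each link's activation probability is an exponential function of its CSMA parameter (e.g., a queue-length or channel term), and carrier sensing keeps the schedule an independent set. The single-site update and all names are illustrative simplifications, not any specific algorithm from these papers:

```python
import math
import random

def csma_step(state, conflict, weights, rng):
    """One Glauber-dynamics update of a CSMA schedule.
    state: dict link -> 0/1 (ON/OFF); conflict: dict link -> set of
    conflicting links; weights: dict link -> CSMA parameter.
    A link that senses all conflicting neighbours silent turns ON with
    probability lam / (1 + lam), where lam = exp(weight) -- the
    exponential rule that the A-CSMA result singles out."""
    link = rng.choice(sorted(state))
    if any(state[n] for n in conflict[link]):
        return state  # carrier sensed busy: the link stays OFF
    lam = math.exp(weights[link])
    state[link] = 1 if rng.random() < lam / (1.0 + lam) else 0
    return state

def is_independent_set(state, conflict):
    """No two conflicting links are ever ON simultaneously."""
    return all(not (state[u] and state[v])
               for u in state for v in conflict[u])
```

Because a link only switches ON when every conflicting neighbour is OFF, the independent-set invariant is preserved at every step, which is the carrier-sensing property that conflict-graph (and, in the generalized SIR setting, interference-aware) CSMA algorithms rely on.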
1504.07766
2144590090
After the phenomenal success of the PageRank algorithm, many researchers have extended the PageRank approach to ranking graphs with richer structures in addition to the simple linkage structure. Indeed, in some scenarios we have to deal with networks modeling multi-parameters data where each node has additional features and there are important relationships between such features. This paper addresses the need of a systematic approach to deal with multi-parameter data. We propose models and ranking algorithms that can be applied to a large variety of networks (bibliographic data, patent data, twitter and social data, healthcare data). We focus on several aspects not previously addressed in the literature: (1) we propose different models for ranking multi-parameters data and a class of numerical algorithms for efficiently computing the ranking score of such models, (2) we analyze stability and convergence of the proposed numerical schemes and we derive a fast and stable ranking algorithm, (3) we analyze the robustness of our models when data are incomplete. The comparison of the rank on the incomplete data with the rank on the full structure shows that our models compute consistent rankings whose correlation is up to 60% when just 10% of the links of the attributes are maintained.
In this paper we propose a tunable ranking algorithm where by changing parameters we can accomplish different goals. In particular, the same algorithm can be used on different kinds of data and for different purposes. One of the parameters is the model itself and another one is the weighting strategy. This is the major difference from previous ranking algorithms, which are designed for specific networks and appear to be less tunable @cite_17 @cite_31 @cite_24 @cite_28 . Together with the models, we propose and analyze some weighting strategies. To change the ranking function one can implement other weighting strategies and incorporate them into the algorithm.
{ "cite_N": [ "@cite_24", "@cite_31", "@cite_28", "@cite_17" ], "mid": [ "2075010670", "2110896767", "2149288670", "2183602650" ], "abstract": [ "Most objects and data in the real world are of multiple types, interconnected, forming complex, heterogeneous but often semi-structured information networks. However, most network science researchers are focused on homogeneous networks, without distinguishing different types of objects and links in the networks. We view interconnected, multityped data, including the typical relational database data, as heterogeneous information networks, study how to leverage the rich semantic meaning of structural types of objects and links in the networks, and develop a structural analysis approach on mining semi-structured, multi-typed heterogeneous information networks. In this article, we summarize a set of methodologies that can effectively and efficiently mine useful knowledge from such information networks, and point out some promising research directions.", "In contrast with the current Web search methods that essentially do document-level ranking and retrieval, we are exploring a new paradigm to enable Web search at the object level. We collect Web information for objects relevant for a specific application domain and rank these objects in terms of their relevance and popularity to answer user queries. Traditional PageRank model is no longer valid for object popularity calculation because of the existence of heterogeneous relationships between objects. This paper introduces PopRank, a domain-independent object-level link analysis model to rank the objects within a specific domain. Specifically we assign a popularity propagation factor to each type of object relationship, study how different popularity propagation factors for these heterogeneous relationships could affect the popularity ranking, and propose efficient approaches to automatically decide these factors. Our experiments are done using 1 million CS papers, and the experimental results show that PopRank can achieve significantly better ranking results than naively applying PageRank on the object graph.", "A heterogeneous information network is an information network composed of multiple types of objects. Clustering on such a network may lead to better understanding of both hidden structures of the network and the individual role played by every object in each cluster. However, although clustering on homogeneous networks has been studied over decades, clustering on heterogeneous networks has not been addressed until recently. A recent study proposed a new algorithm, RankClus, for clustering on bi-typed heterogeneous networks. However, a real-world network may consist of more than two types, and the interactions among multi-typed objects play a key role at disclosing the rich semantics that a network carries. In this paper, we study clustering of multi-typed heterogeneous networks with a star network schema and propose a novel algorithm, NetClus, that utilizes links across multityped objects to generate high-quality net-clusters. An iterative enhancement method is developed that leads to effective ranking-based clustering in such heterogeneous networks. Our experiments on DBLP data show that NetClus generates more accurate clustering results than the baseline topic model algorithm PLSA and the recently proposed algorithm, RankClus. Further, NetClus generates informative clusters, presenting good ranking and cluster membership information for each attribute object in each net-cluster.", "Exploiting relationships between entities to improve search engines is a well known technique used by various graph ranking algorithms, such as PageRank or HITS. These algorithms however, typically use only one type of relationship. With the coming era of the Semantic Web, semantic data containing multiple types of relationships are becoming more common. In this paper we present a novel graph ranking method based on creating layered graphs from multigraphs. We also present a search engine prototype in the domain of scientific publications exploiting the proposed layered graph ranking approach." ] }
1504.07766
2144590090
After the phenomenal success of the PageRank algorithm, many researchers have extended the PageRank approach to ranking graphs with richer structures in addition to the simple linkage structure. Indeed, in some scenarios we have to deal with networks modeling multi-parameters data where each node has additional features and there are important relationships between such features. This paper addresses the need of a systematic approach to deal with multi-parameter data. We propose models and ranking algorithms that can be applied to a large variety of networks (bibliographic data, patent data, twitter and social data, healthcare data). We focus on several aspects not previously addressed in the literature: (1) we propose different models for ranking multi-parameters data and a class of numerical algorithms for efficiently computing the ranking score of such models, (2) we analyze stability and convergence of the proposed numerical schemes and we derive a fast and stable ranking algorithm, (3) we analyze the robustness of our models when data are incomplete. The comparison of the rank on the incomplete data with the rank on the full structure shows that our models compute consistent rankings whose correlation is up to 60% when just 10% of the links of the attributes are maintained.
A different approach for ranking multigraphs is the one which makes use of multilinear algebra and tensors for representing graphs with multiple linkages @cite_30 @cite_26 @cite_5 . The tensor, however, does not contain the same information we use in this paper. For example, if we are dealing with bibliographic data, our models use the full author list for each paper, while the tensor only records the number of common authors between each pair of papers. Hence it does not allow one to obtain a score for all the features, such as authors or journals, and it is not possible to compare its results with those provided by our algorithm.
{ "cite_N": [ "@cite_30", "@cite_5", "@cite_26" ], "mid": [ "94565353", "1814521481", "2100452521" ], "abstract": [ "As the size of the web increases, it becomes more and more important to analyze link structure while also considering context. Multilinear algebra provides a novel tool for incorporating anchor text and other information into the authority computation used by link analysis methods such as HITS. Our recently proposed TOPHITS method uses a higher-order analogue of the matrix singular value decomposition called the PARAFAC model to analyze a three-way representation of web data. We compute hubs and authorities together with the terms that are used in the anchor text of the links between them. Adding a third dimension to the data greatly extends the applicability of HITS because the TOPHITS analysis can be performed in advance and offline. Like HITS, the TOPHITS model reveals latent groupings of pages, but TOPHITS also includes latent term information. In this paper, we describe a faster mathematical algorithm for computing the TOPHITS model on sparse data, and Web data is used to compare HITS and TOPHITS. We also discuss how the TOPHITS model can be used in queries, such as computing context-sensitive authorities and hubs. We describe different query response methodologies and present experimental results.", "The problem of incomplete data – i.e., data with missing or unknown values – in multi-way arrays is ubiquitous in biomedical signal processing, network traffic analysis, bibliometrics, social network analysis, chemometrics, computer vision, communication networks, etc. We consider the problem of how to factorize data sets with missing values with the goal of capturing the underlying latent structure of the data and possibly reconstructing missing values (i.e., tensor completion). We focus on one of the most well-known tensor factorizations that captures multi-linear structure, CANDECOMP PARAFAC (CP). In the presence of missing data, CP can be formulated as a weighted least squares problem that models only the known entries. We develop an algorithm called CP-WOPT (CP Weighted OPTimization) that uses a first-order optimization approach to solve the weighted least squares problem. Based on extensive numerical experiments, our algorithm is shown to successfully factorize tensors with noise and up to 99% missing data. A unique aspect of our approach is that it scales to sparse large-scale data, e.g., 1000 × 1000 × 1000 with five million known entries (0.5% dense). We further demonstrate the usefulness of CP-WOPT on two real-world applications: a novel EEG (electroencephalogram) application where missing data is frequently encountered due to disconnections of electrodes and the problem of modeling computer network traffic where data may be absent due to the expense of the data collection process.", "Link analysis typically focuses on a single type of connection, e.g., two journal papers are linked because they are written by the same author. However, often we want to analyze data that has multiple linkages between objects, e.g., two papers may have the same keywords and one may cite the other. The goal of this paper is to show that multilinear algebra provides a tool for multilink analysis. We analyze five years of publication data from journals published by the Society for Industrial and Applied Mathematics. We explore how papers can be grouped in the context of multiple link types using a tensor to represent all the links between them. A PARAFAC decomposition on the resulting tensor yields information similar to the SVD decomposition of a standard adjacency matrix. We show how the PARAFAC decomposition can be used to understand the structure of the document space and define paper-paper similarities based on multiple linkages. Examples are presented where the decomposed tensor data is used to find papers similar to a body of work (e.g., related by topic or similar to a particular author's papers), find related authors using linkages other than explicit co-authorship or citations, distinguish between papers written by different authors with the same name, and predict the journal in which a paper was published." ] }
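The tensor representation contrasted in this record can be made concrete: for bibliographic data it is a papers × papers × link-type array that, per the text, only records counts such as the number of shared authors, rather than the full author lists. A minimal sketch (all names illustrative; real work then factorizes the tensor with PARAFAC):

```python
def multilink_tensor(papers, authors_of, citations):
    """Build a papers x papers x 2 tensor of link counts:
    slice 0 = number of common authors, slice 1 = citation indicator.
    Only pairwise counts survive -- the full author list is lost, which
    is why no per-author score can be read back off the tensor."""
    idx = {p: i for i, p in enumerate(papers)}
    n = len(papers)
    tensor = [[[0, 0] for _ in range(n)] for _ in range(n)]
    for p in papers:
        for q in papers:
            if p != q:
                # symmetric co-authorship count
                tensor[idx[p]][idx[q]][0] = len(authors_of[p] & authors_of[q])
    for p, q in citations:
        # directed citation link p -> q
        tensor[idx[p]][idx[q]][1] = 1
    return tensor
```

This makes the information loss explicit: two papers sharing one author and two sharing a different single author produce identical tensor entries, so author-level rankings cannot be recovered from it.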
1504.07766
2144590090
After the phenomenal success of the PageRank algorithm, many researchers have extended the PageRank approach to ranking graphs with richer structures in addition to the simple linkage structure. Indeed, in some scenarios we have to deal with networks modeling multi-parameters data where each node has additional features and there are important relationships between such features. This paper addresses the need of a systematic approach to deal with multi-parameter data. We propose models and ranking algorithms that can be applied to a large variety of networks (bibliographic data, patent data, twitter and social data, healthcare data). We focus on several aspects not previously addressed in the literature: (1) we propose different models for ranking multi-parameters data and a class of numerical algorithms for efficiently computing the ranking score of such models, (2) we analyze stability and convergence of the proposed numerical schemes and we derive a fast and stable ranking algorithm, (3) we analyze the robustness of our models when data are incomplete. The comparison of the rank on the incomplete data with the rank on the full structure shows that our models compute consistent rankings whose correlation is up to 60% when just 10% of the links of the attributes are maintained.
Sun et al. @cite_24 @cite_28 , in the context of a bi-typed network (for example a bipartite bibliographic graph with only authors and conference venues) or star-typed networks (for example a bibliographic graph where we have papers and all the other features, such as authors, conference venues, and terms, linked via papers), propose a ranking schema combined with clustering, where the clustering algorithm improves the ranking and vice-versa. One of the ranking functions proposed is similar to ours but applies only to the simpler graphs described above with only two types of nodes. In @cite_2 the authors, still in the context of bibliographic data, proposed a model similar to one of our models, namely the Simple Heap model. It mainly differs from ours in the weighting strategy and the use of a non-static model. However, we consider an enriched structure with a complete set of relations between features. For example, in the context of bibliographic data we enrich the graph by adding weighted links between authors, journals, and subject classification.
{ "cite_N": [ "@cite_24", "@cite_28", "@cite_2" ], "mid": [ "2075010670", "2149288670", "138896373" ], "abstract": [ "Most objects and data in the real world are of multiple types, interconnected, forming complex, heterogeneous but often semi-structured information networks. However, most network science researchers are focused on homogeneous networks, without distinguishing different types of objects and links in the networks. We view interconnected, multityped data, including the typical relational database data, as heterogeneous information networks, study how to leverage the rich semantic meaning of structural types of objects and links in the networks, and develop a structural analysis approach on mining semi-structured, multi-typed heterogeneous information networks. In this article, we summarize a set of methodologies that can effectively and efficiently mine useful knowledge from such information networks, and point out some promising research directions.", "A heterogeneous information network is an information network composed of multiple types of objects. Clustering on such a network may lead to better understanding of both hidden structures of the network and the individual role played by every object in each cluster. However, although clustering on homogeneous networks has been studied over decades, clustering on heterogeneous networks has not been addressed until recently. A recent study proposed a new algorithm, RankClus, for clustering on bi-typed heterogeneous networks. However, a real-world network may consist of more than two types, and the interactions among multi-typed objects play a key role at disclosing the rich semantics that a network carries. In this paper, we study clustering of multi-typed heterogeneous networks with a star network schema and propose a novel algorithm, NetClus, that utilizes links across multityped objects to generate high-quality net-clusters. An iterative enhancement method is developed that leads to effective ranking-based clustering in such heterogeneous networks. Our experiments on DBLP data show that NetClus generates more accurate clustering results than the baseline topic model algorithm PLSA and the recently proposed algorithm, RankClus. Further, NetClus generates informative clusters, presenting good ranking and cluster membership information for each attribute object in each net-cluster.", "In this paper, we present a novel approach that models the mutual reinforcing relationship among papers, authors and publication venues with due cognizance of publication time. We further integrate bookmark information which models the relationship between users' expertise and papers' quality into the composite citation network using random walk with restart framework. The experimental results with ACM dataset show that 1) the proposed method outperforms the traditional methods; 2) by incorporating the temporal factor, the ranking result of latest publications can be greatly improved; 3) the integration of user generated content further enhances the ranking result." ] }
1504.07766
2144590090
After the phenomenal success of the PageRank algorithm, many researchers have extended the PageRank approach to ranking graphs with richer structures in addition to the simple linkage structure. Indeed, in some scenarios we have to deal with networks modeling multi-parameter data, where each node has additional features and there are important relationships between such features. This paper addresses the need of a systematic approach to deal with multi-parameter data. We propose models and ranking algorithms that can be applied to a large variety of networks (bibliographic data, patent data, twitter and social data, healthcare data). We focus on several aspects not previously addressed in the literature: (1) we propose different models for ranking multi-parameter data and a class of numerical algorithms for efficiently computing the ranking score of such models, (2) we analyze stability and convergence of the proposed numerical schemes and we derive a fast and stable ranking algorithm, (3) we analyze the robustness of our models when data are incomplete. The comparison of the rank on the incomplete data with the rank on the full structure shows that our models compute consistent rankings whose correlation is up to 60% when just 10% of the links of the attributes are maintained.
In previous papers by the same authors @cite_18 @cite_20 @cite_15 a model similar to one of the models of this paper (the one we called the Stiff model) was introduced in the context of bibliographic data. In particular, in @cite_18 an integrated model for ranking scientific publications together with authors and journals was presented. In that context, particular weighting strategies were implemented @cite_15 and an exponential decay factor was introduced @cite_20 to take into account the aging of citations, i.e. the fact that if an old paper is not cited recently its importance should fade over time. In this paper we further generalize the original ideas, introducing several models and classes of algorithms, making the framework suitable also for ranking other multi-parameter data (patents, healthcare, social data, etc.). The new models are more adequate, for example, to handle updating of the datasets, which can be done at a lower cost than in the Stiff model. In addition, in this paper the weighting strategies are problem-independent, while in the previous papers they were designed ad hoc for dealing with bibliographic items.
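The ranking scheme described above — the Perron vector of a stochastic matrix built by combining and normalizing the relation matrices — can be sketched with power iteration on a toy network. The matrix, damping factor, and network size below are illustrative assumptions, not data from the paper:

```python
import numpy as np

def perron_rank(S, tol=1e-10, max_iter=1000):
    """Power iteration for the Perron vector of a column-stochastic matrix S."""
    n = S.shape[0]
    x = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        x_new = S @ x
        x_new /= x_new.sum()          # keep x a probability vector
        if np.abs(x_new - x).sum() < tol:
            break
        x = x_new
    return x

# Toy 3-node column-stochastic citation matrix, purely illustrative.
H = np.array([[0.0, 0.5, 0.3],
              [0.7, 0.0, 0.7],
              [0.3, 0.5, 0.0]])
# PageRank-style damping guarantees irreducibility, so the Perron vector exists.
alpha, n = 0.85, 3
S = alpha * H + (1 - alpha) * np.full((n, n), 1.0 / n)
rank = perron_rank(S)
```

In the integrated model the matrix fed to `perron_rank` would be a weighted combination of the citation, authorship, and publication matrices rather than a single relation.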
{ "cite_N": [ "@cite_18", "@cite_20", "@cite_15" ], "mid": [ "1503311242", "1504400076", "2024937529" ], "abstract": [ "Some integrated models for ranking scientific publications together with authors and journals are presented and analyzed. The models rely on certain adjacency matrices obtained from the relationships between citations, authors and publications, which together give a suitable irreducible stochastic matrix whose Perron vector provides the ranking. Some perturbation theorems concerning the Perron vectors of nonnegative irreducible matrices are proved. These theoretical results provide a validation of the consistency and effectiveness of our models. Several examples are reported together with some results obtained on a real set of data.", "In this paper, we first give a quick review of the most used numerical indicators for evaluating research, and then we present an integrated model for ranking scientific publications together with authors and journals. Our model relies on certain adiacency matrices obtained from the relations of citation, authorship and publication. These matrices are first normalized to obtain stochastic matrices and then are combined together by means of weights to form a suitable irreducible stochastic matrix whose dominant eigenvector provides the ranking. We discuss various strategies for choosing the weights and we show on large synthetic datasets how the choice of a weighting criteria rather than another can change the behavior of our ranking algorithm.", "An integrated model for ranking scientific publications together with authors and journals recently presented in [Bini, Del Corso, Romani, ETNA 2008] is closely analyzed. The model, which relies on certain adjacency matrices H,K and F obtained from the relations of citation, authorship and publication, provides the ranking by means of the Perron vector of a stochastic matrix obtained by combining H,K and F. 
Some perturbation theorems concerning the Perron vector previously introduced by the authors are extended to more general cases and a counterexample to a property previously addressed by the authors is presented. The theoretical results confirm the consistency and effectiveness of our model. Some paradigmatic examples are reported together with some results obtained on a real set of data." ] }
1504.07766
2144590090
After the phenomenal success of the PageRank algorithm, many researchers have extended the PageRank approach to ranking graphs with richer structures in addition to the simple linkage structure. Indeed, in some scenarios we have to deal with networks modeling multi-parameter data, where each node has additional features and there are important relationships between such features. This paper addresses the need of a systematic approach to deal with multi-parameter data. We propose models and ranking algorithms that can be applied to a large variety of networks (bibliographic data, patent data, twitter and social data, healthcare data). We focus on several aspects not previously addressed in the literature: (1) we propose different models for ranking multi-parameter data and a class of numerical algorithms for efficiently computing the ranking score of such models, (2) we analyze stability and convergence of the proposed numerical schemes and we derive a fast and stable ranking algorithm, (3) we analyze the robustness of our models when data are incomplete. The comparison of the rank on the incomplete data with the rank on the full structure shows that our models compute consistent rankings whose correlation is up to 60% when just 10% of the links of the attributes are maintained.
Another contribution of this paper is the investigation of adequate numerical techniques to compute the ranking score. In particular, in we show how the computation of the ranks relies upon the solution of a structured linear system, and in we discuss and compare the different algorithms which can be used to solve that system. Dealing with big data indeed requires particular care in the choice of the numerical methods used in the algorithms, which should be stable and fast. The final algorithm (Procedure SystemSolver in ) has been chosen on the basis of several tests aiming to validate its properties of convergence and stability. A similar analysis has not been done in the literature, and even methods requiring matrix manipulations @cite_28 or spectral algorithms @cite_2 often fail to analyze this important aspect.
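The structured-linear-system view can be sketched as follows: for a column-stochastic matrix S with damping factor alpha and teleportation vector v, the damped ranking vector solves (I - alpha*S) x = (1 - alpha) v, which a direct solver handles on a small toy instance. The matrix and alpha below are illustrative, not the paper's actual data or solver:

```python
import numpy as np

alpha = 0.85
# Toy column-stochastic relation matrix (columns sum to 1), illustrative only.
S = np.array([[0.0, 0.5, 0.3],
              [0.7, 0.0, 0.7],
              [0.3, 0.5, 0.0]])
n = S.shape[0]
v = np.full(n, 1.0 / n)   # uniform teleportation vector

# Solve (I - alpha*S) x = (1 - alpha) v; because S is column-stochastic,
# the solution automatically sums to 1.
x = np.linalg.solve(np.eye(n) - alpha * S, (1 - alpha) * v)
```

For large sparse networks one would replace the dense direct solve with an iterative method, which is exactly where the stability and convergence analysis discussed above matters.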
{ "cite_N": [ "@cite_28", "@cite_2" ], "mid": [ "2149288670", "138896373" ], "abstract": [ "A heterogeneous information network is an information network composed of multiple types of objects. Clustering on such a network may lead to better understanding of both hidden structures of the network and the individual role played by every object in each cluster. However, although clustering on homogeneous networks has been studied over decades, clustering on heterogeneous networks has not been addressed until recently. A recent study proposed a new algorithm, RankClus, for clustering on bi-typed heterogeneous networks. However, a real-world network may consist of more than two types, and the interactions among multi-typed objects play a key role at disclosing the rich semantics that a network carries. In this paper, we study clustering of multi-typed heterogeneous networks with a star network schema and propose a novel algorithm, NetClus, that utilizes links across multityped objects to generate high-quality net-clusters. An iterative enhancement method is developed that leads to effective ranking-based clustering in such heterogeneous networks. Our experiments on DBLP data show that NetClus generates more accurate clustering results than the baseline topic model algorithm PLSA and the recently proposed algorithm, RankClus. Further, NetClus generates informative clusters, presenting good ranking and cluster membership information for each attribute object in each net-cluster.", "In this paper, we present a novel approach that models the mutual reinforcing relationship among papers, authors and publication venues with due cognizance of publication time. We further integrate bookmark information which models the relationship between users' expertise and papers' quality into the composite citation network using random walk with restart framework. 
The experimental results with ACM dataset show that 1) the proposed method outperforms the traditional methods; 2) by incorporating the temporal factor, the ranking result of latest publications can be greatly improved; 3) the integration of user generated content further enhances the ranking result." ] }
1504.07575
2952023105
Compared to machines, humans are extremely good at classifying images into categories, especially when they possess prior knowledge of the categories at hand. If this prior information is not available, supervision in the form of teaching images is required. To learn categories more quickly, people should see important and representative images first, followed by less important images later - or not at all. However, image-importance is individual-specific, i.e. a teaching image is important to a student if it changes their overall ability to discriminate between classes. Further, students keep learning, so while image-importance depends on their current knowledge, it also varies with time. In this work we propose an Interactive Machine Teaching algorithm that enables a computer to teach challenging visual concepts to a human. Our adaptive algorithm chooses, online, which labeled images from a teaching set should be shown to the student as they learn. We show that a teaching strategy that probabilistically models the student's ability and progress, based on their correct and incorrect answers, produces better 'experts'. We present results using real human participants across several varied and challenging real-world datasets.
More recently, Zhu @cite_37 attempted to minimize the joint effort of the teacher and the loss of the student by optimizing directly over the teaching set. The proposed noise-tolerant model assumes that the student's learning model is known to the teacher, and that it is in the exponential family. In follow-on work, Patil et al. @cite_15 maintain that, unlike computers, which have infinite memory capabilities, humans are limited in their retrieval capacity. Motivated by real human studies @cite_33 , they show that modeling this limited capacity improves human learning performance on tasks involving simple one-dimensional stimuli.
{ "cite_N": [ "@cite_37", "@cite_33", "@cite_15" ], "mid": [ "2951122980", "2018189188", "2142544755" ], "abstract": [ "What if there is a teacher who knows the learning goal and wants to design good training data for a machine learner? We propose an optimal teaching framework aimed at learners who employ Bayesian models. Our framework is expressed as an optimization problem over teaching examples that balance the future loss of the learner and the effort of the teacher. This optimization problem is in general hard. In the case where the learner employs conjugate exponential family models, we present an approximate algorithm for finding the optimal teaching set. Our algorithm optimizes the aggregate sufficient statistics, then unpacks them into actual teaching examples. We give several examples to illustrate our framework.", "Some decisions, such as predicting the winner of a baseball game, are challenging in part because outcomes are probabilistic. When making such decisions, one view is that humans stochastically and selectively retrieve a small set of relevant memories that provides evidence for competing options. We show that optimal performance at test is impossible when retrieving information in this fashion, no matter how extensive training is, because limited retrieval introduces noise into the decision process that cannot be overcome. One implication is that people should be more accurate in predicting future events when trained on idealized rather than on the actual distributions of items. In other words, we predict the best way to convey information to people is to present it in a distorted, idealized form. Idealization of training distributions is predicted to reduce the harmful noise induced by immutable bottlenecks in people’s memory retrieval processes. In contrast, machine learning systems that selectively weight (i.e., retrieve) all training examples at test should not benefit from idealization. 
These conjectures are strongly supported by several studies and supporting analyses. Unlike machine systems, people’s test performance on a target distribution is higher when they are trained on an idealized version of the distribution rather than on the actual target distribution. Optimal machine classifiers modified to selectively and stochastically sample from memory match the pattern of human performance. These results suggest firm limits on human rationality and have broad implications for how to train humans tasked with important classification decisions, such as radiologists, baggage screeners, intelligence analysts, and gamblers.", "Basic decisions, such as judging a person as a friend or foe, involve categorizing novel stimuli. Recent work finds that people's category judgments are guided by a small set of examples that are retrieved from memory at decision time. This limited and stochastic retrieval places limits on human performance for probabilistic classification decisions. In light of this capacity limitation, recent work finds that idealizing training items, such that the saliency of ambiguous cases is reduced, improves human performance on novel test items. One shortcoming of previous work in idealization is that category distributions were idealized in an ad hoc or heuristic fashion. In this contribution, we take a first principles approach to constructing idealized training sets. We apply a machine teaching procedure to a cognitive model that is either limited capacity (as humans are) or unlimited capacity (as most machine learning systems are). As predicted, we find that the machine teacher recommends idealized training sets. We also find that human learners perform best when training recommendations from the machine teacher are based on a limited-capacity model. As predicted, to the extent that the learning model used by the machine teacher conforms to the true nature of human learners, the recommendations of the machine teacher prove effective. 
Our results provide a normative basis (given capacity constraints) for idealization procedures and offer a novel selection procedure for models of human learning." ] }
1504.07575
2952023105
Compared to machines, humans are extremely good at classifying images into categories, especially when they possess prior knowledge of the categories at hand. If this prior information is not available, supervision in the form of teaching images is required. To learn categories more quickly, people should see important and representative images first, followed by less important images later - or not at all. However, image-importance is individual-specific, i.e. a teaching image is important to a student if it changes their overall ability to discriminate between classes. Further, students keep learning, so while image-importance depends on their current knowledge, it also varies with time. In this work we propose an Interactive Machine Teaching algorithm that enables a computer to teach challenging visual concepts to a human. Our adaptive algorithm chooses, online, which labeled images from a teaching set should be shown to the student as they learn. We show that a teaching strategy that probabilistically models the student's ability and progress, based on their correct and incorrect answers, produces better 'experts'. We present results using real human participants across several varied and challenging real-world datasets.
Most related to our work, Singla et al. @cite_2 teach binary visual concepts by showing images to real human learners. Their method operates offline and tries to find the set of teaching examples that best conveys a known linear classification boundary. Experiments with Mechanical Turkers show an improvement compared to other baselines, including random sampling. Their approach attempts to encode some noise tolerance into the teaching set, but is still unable to adapt to a student's responses online during teaching, because the ordering of the teaching images is fixed offline.
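Since the STRICT algorithm cited here greedily maximizes a submodular surrogate to select teaching examples offline, the selection pattern can be sketched with a plain coverage surrogate. The "hypotheses ruled out" sets and the surrogate below are illustrative simplifications, not the paper's actual objective:

```python
# Greedy teaching-set selection in the spirit of STRICT: at each step pick
# the example with the largest marginal gain of a submodular function
# (here, plain coverage of wrong hypotheses each example eliminates).

# hypotheses_ruled_out[i] = set of wrong hypotheses that example i eliminates
# (toy data, purely illustrative).
hypotheses_ruled_out = [
    {0, 1},       # example 0
    {1, 2, 3},    # example 1
    {3},          # example 2
    {0, 2, 4},    # example 3
]

def greedy_teaching_set(gain_sets, budget):
    covered, chosen = set(), []
    for _ in range(budget):
        best, best_gain = None, 0
        for i, s in enumerate(gain_sets):
            if i in chosen:
                continue
            gain = len(s - covered)   # marginal coverage gain
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:              # no example adds anything new
            break
        chosen.append(best)
        covered |= gain_sets[best]
    return chosen, covered

chosen, covered = greedy_teaching_set(hypotheses_ruled_out, budget=2)
```

The greedy rule enjoys the classic (1 - 1/e) approximation guarantee for monotone submodular objectives, which is the basis of STRICT's competitiveness result; note, as the text says, that the resulting ordering is fixed before teaching starts.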
{ "cite_N": [ "@cite_2" ], "mid": [ "2949540931" ], "abstract": [ "How should we present training examples to learners to teach them classification rules? This is a natural problem when training workers for crowdsourcing labeling tasks, and is also motivated by challenges in data-driven online education. We propose a natural stochastic model of the learners, modeling them as randomly switching among hypotheses based on observed feedback. We then develop STRICT, an efficient algorithm for selecting examples to teach to workers. Our solution greedily maximizes a submodular surrogate objective function in order to select examples to show to the learners. We prove that our strategy is competitive with the optimal teaching policy. Moreover, for the special case of linear separators, we prove that an exponential reduction in error probability can be achieved. Our experiments on simulated workers as well as three real image annotation tasks on Amazon Mechanical Turk show the effectiveness of our teaching algorithm." ] }
1504.07662
850914856
We analyze the problem of using Explore-Exploit techniques to improve precision in multi-result ranking systems such as web search, query autocompletion and news recommendation. Adopting an exploration policy directly online, without understanding its impact on the production system, may have unwanted consequences - the system may sustain large losses, create user dissatisfaction, or collect exploration data which does not help improve ranking quality. An offline framework is thus necessary to let us decide which policy to apply in a production environment, and how, to ensure a positive outcome. Here, we describe such an offline framework. Using the framework, we study a popular exploration policy - Thompson sampling. We show that there are different ways of implementing it in multi-result ranking systems, each having a different semantic interpretation and leading to different results in terms of sustained click-through-rate (CTR) loss and expected model improvement. In particular, we demonstrate that Thompson sampling can act as an online learner optimizing CTR, which in some cases can lead to an interesting outcome: lift in CTR during exploration. The observation is important for production systems as it suggests that one can get both valuable exploration data to improve ranking performance in the long run, and at the same time increase CTR while exploration lasts.
Our work can be considered a special case of the generic framework @cite_16 @cite_3 @cite_2 @cite_7 . Unlike these works, where the context is assumed to come in the form of additional observations @cite_7 or features, e.g. personalized information @cite_2 , the context in our case is in the rich structure of the problem. For example, most of the works mentioned in this section focus on the single-result case, i.e. there are @math -arms to choose from but ultimately only one result is displayed to the user. We focus on multi-result ranking systems instead. As we have seen, many real-world problems follow the multi-result ranking setting, which requires special handling due to the presence of position and ranking-score bias. More recently @cite_19 has discussed the multi-result setting. The authors describe a non-stochastic procedure for optimizing a loss function which is believed to lead to proper exploration. While the work is theoretically sound, it does not show whether the approach leads to improvement of the underlying model. The method also lacks some of the observed convergence properties of Thompson sampling, which over time apportions more examples to the buckets it finds likely to lead to higher CTR.
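One concrete way to define Thompson-sampling "buckets" in a multi-result ranker is to keep a Beta posterior per (position, candidate) pair, sample a CTR estimate from each, and fill the slate greedily by the sampled values. The bucket granularity, the Beta(1, 1) priors, and the class below are assumptions for illustration, not the paper's exact design:

```python
import random

class SlateThompson:
    """Per-(position, candidate) Bernoulli Thompson sampling for a slate."""

    def __init__(self, n_candidates, n_positions):
        # Beta(1, 1) prior: a = clicks + 1, b = skips + 1
        self.a = [[1.0] * n_candidates for _ in range(n_positions)]
        self.b = [[1.0] * n_candidates for _ in range(n_positions)]
        self.n_candidates = n_candidates
        self.n_positions = n_positions

    def rank(self):
        """Sample a CTR per bucket and fill positions greedily, no repeats."""
        slate, used = [], set()
        for pos in range(self.n_positions):
            sampled = {
                c: random.betavariate(self.a[pos][c], self.b[pos][c])
                for c in range(self.n_candidates) if c not in used
            }
            best = max(sampled, key=sampled.get)
            slate.append(best)
            used.add(best)
        return slate

    def update(self, pos, candidate, clicked):
        if clicked:
            self.a[pos][candidate] += 1
        else:
            self.b[pos][candidate] += 1

random.seed(0)
ts = SlateThompson(n_candidates=4, n_positions=2)
slate = ts.rank()
ts.update(0, slate[0], clicked=True)   # log feedback for the top position
```

Bucketing per position is one way to account for position bias; coarser or finer bucket definitions lead to the different semantic interpretations the abstract refers to.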
{ "cite_N": [ "@cite_7", "@cite_3", "@cite_19", "@cite_2", "@cite_16" ], "mid": [ "1994382727", "2119850747", "2122422466", "2112420033", "2021801581" ], "abstract": [ "A bandit problem with side observations is an extension of the traditional two-armed bandit problem, in which the decision maker has access to side information before deciding which arm to pull. In this paper, essential properties of the side observations that allow achievability results with respect to optimal regret are extracted and formalized. The sufficient conditions for good side information obtained here admit various types of random processes as special cases, including i.i.d. sequences, Markov chains, deterministic periodic sequences, etc. A simple necessary condition for optimal regret is given, providing further insight into the nature of bandit problems with side observations. A game-theoretic approach simplifies the analysis and justifies the viewpoint that the side observation serves as an index specifying different sub-bandit machines.", "We present Epoch-Greedy, an algorithm for multi-armed bandits with observable side information. Epoch-Greedy has the following properties: No knowledge of a time horizon @math is necessary. The regret incurred by Epoch-Greedy is controlled by a sample complexity bound for a hypothesis class. The regret scales as @math or better (sometimes, much better). Here @math is the complexity term in a sample complexity bound for standard supervised learning.", "We consider bandit problems, motivated by applications in online advertising and news story selection, in which the learner must repeatedly select a slate, that is, a subset of size s from K possible actions, and then receives rewards for just the selected actions. The goal is to minimize the regret with respect to total reward of the best slate computed in hindsight. 
We consider unordered and ordered versions of the problem, and give efficient algorithms which have regret O(√T), where the constant depends on the specific nature of the problem. We also consider versions of the problem where we have access to a number of policies which make recommendations for slates in every round, and give algorithms with O(√T) regret for competing with the best such policy as well. We make use of the technique of relative entropy projections combined with the usual multiplicative weight update algorithm to obtain our algorithms.", "Personalized web services strive to adapt their services (advertisements, news articles, etc.) to individual users by making use of both content and user information. Despite a few recent advances, this problem remains challenging for at least two reasons. First, web service is featured with dynamically changing pools of content, rendering traditional collaborative filtering methods inapplicable. Second, the scale of most web services of practical interest calls for solutions that are both fast in learning and computation. In this work, we model personalized recommendation of news articles as a contextual bandit problem, a principled approach in which a learning algorithm sequentially selects articles to serve users based on contextual information about the users and articles, while simultaneously adapting its article-selection strategy based on user-click feedback to maximize total user clicks. The contributions of this work are three-fold. First, we propose a new, general contextual bandit algorithm that is computationally efficient and well motivated from learning theory. Second, we argue that any bandit algorithm can be reliably evaluated offline using previously recorded random traffic. Finally, using this offline evaluation method, we successfully applied our new algorithm to a Yahoo! Front Page Today Module dataset containing over 33 million events. 
Results showed a 12.5 click lift compared to a standard context-free bandit algorithm, and the advantage becomes even greater when data gets more scarce.", "A class of learning tasks is described that combines aspects of learning automation tasks and supervised learning pattern-classification tasks. These tasks are called associative reinforcement learning tasks. An algorithm is presented, called the associative reward-penalty, or AR-P algorithm for which a form of optimal performance is proved. This algorithm simultaneously generalizes a class of stochastic learning automata and a class of supervised learning pattern-classification methods related to the Robbins-Monro stochastic approximation procedure. The relevance of this hybrid algorithm is discussed with respect to the collective behaviour of learning automata and the behaviour of networks of pattern-classifying adaptive elements. Simulation results are presented that illustrate the associative reinforcement learning task and the performance of the AR-P algorithm as compared with that of several existing algorithms." ] }
1504.07662
850914856
We analyze the problem of using Explore-Exploit techniques to improve precision in multi-result ranking systems such as web search, query autocompletion and news recommendation. Adopting an exploration policy directly online, without understanding its impact on the production system, may have unwanted consequences - the system may sustain large losses, create user dissatisfaction, or collect exploration data which does not help improve ranking quality. An offline framework is thus necessary to let us decide which policy to apply in a production environment, and how, to ensure a positive outcome. Here, we describe such an offline framework. Using the framework, we study a popular exploration policy - Thompson sampling. We show that there are different ways of implementing it in multi-result ranking systems, each having a different semantic interpretation and leading to different results in terms of sustained click-through-rate (CTR) loss and expected model improvement. In particular, we demonstrate that Thompson sampling can act as an online learner optimizing CTR, which in some cases can lead to an interesting outcome: lift in CTR during exploration. The observation is important for production systems as it suggests that one can get both valuable exploration data to improve ranking performance in the long run, and at the same time increase CTR while exploration lasts.
The effectiveness of Thompson sampling has been noted previously by @cite_1 @cite_18 @cite_21 and others. Subsequently efforts have focused on understanding better the theoretical properties of the algorithm (e.g., @cite_10 ) leaving aside the important implementation considerations which we raised, namely, that in the context of multi-result ranking there are multiple ways to define the buckets (arms), and that different definitions lead to different semantic interpretation and different results.
{ "cite_N": [ "@cite_10", "@cite_18", "@cite_21", "@cite_1" ], "mid": [ "2158319693", "2158125716", "2108738385", "2162979096" ], "abstract": [ "The question of the optimality of Thompson Sampling for solving the stochastic multi-armed bandit problem had been open since 1933. In this paper we answer it positively for the case of Bernoulli rewards by providing the first finite-time analysis that matches the asymptotic rate given in the Lai and Robbins lower bound for the cumulative regret. The proof is accompanied by a numerical comparison with other optimal policies, experiments that have been lacking in the literature until now for the Bernoulli case.", "A multi-armed bandit is an experiment with the goal of accumulating rewards from a payoff distribution with unknown parameters that are to be learned sequentially. This article describes a heuristic for managing multi-armed bandits called randomized probability matching, which randomly allocates observations to arms according the Bayesian posterior probability that each arm is optimal. Advances in Bayesian computation have made randomized probability matching easy to apply to virtually any payoff distribution. This flexibility frees the experimenter to work with payoff distributions that correspond to certain classical experimental designs that have the potential to outperform methods that are ‘optimal’ in simpler contexts. I summarize the relationships between randomized probability matching and several related heuristics that have been used in the reinforcement learning literature. Copyright © 2010 John Wiley & Sons, Ltd.", "Thompson sampling is one of oldest heuristic to address the exploration exploitation trade-off, but it is surprisingly unpopular in the literature. We present here some empirical results using Thompson sampling on simulated and real data, and show that it is highly competitive. 
And since this heuristic is very easy to implement, we argue that it should be part of the standard baselines to compare against.", "We describe a new Bayesian click-through rate (CTR) prediction algorithm used for Sponsored Search in Microsoft's Bing search engine. The algorithm is based on a probit regression model that maps discrete or real-valued input features to probabilities. It maintains Gaussian beliefs over weights of the model and performs Gaussian online updates derived from approximate message passing. Scalability of the algorithm is ensured through a principled weight pruning procedure and an approximate parallel implementation. We discuss the challenges arising from evaluating and tuning the predictor as part of the complex system of sponsored search where the predictions made by the algorithm decide about future training sample composition. Finally, we show experimental results from the production system and compare to a calibrated Naive Bayes algorithm." ] }
1504.07225
2953021999
Common Representation Learning (CRL), wherein different descriptions (or views) of the data are embedded in a common subspace, is receiving a lot of attention recently. Two popular paradigms here are Canonical Correlation Analysis (CCA) based approaches and Autoencoder (AE) based approaches. CCA based approaches learn a joint representation by maximizing correlation of the views when projected to the common subspace. AE based methods learn a common representation by minimizing the error of reconstructing the two views. Each of these approaches has its own advantages and disadvantages. For example, while CCA based approaches outperform AE based approaches for the task of transfer learning, they are not as scalable as the latter. In this work we propose an AE based approach called Correlational Neural Network (CorrNet), that explicitly maximizes correlation among the views when projected to the common subspace. Through a series of experiments, we demonstrate that the proposed CorrNet is better than the above mentioned approaches with respect to its ability to learn correlated common representations. Further, we employ CorrNet for several cross language tasks and show that the representations learned using CorrNet perform better than the ones learned using other state of the art approaches.
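The correlation term that CorrNet adds to the autoencoder objective can be sketched numerically: the empirical per-dimension correlation between the two views' hidden representations, summed over hidden units, which the loss then subtracts (so training maximizes it alongside minimizing reconstruction error). The toy activations below stand in for learned hidden representations; a real CorrNet optimizes this jointly with the reconstruction losses:

```python
import numpy as np

def correlation_term(H1, H2, eps=1e-8):
    """Sum over hidden dims of the empirical correlation between views.

    H1, H2: (n_samples, n_hidden) hidden activations of the two views.
    """
    H1c = H1 - H1.mean(axis=0)                      # center each dimension
    H2c = H2 - H2.mean(axis=0)
    num = (H1c * H2c).sum(axis=0)
    den = np.sqrt((H1c ** 2).sum(axis=0) * (H2c ** 2).sum(axis=0)) + eps
    return (num / den).sum()   # maximize this, i.e. subtract it from the loss

# Toy activations: the second view is a noisy copy of the first, so the
# two views are highly correlated (values are illustrative assumptions).
rng = np.random.default_rng(0)
H1 = rng.standard_normal((100, 5))
H2 = 0.9 * H1 + 0.1 * rng.standard_normal((100, 5))
corr = correlation_term(H1, H2)
```

With 5 hidden dimensions the term is bounded above by 5; training drives it toward that bound while the reconstruction terms keep the common representation informative about both views.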
Looking more generally at neural networks that learn multilingual representations of words or phrases, we mention the work of which showed that a useful linear mapping between separately trained monolingual skip-gram language models could be learned. They too, however, rely on the specification of pairs of words in the two languages to align. also propose a method for training a neural network to learn useful representations of phrases, in the context of a phrase-based translation model. In this case, phrase-level alignments (usually extracted from word-level alignments) are required. Recently, @cite_6 @cite_7 proposed neural network architectures and a margin-based training objective that, as in this work, does not rely on word alignments. We will briefly discuss this work in the experiments section. A tree-based bilingual autoencoder with a similar objective function is also proposed in .
{ "cite_N": [ "@cite_7", "@cite_6" ], "mid": [ "1562955078", "2949402715" ], "abstract": [ "Distributed representations of meaning are a natural way to encode covariance relationships between words and phrases in NLP. By overcoming data sparsity problems, as well as providing information about semantic relatedness which is not available in discrete representations, distributed representations have proven useful in many NLP tasks. Recent work has shown how compositional semantic representations can successfully be applied to a number of monolingual applications such as sentiment analysis. At the same time, there has been some initial success in work on learning shared word-level representations across languages. We combine these two approaches by proposing a method for learning distributed representations in a multilingual setup. Our model learns to assign similar embeddings to aligned sentences and dissimilar ones to sentence which are not aligned while not requiring word alignments. We show that our representations are semantically informative and apply them to a cross-lingual document classification task where we outperform the previous state of the art. Further, by employing parallel corpora of multiple language pairs we find that our model learns representations that capture semantic relationships across languages for which no parallel data was used.", "We present a novel technique for learning semantic representations, which extends the distributional hypothesis to multilingual data and joint-space embeddings. Our models leverage parallel data and learn to strongly align the embeddings of semantically equivalent sentences, while maintaining sufficient distance between those of dissimilar sentences. The models do not rely on word alignments or any syntactic information and are successfully applied to a number of diverse languages. We extend our approach to learn semantic representations at the document level, too. 
We evaluate these models on two cross-lingual document classification tasks, outperforming the prior state of the art. Through qualitative analysis and the study of pivoting effects we demonstrate that our representations are semantically plausible and can capture semantic relationships across languages without parallel data." ] }
1504.07269
2952878171
Image based reconstruction of urban environments is a challenging problem that deals with the optimization of a large number of variables, and has several sources of error, such as the presence of dynamic objects. Since most large-scale approaches make the assumption of observing static scenes, dynamic objects are relegated to the noise modeling section of such systems. This is an approach of convenience, since the RANSAC-based framework used to compute most multiview geometric quantities for static scenes naturally confines dynamic objects to the class of outlier measurements. However, reconstructing dynamic objects along with the static environment helps us get a complete picture of an urban environment. Such understanding can then be used for important robotic tasks like path planning for autonomous navigation, obstacle tracking and avoidance, and other areas. In this paper, we propose a system for robust SLAM that works in both static and dynamic environments. To overcome the challenge of dynamic objects in the scene, we propose a new model to incorporate semantic constraints into the reconstruction algorithm. While some of these constraints are based on multi-layered dense CRFs trained over appearance as well as motion cues, other proposed constraints can be expressed as additional terms in the bundle adjustment optimization process that iteratively refines the 3D structure and the camera and object motion trajectories. We show results on the challenging KITTI urban dataset for the accuracy of motion segmentation and the reconstruction of the trajectory and shape of moving objects relative to ground truth. We achieve a significant reduction in average relative error for moving object trajectory reconstruction relative to state-of-the-art methods like VISO 2, as well as standard bundle adjustment algorithms.
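The bundle adjustment step that the proposed semantic constraints extend minimizes the total reprojection error over camera poses and 3-D points; below is a minimal sketch of one such residual under a pinhole model (all names are illustrative, not from the paper's system):

```python
import numpy as np

def reprojection_residual(R, t, K, X, x_obs):
    """Reprojection error of one 3-D point X in one camera (R, t, K):
    the image-plane distance between the projection of X and the
    observed pixel x_obs. Bundle adjustment minimizes the sum of
    such residuals over all cameras and points."""
    p = K @ (R @ X + t)      # transform into the camera frame and project
    x_proj = p[:2] / p[2]    # perspective division
    return np.linalg.norm(x_proj - x_obs)
```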
Recent approaches to 3D reconstruction have either used semantic information in a qualitative manner @cite_24 , or have only proposed to reconstruct indoor scenes using such information @cite_0 . Only Yuan et al. @cite_13 propose to add semantic constraints for reconstruction. While our approach is similar to theirs, they use strict constraints for motion segmentation without regard to appearance information, whereas our approach works for more general scenarios as it employs a more powerful inference engine in the CRF.
{ "cite_N": [ "@cite_24", "@cite_0", "@cite_13" ], "mid": [ "", "1985238052", "2145804667" ], "abstract": [ "", "Existing scene understanding datasets contain only a limited set of views of a place, and they lack representations of complete 3D spaces. In this paper, we introduce SUN3D, a large-scale RGB-D video database with camera pose and object labels, capturing the full 3D extent of many places. The tasks that go into constructing such a dataset are difficult in isolation -- hand-labeling videos is painstaking, and structure from motion (SfM) is unreliable for large spaces. But if we combine them together, we make the dataset construction task much easier. First, we introduce an intuitive labeling tool that uses a partial reconstruction to propagate labels from one frame to another. Then we use the object labels to fix errors in the reconstruction. For this, we introduce a generalization of bundle adjustment that incorporates object-to-object correspondences. This algorithm works by constraining points for the same object from different frames to lie inside a fixed-size bounding box, parameterized by its rotation and translation. The SUN3D database, the source code for the generalized bundle adjustment, and the web-based 3D annotation tool are all available at http: sun3d.cs.princeton.edu.", "We present a novel method to obtain a 3D Euclidean reconstruction of both the background and moving objects in a video sequence. We assume that, multiple objects are moving rigidly on a ground plane observed by a moving camera. The video sequence is first segmented into static background and motion blobs by a homography-based motion segmentation method. Then classical \"Structure from Motion\" (SfM) techniques are applied to obtain a Euclidean reconstruction of the static background. The motion blob corresponding to each moving object is treated as if there were a static object observed by a hypothetical moving camera, called a \"virtual camera\". 
This virtual camera shares the same intrinsic parameters with the real camera but moves differently due to object motion. The same SfM techniques are applied to estimate the 3D shape of each moving object and the pose of the virtual camera. We show that the unknown scale of moving objects can be approximately determined by the ground plane, which is a key contribution of this paper. Another key contribution is that we prove that the 3D motion of moving objects can be solved from the virtual camera motion with a linear constraint imposed on the object translation. In our approach, a planar-translation constraint is formulated: \"the 3D instantaneous translation of moving objects must be parallel to the ground plane\". Results on real-world video sequences demonstrate the effectiveness and robustness of our approach." ] }
1504.07385
2951333688
We present a methodology to estimate the number of attendees at events happening in the city from cellular network data. In this work we used anonymized Call Detail Records (CDRs) comprising data on where and when users access the cellular network. Our approach is based on two key ideas: (1) we identify the network cells associated with the event location; (2) we verify the attendance of each user by checking whether (s)he generates CDRs during the event, but not at other times. We evaluate our approach by estimating the number of attendees at a number of events ranging from football matches in stadiums to concerts and festivals in open squares. Comparing our results with the best ground-truth data available, our estimates provide a median error of less than 15% of the actual number of attendees.
The work presented in @cite_16 @cite_9 describes another approach to analyzing people's attendance at special events on the basis of CDRs coming from the AirSage (www.airsage.com) platform. In this work, the authors segment users' traces to identify the places where a user stops. If such a place coincides with the place of the event and the duration of the stop covers at least 70% of the event, the user is classified as attending the event. On this basis they are able to analyze the attendance at specific events. However, they claim: "Estimating the actual number of attendees is still an open problem, considering also that ground truth data to validate models is sometimes absent or very noisy", and do not perform a quantitative analysis in this direction.
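The stop-based rule described above (a user attends if one of their stops coincides with the event place and covers at least 70% of the event duration) could be sketched as follows; the data layout is an assumption for illustration:

```python
def attends_event(stops, event_place, event_start, event_end, min_frac=0.7):
    """Classify a user as attending if any of their stops is at the event
    place and overlaps at least min_frac of the event's duration.
    stops is a list of (place, start, end) tuples; times are in seconds."""
    duration = event_end - event_start
    for place, start, end in stops:
        if place != event_place:
            continue
        # overlap between the stop interval and the event interval
        overlap = min(end, event_end) - max(start, event_start)
        if overlap >= min_frac * duration:
            return True
    return False
```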
{ "cite_N": [ "@cite_9", "@cite_16" ], "mid": [ "1537940195", "2156785885" ], "abstract": [ "This paper deals with the analysis of crowd mobility during special events. We analyze nearly 1 million cell-phone traces and associate their destinations with social events. We show that the origins of people attending an event are strongly correlated to the type of event, with implications in city management, since the knowledge of additive flows can be a critical information on which to take decisions about events management and congestion mitigation.", "A system for measuring audiences of outdoor advertising in specific areas is based on the combination of mobile phone location estimations with Internet listings of social events." ] }
1504.07385
2951333688
We present a methodology to estimate the number of attendees at events happening in the city from cellular network data. In this work we used anonymized Call Detail Records (CDRs) comprising data on where and when users access the cellular network. Our approach is based on two key ideas: (1) we identify the network cells associated with the event location; (2) we verify the attendance of each user by checking whether (s)he generates CDRs during the event, but not at other times. We evaluate our approach by estimating the number of attendees at a number of events ranging from football matches in stadiums to concerts and festivals in open squares. Comparing our results with the best ground-truth data available, our estimates provide a median error of less than 15% of the actual number of attendees.
The work in @cite_13 is very interesting and closer to our approach. They use a Bayesian model to localize the source of CDRs. Then, they compute the probability @math of each user participating in an event as @math, where @math is the fraction of time in which the user is in the event area at the event time and @math is the fraction of time in which the user is in the event area at other times. Finally, they use an outlier detection mechanism (based on a z-score) to classify users as participants in an event. Unfortunately, they use the approach only to identify an event and not to estimate the actual attendance.
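The z-score outlier step mentioned above can be sketched as follows; the threshold value and the upper-tail-only test are assumptions for illustration:

```python
import statistics

def flag_participants(scores, z_thresh=2.0):
    """Flag users whose participation score is an upper outlier under a
    z-score test over the whole population of candidate users."""
    mean = statistics.fmean(scores)
    std = statistics.pstdev(scores)
    if std == 0:
        # all scores identical: no outliers can be detected
        return [False] * len(scores)
    return [(s - mean) / std > z_thresh for s in scores]
```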
{ "cite_N": [ "@cite_13" ], "mid": [ "2007022751" ], "abstract": [ "The unprecedented amount of data from mobile phones creates new possibilities to analyze various aspects of human behavior. Over the last few years, much effort has been devoted to studying the mobility patterns of humans. In this paper we will focus on unusually large gatherings of people, i.e. unusual social events. We introduce the methodology of detecting such social events in massive mobile phone data, based on a Bayesian location inference framework. More specifically, we also develop a framework for deciding who is attending an event. We demonstrate the method on a few examples. Finally, we discuss some possible future approaches for event detection, and some possible analyses of the detected social events." ] }
1504.07385
2951333688
We present a methodology to estimate the number of attendees at events happening in the city from cellular network data. In this work we used anonymized Call Detail Records (CDRs) comprising data on where and when users access the cellular network. Our approach is based on two key ideas: (1) we identify the network cells associated with the event location; (2) we verify the attendance of each user by checking whether (s)he generates CDRs during the event, but not at other times. We evaluate our approach by estimating the number of attendees at a number of events ranging from football matches in stadiums to concerts and festivals in open squares. Comparing our results with the best ground-truth data available, our estimates provide a median error of less than 15% of the actual number of attendees.
A similar approach to identifying events is reported in @cite_1 . In this work, the authors apply an outlier detection mechanism to aggregated cell network data (i.e., Erlang measurements). Events are associated with overcrowded or suddenly underpopulated areas.
{ "cite_N": [ "@cite_1" ], "mid": [ "2095347401" ], "abstract": [ "Can we automatically identify relevant places and events happening in the city from the analysis of mobile network use? In this paper we present a methodology to discover events from human mobility patterns as recorded by mobile network usage. Experiments conducted over an extensive dataset from the main Italian telecom operator show that the proposed approach is effective and can be applied to a number of different scenarios. These results can have a strong impact on a wide range of pervasive applications ranging from location-based services to urban planning." ] }
1504.06658
2949403529
Most previous work in knowledge base (KB) completion has focused on the problem of relation extraction. In this work, we focus on the task of inferring missing entity type instances in a KB, a fundamental task for KB completion that has received little attention. Due to the novelty of this task, we construct a large-scale dataset and design an automatic evaluation methodology. Our knowledge base completion method uses information within the existing KB and external information from Wikipedia. We show that individual methods trained with a global objective that considers unobserved cells from both the entity and the type side give consistently higher quality predictions compared to baseline methods. We also perform manual evaluation on a small subset of the data to verify the effectiveness of our knowledge base completion methods and the correctness of our proposed automatic evaluation method.
Much of previous work @cite_9 @cite_19 in entity type prediction has focused on the task of predicting entity types at the sentence level. develop a method based on matrix factorization for entity type prediction in a KB using information within the KB and New York Times articles. However, the method was still evaluated only at the sentence level. , use the first line of an entity's Wikipedia article to perform named entity recognition on three entity types.
{ "cite_N": [ "@cite_19", "@cite_9" ], "mid": [ "2042792636", "2146772539" ], "abstract": [ "Categorizing entities by their types is useful in many applications, including knowledge base construction, relation extraction and query intent prediction. Fine-grained entity type ontologies are especially valuable, but typically difficult to design because of unavoidable quandaries about level of detail and boundary cases. Automatically classifying entities by type is challenging as well, usually involving hand-labeling data and training a supervised predictor. This paper presents a universal schema approach to fine-grained entity type prediction. The set of types is taken as the union of textual surface patterns (e.g. appositives) and pre-defined types from available databases (e.g. Freebase)---yielding not tens or hundreds of types, but more than ten thousands of entity types, such as financier, criminologist, and musical trio. We robustly learn mutual implication among this large union by learning latent vector embeddings from probabilistic matrix factorization, thus avoiding the need for hand-labeled data. Experimental results demonstrate more than 30 reduction in error versus a traditional classification approach on predicting fine-grained entities types.", "We predict entity type distributions in Web search queries via probabilistic inference in graphical models that capture how entity-bearing queries are generated. We jointly model the interplay between latent user intents that govern queries and unobserved entity types, leveraging observed signals from query formulations and document clicks. We apply the models to resolve entity types in new queries and to assign prior type distributions over an existing knowledge base. Our models are efficiently trained using maximum likelihood estimation over millions of real-world Web search queries. 
We show that modeling user intent significantly improves entity type resolution for head queries over the state of the art, on several metrics, without degradation in tail query performance." ] }
1504.06658
2949403529
Most previous work in knowledge base (KB) completion has focused on the problem of relation extraction. In this work, we focus on the task of inferring missing entity type instances in a KB, a fundamental task for KB completion that has received little attention. Due to the novelty of this task, we construct a large-scale dataset and design an automatic evaluation methodology. Our knowledge base completion method uses information within the existing KB and external information from Wikipedia. We show that individual methods trained with a global objective that considers unobserved cells from both the entity and the type side give consistently higher quality predictions compared to baseline methods. We also perform manual evaluation on a small subset of the data to verify the effectiveness of our knowledge base completion methods and the correctness of our proposed automatic evaluation method.
Much of the previous work in KB completion has focused on the problem of relation extraction. The majority of methods infer missing relation facts using information within the KB @cite_22 @cite_20 @cite_16 @cite_2 , while methods such as use information in text documents. use both information within and outside the KB to complete the KB.
{ "cite_N": [ "@cite_16", "@cite_22", "@cite_20", "@cite_2" ], "mid": [ "2127426251", "205829674", "1756422141", "2127795553" ], "abstract": [ "Knowledge bases are an important resource for question answering and other tasks but often suffer from incompleteness and lack of ability to reason over their discrete entities and relationships. In this paper we introduce an expressive neural tensor network suitable for reasoning over relationships between two entities. Previous work represented entities as either discrete atomic units or with a single entity vector representation. We show that performance can be improved when entities are represented as an average of their constituting word vectors. This allows sharing of statistical strength between, for instance, facts involving the \"Sumatran tiger\" and \"Bengal tiger.\" Lastly, we demonstrate that all models improve when these word vectors are initialized with vectors learned from unsupervised large corpora. We assess the model by considering the problem of predicting additional true relations between entities given a subset of the knowledge base. Our model outperforms previous models and can classify unseen relationships in WordNet and FreeBase with an accuracy of 86.2 and 90.0 , respectively.", "Relational learning is becoming increasingly important in many areas of application. Here, we present a novel approach to relational learning based on the factorization of a three-way tensor. We show that unlike other tensor approaches, our method is able to perform collective learning via the latent components of the model and provide an efficient algorithm to compute the factorization. We substantiate our theoretical considerations regarding the collective learning capabilities of our model by the means of experiments on both a new dataset and a dataset commonly used in entity resolution. 
Furthermore, we show on common benchmark datasets that our approach achieves better or on-par results, if compared to current state-of-the-art relational learning solutions, while it is significantly faster to compute.", "We consider the problem of performing learning and inference in a large scale knowledge base containing imperfect knowledge with incomplete coverage. We show that a soft inference procedure based on a combination of constrained, weighted, random walks through the knowledge base graph can be used to reliably infer new beliefs for the knowledge base. More specifically, we show that the system can learn to infer different target relations by tuning the weights associated with random walks that follow different paths through the graph, using a version of the Path Ranking Algorithm (Lao and Cohen, 2010b). We apply this approach to a knowledge base of approximately 500,000 beliefs extracted imperfectly from the web by NELL, a never-ending language learner (, 2010). This new system improves significantly over NELL's earlier Horn-clause learning and inference method: it obtains nearly double the precision at rank 100, and the new learning method is also applicable to many more inference tasks.", "We consider the problem of embedding entities and relationships of multi-relational data in low-dimensional vector spaces. Our objective is to propose a canonical model which is easy to train, contains a reduced number of parameters and can scale up to very large databases. Hence, we propose TransE, a method which models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities. Despite its simplicity, this assumption proves to be powerful since extensive experiments show that TransE significantly outperforms state-of-the-art methods in link prediction on two knowledge bases. 
Besides, it can be successfully trained on a large scale data set with 1M entities, 25k relationships and more than 17M training samples." ] }
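TransE, described in the last abstract above, models a relation as a translation in the embedding space, so a triple (h, r, t) is scored by the distance between h + r and t; a minimal sketch follows (the embeddings used here are random placeholders, not trained ones):

```python
import numpy as np

def transe_score(h, r, t, norm=1):
    """TransE energy of a triple (h, r, t): ||h + r - t||.
    Lower energy means the triple is considered more plausible."""
    return np.linalg.norm(h + r - t, ord=norm)
```

A relation embedding that exactly translates h onto t (i.e., r = t - h) receives near-zero energy, while an arbitrary relation vector scores higher.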
1504.07082
2064988385
In this paper, we present a decision-level fused local Morphological Pattern Spectrum (PS) and Local Binary Pattern (LBP) approach for efficient shape representation and classification. This method makes use of the Earth Mover's Distance (EMD) as the measure in the feature matching and shape retrieval process. The proposed approach has three major phases: feature extraction, construction of a hybrid spectrum knowledge base, and classification. In the first phase, feature extraction of the shape is done using the pattern spectrum and local binary pattern methods. In the second phase, the histograms of both the pattern spectrum and the local binary pattern are fused and stored in the knowledge base. In the third phase, the comparison and matching of the features, which are represented in the form of histograms, is done using the Earth Mover's Distance (EMD) as the metric. The top-n shapes are retrieved for each query shape. The accuracy is tested by means of the standard bull's-eye score method. The experiments are conducted on publicly available shape datasets like Kimia-99, Kimia-216 and MPEG-7. A comparative study with well-known approaches is also provided to exhibit the retrieval accuracy of the proposed approach.
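For the matching phase described above, the 1-D Earth Mover's Distance between normalized histograms reduces to the L1 distance between their cumulative distributions; below is a small sketch of this, with a decision-level fusion that takes a weighted sum of the per-descriptor EMDs (the equal default weighting is an assumption, not the paper's setting):

```python
import numpy as np

def emd_1d(p, q):
    """1-D Earth Mover's Distance between histograms with unit bin
    spacing: the L1 distance between normalized cumulative sums."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    return np.abs(np.cumsum(p) - np.cumsum(q)).sum()

def fused_distance(ps_a, lbp_a, ps_b, lbp_b, w=0.5):
    """Decision-level fusion: weighted sum of the per-descriptor EMDs
    between the pattern-spectrum and LBP histograms of two shapes."""
    return w * emd_1d(ps_a, ps_b) + (1 - w) * emd_1d(lbp_a, lbp_b)
```

For retrieval, the query's fused distance to every database shape would be computed and the top-n smallest distances returned.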
Skeletons can be organized in the form of attribute-relation graphs (ARGs) for matching purposes. One example of an ARG is the shock graph, introduced by @cite_9 @cite_11 . A shock graph is an abstraction of the skeleton of a shape onto a Directed Acyclic Graph (DAG). The skeleton points are labeled according to the radius function at each point. Shock graphs are constructed using a specialized grammar called the Shock Grammar @cite_10 . In the skeleton, the branch points, end points, and skeleton segments contain both geometrical and topological information. These primitives are referred to as shocks @cite_6 @cite_23 . The concept of the bone graph is an extension of the shock graph. Bone graphs retain the non-ligature structures of the shock graph and are more stable @cite_12 . Several algorithms have been proposed in order to match these skeletal graphs efficiently. @cite_23 proposed a matching approach for node-attributed trees which measures the edit distance between two graphs. Since this method involves complex edit operations, it suffers from high computational complexity.
{ "cite_N": [ "@cite_9", "@cite_6", "@cite_23", "@cite_10", "@cite_12", "@cite_11" ], "mid": [ "2076777422", "2108611942", "2114766304", "2109779101", "2160829366", "2150559991" ], "abstract": [ "Abstract We earlier introduced an approach to categorical shape description based on the singularities (shocks) of curve evolution equations. The approach relates to many techniques in computer vision, such as Blum's grassfire transform, but since the motivation was abstract it is not clear that it should also relate to human perception. We now report that this shock-based computational model can account for recent psychophysical data collected by Burbeck and Pizer . In these experiments subjects were asked to estimate the local centers of stimuli consisting of rectangles with wiggles' (sides modulated by sinusoids). Since the experiments were motivated by their core' model, in which the scale of boundary detail is proportional to object width, we conclude that such properties are also implicit in shock-based shape descriptions. More generally, the results suggest that significance is a structural notion, not an image-based one, and that scale should be defined primarily in terms of relationships between abstract entities, not concrete pixels.", "This paper presents a novel recognition framework which is based on matching shock graphs of 2D shape outlines, where the distance between two shapes is defined to be the cost of the least action path deforming one shape to another. Three key ideas render the implementation of this framework practical. First, the shape space is partitioned by defining an equivalence class on shapes, where two shapes with the same shock graph topology are considered to be equivalent. Second, the space of deformations is discretized by defining all deformations with the same sequence of shock graph transitions as equivalent. Shock transitions are points along the deformation where the shock graph topology changes. 
Third, we employ a graph edit distance algorithm that searches in the space of all possible transition sequences and finds the globally optimal sequence in polynomial time. The effectiveness of the proposed technique in the presence of a variety of visual transformations including occlusion, articulation and deformation of parts, shadow and highlights, viewpoint variation, and boundary perturbations is demonstrated. Indexing into two separate databases of roughly 100 shapes results in accuracy for top three matches and for the next three matches.", "This paper presents a novel framework for the recognition of objects based on their silhouettes. The main idea is to measure the distance between two shapes as the minimum extent of deformation necessary for one shape to match the other. Since the space of deformations is very high-dimensional, three steps are taken to make the search practical: 1) define an equivalence class for shapes based on shock-graph topology, 2) define an equivalence class for deformation paths based on shock-graph transitions, and 3) avoid complexity-increasing deformation paths by moving toward shock-graph degeneracy. Despite these steps, which tremendously reduce the search requirement, there still remain numerous deformation paths to consider. To that end, we employ an edit-distance algorithm for shock graphs that finds the optimal deformation path in polynomial time. The proposed approach gives intuitive correspondences for a variety of shapes and is robust in the presence of a wide range of visual transformations. 
The recognition rates on two distinct databases of 99 and 216 shapes each indicate highly successful within category matches (100 percent in top three matches), which render the framework potentially usable in a range of shape-based recognition applications.", "We confront the theoretical and practical difficulties of computing a representation for two-dimensional shape, based on shocks or singularities that arise as the shape's boundary is deformed. First, we develop subpixel local detectors for finding and classifying shocks. Second, to show that shock patterns are not arbitrary but obey the rules of a grammar, and in addition satisfy specific topological and geometric constraints. Shock hypotheses that violate the grammar or are topologically or geometrically invalid are pruned to enforce global consistency. Survivors are organized into a hierarchical graph of shock groups computed in the reaction-diffusion space, where diffusion plays a role of regularization to determine the significance of each shock group. The shock groups can be functionally related to the object's parts, protrusions and bends, and the representation is suited to recognition: several examples illustrate its stability with rotations, scale changes, occlusion and movement of parts, even at very low resolutions.", "Medial descriptions, such as shock graphs, have gained significant momentum in the shape-based object recognition community due to their invariance to translation, rotation, scale and articulation and their ability to cope with moderate amounts of within-class deformation. While they attempt to decompose a shape into a set of parts, this decomposition can suffer from ligature-induced instability. In particular, the addition of even a small part can have a dramatic impact on the representation in the vicinity of its attachment. We present an algorithm for identifying and representing the ligature structure, and restoring the non-ligature structures that remain. 
This leads to a bone graph, a new medial shape abstraction that captures a more intuitive notion of an objectpsilas parts than a skeleton or a shock graph, and offers improved stability and within-class deformation invariance. We demonstrate these advantages by comparing the use of bone graphs to shock graphs in a set of view-based object recognition and pose estimation trials.", "We have been developing a theory for the generic representation of 2-D shape, where structural descriptions are derived from the shocks (singularities) of a curve evolution process, acting on bounding contours. We now apply the theory to the problem of shape matching. The shocks are organized into a directed, acyclic shock graph, and complexity is managed by attending to the most significant (central) shape components first. The space of all such graphs is highly structured and can be characterized by the rules of a shock graph grammar. The grammar permits a reduction of a shockgraph to a unique rooted shock tree. We introduce a novel tree matching algorithm which finds the best set of corresponding nodes between two shock trees in polynomial time. Using a diverse database of shapes, we demonstrate our system's performance under articulation, occlusion, and changes in viewpoint." ] }
1504.07082
2064988385
In this paper, we present a decision-level fused local Morphological Pattern Spectrum (PS) and Local Binary Pattern (LBP) approach for efficient shape representation and classification. This method makes use of the Earth Mover's Distance (EMD) as the measure in the feature matching and shape retrieval process. The proposed approach has three major phases: feature extraction, construction of a hybrid spectrum knowledge base, and classification. In the first phase, feature extraction of the shape is done using the pattern spectrum and local binary pattern methods. In the second phase, the histograms of both the pattern spectrum and the local binary pattern are fused and stored in the knowledge base. In the third phase, the comparison and matching of the features, which are represented in the form of histograms, is done using the Earth Mover's Distance (EMD) as the metric. The top-n shapes are retrieved for each query shape. The accuracy is tested by means of the standard bull's-eye score method. The experiments are conducted on publicly available shape datasets like Kimia-99, Kimia-216 and MPEG-7. A comparative study with well-known approaches is also provided to exhibit the retrieval accuracy of the proposed approach.
Techniques that perform graph matching by finding node correspondences through the conversion of skeleton graphs to skeleton trees require heuristic rules to select the root node @cite_19 @cite_17 . The major drawback of these methods is that a small change in the shape causes the root to change, which results in a significant change in the topology of the tree representation. Apart from this, the conversion from a graph to a tree structure causes the loss of significant structural information and hence leads to wrong matches @cite_24 . In 2008, Bai and Latecki @cite_24 proposed a method based on the path similarity between the end points of the skeleton graphs. In this method, the geodesic paths between skeleton end points are obtained and matched using the Optimal Subsequence Bijection (OSB) method. Unlike other methods, it does not convert the skeleton into a tree, and it avoids matching the skeleton graphs directly, which is still an open problem. This approach handles shapes that have similar skeleton graphs but different topological structures, as well as visually similar shapes with different skeletons, and works well even in the presence of articulation and contour deformation.
{ "cite_N": [ "@cite_24", "@cite_19", "@cite_17" ], "mid": [ "2111569598", "2030488300", "2118782734" ], "abstract": [ "This paper proposes a novel graph matching algorithm and applies it to shape recognition based on object silhouettes. The main idea is to match skeleton graphs by comparing the geodesic paths between skeleton endpoints. In contrast to typical tree or graph matching methods, we do not consider the topological graph structure. Our approach is motivated by the fact that visually similar skeleton graphs may have completely different topological structures. The proposed comparison of geodesic paths between endpoints of skeleton graphs yields correct matching results in such cases. The skeletons are pruned by contour partitioning with discrete. Curve evolution, which implies that the endpoints of skeleton branches correspond to visual parts of the objects. The experimental results demonstrate that our method is able to produce correct results in the presence of articulations, stretching, and contour deformations.", "Object recognition can be formulated as matching image features to model features. When recognition is exemplar-based, feature correspondence is one-to-one. However, segmentation errors, articulation, scale difference, and within-class deformation can yield image and model features which don't match one-to-one but rather many-to-many. Adopting a graph-based representation of a set of features, we present a matching algorithm that establishes many-to-many correspondences between the nodes of two noisy, vertex-labeled weighted graphs. Our approach reduces the problem of many-to-many matching of weighted graphs to that of many-to-many matching of weighted point sets in a normed vector space. This is accomplished by embedding the initial weighted graphs into a normed vector space with low distortion using a novel embedding technique based on a spherical encoding of graph structure. 
Many-to-many vector correspondences established by the Earth Mover's Distance framework are mapped back into many-to-many correspondences between graph nodes. Empirical evaluation of the algorithm on an extensive set of recognition trials, including a comparison with two competing graph matching approaches, demonstrates both the robustness and efficacy of the overall approach.", "It is well-known that the problem of matching two relational structures can be posed as an equivalent problem of finding a maximal clique in a (derived) \"association graph.\" However, it is not clear how to apply this approach to computer vision problems where the graphs are hierarchically organized, i.e., are trees, since maximal cliques are not constrained to preserve the partial order. We provide a solution to the problem of matching two trees by constructing the association graph using the graph-theoretic concept of connectivity. We prove that, in the new formulation, there is a one-to-one correspondence between maximal cliques and maximal subtree isomorphisms. This allows us to cast the tree matching problem as an indefinite quadratic program using the Motzkin-Straus theorem, and we use \"replicator\" dynamical systems developed in theoretical biology to solve it. Such continuous solutions to discrete problems are attractive because they can motivate analog and biological implementations. The framework is also extended to the matching of attributed trees by using weighted association graphs. We illustrate the power of the approach by matching articulated and deformed shapes described by shock trees." ] }
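The Earth Mover's Distance used for matching in the passage above has a simple closed form in one dimension, where it reduces to the L1 distance between cumulative distributions (the Wasserstein-1 distance). A minimal sketch, not code from any of the cited papers; the function name and toy histograms are illustrative:

```python
import numpy as np

def emd_1d(p, q):
    """Earth Mover's Distance between two normalized 1-D histograms
    with unit ground distance between adjacent bins.

    In 1-D, EMD equals the L1 distance between the cumulative
    distributions of the two (mass-normalized) histograms."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()  # normalize so both histograms carry equal total mass
    q = q / q.sum()
    return float(np.abs(np.cumsum(p - q)).sum())

# Two toy contour histograms: identical shape, one shifted by one bin.
h1 = [0, 4, 2, 0, 0]
h2 = [0, 0, 4, 2, 0]
print(emd_1d(h1, h1))  # 0.0 (identical histograms)
print(emd_1d(h1, h2))  # 1.0 (all mass moved one bin to the right)
```

Shifting a histogram by one bin moves every unit of mass a distance of one, so the EMD equals exactly 1.0, whereas a bin-wise metric like L2 would report a large, shift-sensitive distance.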
1504.07082
2064988385
In this paper, we present a decision level fused local Morphological Pattern Spectrum (PS) and Local Binary Pattern (LBP) approach for an efficient shape representation and classification. This method makes use of the Earth Mover's Distance (EMD) as the measure in the feature matching and shape retrieval process. The proposed approach has three major phases: Feature Extraction, Construction of the hybrid spectrum knowledge base, and Classification. In the first phase, feature extraction of the shape is done using the pattern spectrum and local binary pattern methods. In the second phase, the histograms of both the pattern spectrum and local binary pattern are fused and stored in the knowledge base. In the third phase, the comparison and matching of the features, which are represented in the form of histograms, is done using the Earth Mover's Distance (EMD) as the metric. The top-n shapes are retrieved for each query shape. The accuracy is tested by means of the standard Bull's eye score method. The experiments are conducted on publicly available shape datasets like Kimia-99, Kimia-216 and MPEG-7. A comparative study with well-known approaches is also provided to exhibit the retrieval accuracy of the proposed approach.
@cite_26 proposed a technique called skeletal shape abstraction from examples. A many-to-many correspondence between the object parts is used to match the object skeletons. This method is invariant to shape articulation and appears to be more suitable for object detection in edge images. However, the tree abstractions are not specific enough to prevent hallucination of the target object in clutter. The dissimilarity value between the object trees is computed by performing many-to-many matching between the vertices of the trees. The Earth Mover's Distance (EMD), under the L1 norm, is used as the metric for the matching process.
{ "cite_N": [ "@cite_26" ], "mid": [ "2114549861" ], "abstract": [ "Learning a class prototype from a set of exemplars is an important challenge facing researchers in object categorization. Although the problem is receiving growing interest, most approaches assume a one-to-one correspondence among local features, restricting their ability to learn true abstractions of a shape. In this paper, we present a new technique for learning an abstract shape prototype from a set of exemplars whose features are in many-to-many correspondence. Focusing on the domain of 2D shape, we represent a silhouette as a medial axis graph whose nodes correspond to \"partsrdquo defined by medial branches and whose edges connect adjacent parts. Given a pair of medial axis graphs, we establish a many-to-many correspondence between their nodes to find correspondences among articulating parts. Based on these correspondences, we recover the abstracted medial axis graph along with the positional and radial attributes associated with its nodes. We evaluate the abstracted prototypes in the context of a recognition task." ] }
1504.07082
2064988385
In this paper, we present a decision level fused local Morphological Pattern Spectrum (PS) and Local Binary Pattern (LBP) approach for an efficient shape representation and classification. This method makes use of the Earth Mover's Distance (EMD) as the measure in the feature matching and shape retrieval process. The proposed approach has three major phases: Feature Extraction, Construction of the hybrid spectrum knowledge base, and Classification. In the first phase, feature extraction of the shape is done using the pattern spectrum and local binary pattern methods. In the second phase, the histograms of both the pattern spectrum and local binary pattern are fused and stored in the knowledge base. In the third phase, the comparison and matching of the features, which are represented in the form of histograms, is done using the Earth Mover's Distance (EMD) as the metric. The top-n shapes are retrieved for each query shape. The accuracy is tested by means of the standard Bull's eye score method. The experiments are conducted on publicly available shape datasets like Kimia-99, Kimia-216 and MPEG-7. A comparative study with well-known approaches is also provided to exhibit the retrieval accuracy of the proposed approach.
Shu and Wu @cite_22 proposed the contour points distribution histogram (CPDH) as a shape descriptor. Though its retrieval rate does not match the best state-of-the-art approaches, it is relatively simple and has low time complexity. @cite_3 proposed a shape descriptor which represents the object contour by a fixed number of sample points, each associated with a height function. This method is capable of handling nonlinear deformations of objects. @cite_21 proposed a method which finds the common structure in a cluster of object skeleton graphs. This method outperforms other methods and is also suitable for large datasets. However, its time complexity is high due to agglomerative hierarchical clustering.
{ "cite_N": [ "@cite_21", "@cite_22", "@cite_3" ], "mid": [ "", "1980807298", "2152106040" ], "abstract": [ "", "We suggest a novel shape contour descriptor for shape matching and retrieval. The new descriptor is called contour points distribution histogram (CPDH) which is based on the distribution of points on object contour under polar coordinates. CPDH not only conforms to the human visual perception but also the computational complexity of it is low. Invariant to scale and translation are the intrinsic properties of CPDH and the problem of the invariant to rotation can be partially resolved in the matching process. After the CPDHs of images are generated, the similarity value of the images is obtained by EMD (Earth Mover's Distance) metric. In order to make the EMD method used effectively for the matching of CPDHs, we also develop a new approach to the ground distance used in the EMD metric under polar coordinates. Experimental results of image retrieval demonstrate that the novel descriptor has a strong capability in handling a variety of shapes.", "We propose a novel shape descriptor for matching and recognizing 2D object silhouettes. The contour of each object is represented by a fixed number of sample points. For each sample point, a height function is defined based on the distances of the other sample points to its tangent line. One compact and robust shape descriptor is obtained by smoothing the height functions. The proposed descriptor is not only invariant to geometric transformations such as translation, rotation and scaling but also insensitive to nonlinear deformations due to noise and occlusion. In the matching stage, the Dynamic Programming (DP) algorithm is employed to find out the optimal correspondence between sample points of every two shapes. 
The height function provides an excellent discriminative power, which is demonstrated by excellent retrieval performances on several popular shape benchmarks, including MPEG-7 data set, Kimia's data set and ETH-80 data set." ] }
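The CPDH descriptor discussed above bins contour points in polar coordinates around the shape centroid, which makes it translation- and scale-invariant by construction (rotation is handled later, at matching time). A rough sketch of that idea with hypothetical bin counts; this is an illustration, not the authors' implementation:

```python
import numpy as np

def cpdh(points, n_r=3, n_theta=8):
    """Contour points distribution histogram (sketch).

    Bins contour points into n_r radial x n_theta angular cells in
    polar coordinates centred at the contour centroid. Centring gives
    translation invariance; normalizing by the maximum radius gives
    scale invariance."""
    pts = np.asarray(points, dtype=float)
    centred = pts - pts.mean(axis=0)
    r = np.hypot(centred[:, 0], centred[:, 1])
    theta = np.arctan2(centred[:, 1], centred[:, 0])  # range (-pi, pi]
    r = r / (r.max() + 1e-12)                         # scale-invariant radius
    r_bin = np.minimum((r * n_r).astype(int), n_r - 1)
    t_bin = ((theta + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
    hist = np.zeros((n_r, n_theta))
    for rb, tb in zip(r_bin, t_bin):
        hist[rb, tb] += 1
    return hist / len(pts)

# Corner points of a square; a uniformly scaled copy yields the
# identical histogram, demonstrating scale invariance:
square = [(0, 0), (0, 2), (2, 2), (2, 0)]
big = [(0, 0), (0, 10), (10, 10), (10, 0)]
print(np.allclose(cpdh(square), cpdh(big)))  # True
```

The real CPDH then compares two such histograms with EMD under a polar ground distance, so that mass in nearby angular bins is cheap to move.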
1504.06678
2951208315
The long short-term memory (LSTM) neural network is capable of processing complex sequential information since it utilizes special gating schemes for learning representations from long input sequences. It has the potential to model any sequential time-series data, where the current hidden state has to be considered in the context of the past hidden states. This property makes LSTM an ideal choice to learn the complex dynamics of various actions. Unfortunately, the conventional LSTMs do not consider the impact of spatio-temporal dynamics corresponding to the given salient motion patterns, when they gate the information that ought to be memorized through time. To address this problem, we propose a differential gating scheme for the LSTM neural network, which emphasizes on the change in information gain caused by the salient motions between the successive frames. This change in information gain is quantified by Derivative of States (DoS), and thus the proposed LSTM model is termed as differential Recurrent Neural Network (dRNN). We demonstrate the effectiveness of the proposed model by automatically recognizing actions from the real-world 2D and 3D human action datasets. Our study is one of the first works towards demonstrating the potential of learning complex time-series representations via high-order derivatives of states.
Action recognition has been a long-standing research problem in computer vision and pattern recognition community, which aims to enable a computer to automatically understand the activities performed by people interacting with the surrounding environment and with each other @cite_19 . This is a challenging problem due to the huge intra-class variance of actions performed by different actors at various speeds, in diverse environments (e.g., camera angles, lighting conditions, and cluttered background).
{ "cite_N": [ "@cite_19" ], "mid": [ "2106996050" ], "abstract": [ "Vision-based human action recognition is the process of labeling image sequences with action labels. Robust solutions to this problem have applications in domains such as visual surveillance, video retrieval and human-computer interaction. The task is challenging due to variations in motion performance, recording settings and inter-personal differences. In this survey, we explicitly address these challenges. We provide a detailed overview of current advances in the field. Image representations and the subsequent classification process are discussed separately to focus on the novelties of recent research. Moreover, we discuss limitations of the state of the art and outline promising directions of research." ] }
1504.06678
2951208315
The long short-term memory (LSTM) neural network is capable of processing complex sequential information since it utilizes special gating schemes for learning representations from long input sequences. It has the potential to model any sequential time-series data, where the current hidden state has to be considered in the context of the past hidden states. This property makes LSTM an ideal choice to learn the complex dynamics of various actions. Unfortunately, the conventional LSTMs do not consider the impact of spatio-temporal dynamics corresponding to the given salient motion patterns, when they gate the information that ought to be memorized through time. To address this problem, we propose a differential gating scheme for the LSTM neural network, which emphasizes on the change in information gain caused by the salient motions between the successive frames. This change in information gain is quantified by Derivative of States (DoS), and thus the proposed LSTM model is termed as differential Recurrent Neural Network (dRNN). We demonstrate the effectiveness of the proposed model by automatically recognizing actions from the real-world 2D and 3D human action datasets. Our study is one of the first works towards demonstrating the potential of learning complex time-series representations via high-order derivatives of states.
To address this problem, many robust spatio-temporal representations have been constructed. For example, HOG3D @cite_23 uses the histogram of 3D gradient orientations to represent the motion structure over the frame sequences; 3D-SIFT @cite_2 extends the popular SIFT descriptor to characterize the scale-invariant spatio-temporal structure for 3D video volume; actionlet ensemble @cite_9 utilizes a robust approach to model the discriminative features from 3D positions of the tracked joints captured by depth cameras.
{ "cite_N": [ "@cite_9", "@cite_23", "@cite_2" ], "mid": [ "2143267104", "2024868105", "2108333036" ], "abstract": [ "Human action recognition is an important yet challenging task. The recently developed commodity depth sensors open up new possibilities of dealing with this problem but also present some unique challenges. The depth maps captured by the depth cameras are very noisy and the 3D positions of the tracked joints may be completely wrong if serious occlusions occur, which increases the intra-class variations in the actions. In this paper, an actionlet ensemble model is learnt to represent each action and to capture the intra-class variance. In addition, novel features that are suitable for depth data are proposed. They are robust to noise, invariant to translational and temporal misalignments, and capable of characterizing both the human motion and the human-object interactions. The proposed approach is evaluated on two challenging action recognition datasets captured by commodity depth cameras, and another dataset captured by a MoCap system. The experimental evaluations show that the proposed approach achieves superior performance to the state of the art algorithms.", "In this work, we present a novel local descriptor for video sequences. The proposed descriptor is based on histograms of oriented 3D spatio-temporal gradients. Our contribution is four-fold. (i) To compute 3D gradients for arbitrary scales, we develop a memory-efficient algorithm based on integral videos. (ii) We propose a generic 3D orientation quantization which is based on regular polyhedrons. (iii) We perform an in-depth evaluation of all descriptor parameters and optimize them for action recognition. (iv) We apply our descriptor to various action datasets (KTH, Weizmann, Hollywood) and show that we outperform the state-of-the-art.", "In this paper we introduce a 3-dimensional (3D) SIFT descriptor for video or 3D imagery such as MRI data. 
We also show how this new descriptor is able to better represent the 3D nature of video data in the application of action recognition. This paper will show how 3D SIFT is able to outperform previously used description methods in an elegant and efficient manner. We use a bag of words approach to represent videos, and present a method to discover relationships between spatio-temporal words in order to better describe the video data." ] }
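HOG3D, mentioned above, builds histograms of 3-D spatio-temporal gradient orientations over video volumes. A toy approximation of that idea, purely for illustration: the real descriptor quantizes gradients onto the faces of a regular polyhedron and uses integral videos for speed, whereas here we simply bin the azimuth angle of each gradient, weighted by its magnitude.

```python
import numpy as np

def grad_orientation_hist_3d(volume, n_bins=8):
    """Toy histogram of 3-D spatio-temporal gradient orientations.

    volume has shape (T, H, W); np.gradient returns the temporal,
    vertical, and horizontal derivatives in that axis order."""
    gt, gy, gx = np.gradient(volume.astype(float))
    mag = np.sqrt(gx**2 + gy**2 + gt**2)
    azimuth = np.arctan2(gy, gx)  # in-plane orientation, range (-pi, pi]
    bins = ((azimuth + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.zeros(n_bins)
    for b, m in zip(bins.ravel(), mag.ravel()):
        hist[b] += m  # magnitude-weighted vote
    return hist / (hist.sum() + 1e-12)

# A vertical edge drifting rightwards over time: all gradient energy
# lies along +x (azimuth 0), so one orientation bin dominates.
video = np.zeros((4, 8, 8))
for t in range(4):
    video[t, :, t + 2:] = 1.0
h = grad_orientation_hist_3d(video)
print(h.argmax())  # 4 (the bin containing azimuth 0)
```

A full HOG3D implementation would compute such histograms per spatio-temporal cell and concatenate them, but the quantization step above is the core of the descriptor.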
1504.06678
2951208315
The long short-term memory (LSTM) neural network is capable of processing complex sequential information since it utilizes special gating schemes for learning representations from long input sequences. It has the potential to model any sequential time-series data, where the current hidden state has to be considered in the context of the past hidden states. This property makes LSTM an ideal choice to learn the complex dynamics of various actions. Unfortunately, the conventional LSTMs do not consider the impact of spatio-temporal dynamics corresponding to the given salient motion patterns, when they gate the information that ought to be memorized through time. To address this problem, we propose a differential gating scheme for the LSTM neural network, which emphasizes on the change in information gain caused by the salient motions between the successive frames. This change in information gain is quantified by Derivative of States (DoS), and thus the proposed LSTM model is termed as differential Recurrent Neural Network (dRNN). We demonstrate the effectiveness of the proposed model by automatically recognizing actions from the real-world 2D and 3D human action datasets. Our study is one of the first works towards demonstrating the potential of learning complex time-series representations via high-order derivatives of states.
Meanwhile, existing approaches combine deep neural networks with spatio-temporal descriptors, achieving competitive performance. For example, in @cite_20 , an LSTM model takes as input a sequence of Harris3D and 3DCNN descriptors extracted from each frame, and its result on the KTH dataset has shown state-of-the-art performance.
{ "cite_N": [ "@cite_20" ], "mid": [ "28988658" ], "abstract": [ "We propose in this paper a fully automated deep model, which learns to classify human actions without using any prior knowledge. The first step of our scheme, based on the extension of Convolutional Neural Networks to 3D, automatically learns spatio-temporal features. A Recurrent Neural Network is then trained to classify each sequence considering the temporal evolution of the learned features for each timestep. Experimental results on the KTH dataset show that the proposed approach outperforms existing deep models, and gives comparable results with the best related works." ] }
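The dRNN abstract above describes feeding the Derivative of States (DoS), the temporal difference of the LSTM's internal state, into the gates so that gating reacts to the change in information gain between frames. A minimal numpy sketch of one such step; the single first-order DoS term, weight shapes, and initialization are simplifying assumptions (the paper also considers higher-order derivatives):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def drnn_step(x, h_prev, s_prev, s_prev2, W, U, V, b):
    """One step of a differential-RNN-style LSTM cell (sketch).

    Besides the input x and hidden state h_prev, each gate also sees
    dos = s_prev - s_prev2, a first-order Derivative of States, so the
    gates are modulated by how fast the internal state is changing."""
    dos = s_prev - s_prev2                    # 1st-order Derivative of States
    i = sigmoid(W['i'] @ x + U['i'] @ h_prev + V['i'] @ dos + b['i'])
    f = sigmoid(W['f'] @ x + U['f'] @ h_prev + V['f'] @ dos + b['f'])
    o = sigmoid(W['o'] @ x + U['o'] @ h_prev + V['o'] @ dos + b['o'])
    g = np.tanh(W['g'] @ x + U['g'] @ h_prev + b['g'])
    s = f * s_prev + i * g                    # new internal (cell) state
    h = o * np.tanh(s)                        # new hidden state
    return h, s

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = {k: rng.normal(size=(n_hid, n_in)) * 0.1 for k in 'ifog'}
U = {k: rng.normal(size=(n_hid, n_hid)) * 0.1 for k in 'ifog'}
V = {k: rng.normal(size=(n_hid, n_hid)) * 0.1 for k in 'ifo'}
b = {k: np.zeros(n_hid) for k in 'ifog'}

h = s = s_old = np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):          # a 5-frame toy sequence
    h, s_new = drnn_step(x, h, s, s_old, W, U, V, b)
    s, s_old = s_new, s
print(h.shape)  # (4,)
```

In a conventional LSTM the `V['…'] @ dos` terms are absent; everything else above is the standard gating arithmetic.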
1504.06741
795219308
Software engineers who collaborate to develop software in teams often have to manually merge changes they made to a module (e.g. a class), because the change conflicts with one that has just been made by another engineer to the same or another module (e.g. a supplier class). This is due to the fact that engineers edit code separately, and loosely coordinate their work via a source control or a software configuration management system (SCM). This work proposes to eliminate almost all the need to manually merge a recent change, by proposing a Collaborative Real Time Coding approach. In this approach, valid changes to the code are seen by others in real time, but intermediate changes (that cause the code not to compile) result in blocking other engineers from making changes related to the entity (e.g. method) being modified, while allowing them to work on most of the system. The subject of collaborative real time editing systems has been studied for the past 20 years. Research in this field has mostly concentrated on collaborative textual and graphical editing. In this work we address the challenges involved in designing a collaborative real time coding system, as well as present the major differences when compared to collaborative editing of plain text. We then present a prototype plug-in for the Eclipse Integrated Development Environment (IDE) that allows for collaborative coding to take place. Index Terms—Collaborative Real Time Coding; Integrated Development Environment; Software Configuration Management Tools; This paper was written in 2011.
Ellis and Gibbs @cite_6 were the first to suggest the operational transformation (OT) framework for concurrency management in distributed groupware systems. The suggested framework addressed the difficulties entailed in having a real-time, highly reactive, concurrent editing environment for plain text. The basic idea of OT is to transform arriving operations against independent @cite_8 operations from the log (where all previously executed operations are saved) in such a manner that the execution of the same set of properly transformed independent operations in different orders produces identical document states, ensuring convergence @cite_12 . A major issue with the correctness of the algorithm presented was what is now commonly referred to as the dOPT puzzle @cite_12 . The dOPT puzzle scenario describes a situation where clients would diverge by more than one step in their state, breaking the correctness of the algorithm.
{ "cite_N": [ "@cite_12", "@cite_6", "@cite_8" ], "mid": [ "1996958808", "2145220267", "2151943351" ], "abstract": [ "Real-time group editors allow a group of users to view and edit the same document at the same time from geographically dispersed sites connected by communication networks. Consistency maintenance is one of the most significant challenges in the design and implementation of these types of systems. Research on real-time group editors in the past decade has invented an innovative technique for consistency maintenance, called operational transformation. This paper presents an integrative review of the evolution of operational transformation techniques, with the goal of identifying the major issues, algorithms, achievements, and remaining challenges. In addition, this paper contributes a new optimized generic operational transformation control algorithm. Keywords: consistency maintenance, operational transformation, convergence, causality preservation, intention preservation, group editors, groupware, distributed computing.", "Groupware systems are computer-based systems that support two or more users engaged in a common task, and that provide an interface to a shared environment. These systems frequently require fine-granularity sharing of data and fast response times. This paper distinguishes real-time groupware systems from other multi-user systems and discusses their concurrency control requirements. An algorithm for concurrency control in real-time groupware systems is then presented. The advantages of this algorithm are its simplicity of use and its responsiveness: users can operate directly on the data without obtaining locks. The algorithm must know some semantics of the operations. However the algorithm's overall structure is independent of the semantic information, allowing the algorithm to be adapted to many situations. An example application of the algorithm to group text editing is given, along with a sketch of its proof of correctness in this particular case. 
We note that the behavior desired in many of these systems is non-serializable.", "Real-time cooperative editing systems allow multiple users to view and edit the same text graphic image multimedia document at the same time for multiple sites connected by communication networks. Consistency maintenance is one of the most significant challenges in designing and implementing real-time cooperative editing systems. In this article, a consistency model, with properties of convergence, causality preservation, and intention preservation, is proposed as a framework for consistency maintenance in real-time cooperative editing systems. Moreover, an integrated set of schemes and algorithms, which support the proposed consistency model, are devised and discussed in detail. In particular, we have contributed (1) a novel generic operation transformation control algorithm for achieving intention preservation in combination with schemes for achieving convergence and causality preservation and (2) a pair of reversible inclusion and exclusion transformation algorithms for stringwise operations for text editing. An Internet-based prototype system has been built to test the feasibility of the proposed schemes and algorithms" ] }
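The convergence property that OT guarantees, i.e. that properly transformed independent operations executed in different orders yield identical document states, can be illustrated with the textbook insert-insert transformation. This is a sketch of the core idea only; real algorithms such as dOPT and its successors also handle deletes, operation logs, and state vectors:

```python
def transform(op_a, op_b):
    """Inclusion transformation of op_a against a concurrent op_b,
    for character insertions only. op = (position, char, site_id);
    site ids break ties when both sites insert at the same position."""
    pa, ca, sa = op_a
    pb, cb, sb = op_b
    if pa < pb or (pa == pb and sa < sb):
        return op_a                   # op_a is unaffected by op_b
    return (pa + 1, ca, sa)           # shift right past b's inserted char

def apply_insert(doc, op):
    p, c, _ = op
    return doc[:p] + c + doc[p:]

# Two sites concurrently insert into "abc". Each site applies its own
# operation first, then the transformed remote operation; both orders
# converge to the same final document state.
doc = "abc"
op1 = (1, "X", 1)                     # site 1 inserts X at position 1
op2 = (2, "Y", 2)                     # site 2 inserts Y at position 2
site1 = apply_insert(apply_insert(doc, op1), transform(op2, op1))
site2 = apply_insert(apply_insert(doc, op2), transform(op1, op2))
print(site1, site2)  # aXbYc aXbYc
```

Without the transformation step, site 1 would apply op2 at its original position 2 and obtain "aXYbc" while site 2 obtained "aXbYc", i.e. the replicas would diverge.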
1504.06741
795219308
Software engineers who collaborate to develop software in teams often have to manually merge changes they made to a module (e.g. a class), because the change conflicts with one that has just been made by another engineer to the same or another module (e.g. a supplier class). This is due to the fact that engineers edit code separately, and loosely coordinate their work via a source control or a software configuration management system (SCM). This work proposes to eliminate almost all the need to manually merge a recent change, by proposing a Collaborative Real Time Coding approach. In this approach, valid changes to the code are seen by others in real time, but intermediate changes (that cause the code not to compile) result in blocking other engineers from making changes related to the entity (e.g. method) being modified, while allowing them to work on most of the system. The subject of collaborative real time editing systems has been studied for the past 20 years. Research in this field has mostly concentrated on collaborative textual and graphical editing. In this work we address the challenges involved in designing a collaborative real time coding system, as well as present the major differences when compared to collaborative editing of plain text. We then present a prototype plug-in for the Eclipse Integrated Development Environment (IDE) that allows for collaborative coding to take place. Index Terms—Collaborative Real Time Coding; Integrated Development Environment; Software Configuration Management Tools; This paper was written in 2011.
Unlike most other collaborative editing systems, Jupiter @cite_9 implemented a centralized architecture. A central server was responsible for mediating between any two clients, so that at any given time only two-way communication could take place (between the central server and some client). This server also held the responsibility of propagating changes to all other clients. When the central server received an operation request, the operation was transformed, if necessary, and applied to the local document state, then propagated to the other clients. This centralized manner of communication inherently relieved Jupiter of both the dOPT puzzle and precedence issues: the fact that at any given time only a two-way synchronization took place alleviated the issue of preserving precedence between operations.
{ "cite_N": [ "@cite_9" ], "mid": [ "1989814541" ], "abstract": [ "Jupiter is a multi-user, multimedia virtual world intended to support long-term remote collaboration. In particular, it supports shared documents, shared tools, and, optionally, live audio video communication. Users who program can, with only moderate effort, create new kinds of shared tools using a high-level windowing toolkit; the toolkit provides transparent support for fully-shared widgets by default. This paper describes the low-level communications facilities used by the implementation of the toolkit to enable that support. The state of the Jupiter virtual world, including application code written by users, is stored and (for code) executed in a central server shared by all of the users. This architecture, along with our desire to support multiple client platforms and high-latency networks, led us to a design in which the server and clients communicate in terms of high-level widgets and user events. As in other groupware toolkits, we need a concurrency-control algorithm to maintain common values for all instances of the shared widgets. Our algorithm is derived from a fully distributed, optimistic algorithm developed by Ellis and Gibbs [12]. Jupiter’s centralized architecture allows us to substantially simplify their algorithm. This combination of a centralized architecture and optimistic concurrency control gives us both easy serializability of concurrent update streams and fast response to user actions. The algorithm relies on operation transformations to fix up conflicting messages. The best transformations are not always obvious, though, and several conflicting concerns are involved in choosing them. We present our experience with choosing transformations for our widget set, which includes a text editor, a graphical drawing widget, and a number of simpler widgets such as buttons and sliders." ] }
1504.06993
2950652457
Lossy compression introduces complex compression artifacts, particularly blocking artifacts, ringing effects and blurring. Existing algorithms either focus on removing blocking artifacts and produce blurred output, or restore sharpened images that are accompanied by ringing effects. Inspired by deep convolutional networks (DCN) for super-resolution, we formulate a compact and efficient network for seamless attenuation of different compression artifacts. We also demonstrate that a deeper model can be effectively trained with the features learned in a shallow network. Following a similar "easy to hard" idea, we systematically investigate several practical transfer settings and show the effectiveness of transfer learning in low-level vision problems. Our method shows performance superior to the state of the art both on benchmark datasets and in a real-world use case (i.e. Twitter). In addition, we show that our method can be applied as pre-processing to facilitate other low-level vision routines when they take compressed images as input.
Super-Resolution Convolutional Neural Network (SRCNN) @cite_20 is closely related to our work. In that study, the independent steps of the sparse-coding-based method are formulated as different convolutional layers and optimized in a unified network. It shows the potential of deep models in low-level vision problems like super-resolution. However, compression differs from super-resolution in that it introduces several different kinds of artifacts. Designing a deep model for compression restoration requires a deep understanding of these artifacts. We show that directly applying the SRCNN architecture to compression restoration results in undesired noisy patterns in the reconstructed image.
{ "cite_N": [ "@cite_20" ], "mid": [ "54257720" ], "abstract": [ "We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage." ] }
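The SRCNN architecture cited above is three stacked convolutions (patch extraction, non-linear mapping, reconstruction). A minimal numpy sketch of that pipeline follows; the filter counts are shrunk from the paper's 64/32 feature maps to 8/4 to keep the loops fast, and the weights are random, so this illustrates only the architecture and its shape arithmetic, not restoration quality:

```python
import numpy as np

def conv2d(x, w, b):
    """Valid 2D convolution. x: (C_in, H, W), w: (C_out, C_in, k, k), b: (C_out,)."""
    c_out, _, k, _ = w.shape
    h, wd = x.shape[1] - k + 1, x.shape[2] - k + 1
    out = np.zeros((c_out, h, wd))
    for o in range(c_out):
        for i in range(h):
            for j in range(wd):
                out[o, i, j] = np.sum(x[:, i:i+k, j:j+k] * w[o]) + b[o]
    return out

def srcnn_like(img, rng):
    # Layer 1: 9x9 patch extraction -> 8 feature maps (the paper uses 64)
    w1, b1 = rng.normal(0, 0.01, (8, 1, 9, 9)), np.zeros(8)
    # Layer 2: 1x1 non-linear mapping -> 4 maps (the paper uses 32)
    w2, b2 = rng.normal(0, 0.01, (4, 8, 1, 1)), np.zeros(4)
    # Layer 3: 5x5 reconstruction -> 1 output channel
    w3, b3 = rng.normal(0, 0.01, (1, 4, 5, 5)), np.zeros(1)
    h = np.maximum(conv2d(img, w1, b1), 0)   # ReLU
    h = np.maximum(conv2d(h, w2, b2), 0)
    return conv2d(h, w3, b3)

rng = np.random.default_rng(0)
y = srcnn_like(rng.random((1, 33, 33)), rng)
print(y.shape)  # (1, 21, 21): 33-9+1 = 25, unchanged by 1x1, then 25-5+1 = 21
```

The 33x33 input matches the sub-image size used in SRCNN-style training, which is why the output shrinks to 21x21 under valid convolution.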
1504.06993
2950652457
Lossy compression introduces complex compression artifacts, particularly blocking artifacts, ringing effects and blurring. Existing algorithms either focus on removing blocking artifacts and produce blurred output, or restore sharpened images that are accompanied by ringing effects. Inspired by the deep convolutional networks (DCN) on super-resolution, we formulate a compact and efficient network for seamless attenuation of different compression artifacts. We also demonstrate that a deeper model can be effectively trained with the features learned in a shallow network. Following a similar "easy to hard" idea, we systematically investigate several practical transfer settings and show the effectiveness of transfer learning in low-level vision problems. Our method shows performance superior to the state of the art both on benchmark datasets and in a real-world use case (i.e. Twitter). In addition, we show that our method can be applied as pre-processing to facilitate other low-level vision routines when they take compressed images as input.
Transfer learning in deep neural networks has become popular since the success of deep learning in image classification @cite_4 . The features learned from ImageNet show good generalization ability @cite_15 and have become a powerful tool for several high-level vision problems, such as Pascal VOC image classification @cite_26 and object detection @cite_8 @cite_27 . Yosinski et al. @cite_31 have also tried to quantify the degree to which a particular layer is general or specific. Overall, transfer learning has been systematically investigated in high-level vision problems, but not in low-level vision tasks. In this study, we explore several transfer settings for compression artifacts reduction and show the effectiveness of transfer learning in low-level vision problems.
{ "cite_N": [ "@cite_31", "@cite_26", "@cite_4", "@cite_8", "@cite_27", "@cite_15" ], "mid": [ "2949667497", "2161381512", "", "2102605133", "1487583988", "2952186574" ], "abstract": [ "Many deep neural networks trained on natural images exhibit a curious phenomenon in common: on the first layer they learn features similar to Gabor filters and color blobs. Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks. Features must eventually transition from general to specific by the last layer of the network, but this transition has not been studied extensively. In this paper we experimentally quantify the generality versus specificity of neurons in each layer of a deep convolutional neural network and report a few surprising results. Transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task, which was expected, and (2) optimization difficulties related to splitting networks between co-adapted neurons, which was not expected. In an example network trained on ImageNet, we demonstrate that either of these two issues may dominate, depending on whether features are transferred from the bottom, middle, or top of the network. We also document that the transferability of features decreases as the distance between the base task and target task increases, but that transferring features even from distant tasks can be better than using random features. A final surprising result is that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset.", "Convolutional neural networks (CNN) have recently shown outstanding image classification performance in the large- scale visual recognition challenge (ILSVRC2012). 
The success of CNNs is attributed to their ability to learn rich mid-level image representations as opposed to hand-designed low-level features used in other image classification methods. Learning CNNs, however, amounts to estimating millions of parameters and requires a very large number of annotated image samples. This property currently prevents application of CNNs to problems with limited training data. In this work we show how image representations learned with CNNs on large-scale annotated datasets can be efficiently transferred to other visual recognition tasks with limited amount of training data. We design a method to reuse layers trained on the ImageNet dataset to compute mid-level image representation for images in the PASCAL VOC dataset. We show that despite differences in image statistics and tasks in the two datasets, the transferred representation leads to significantly improved results for object and action classification, outperforming the current state of the art on Pascal VOC 2007 and 2012 datasets. We also show promising results for object and action localization.", "", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. 
Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.", "We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat.", "Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also perform an ablation study to discover the performance contribution from different model layers. This enables us to find model architectures that outperform Krizhevsky al on the ImageNet classification benchmark. 
We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets." ] }
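The transfer settings discussed above reduce to one core recipe: keep the lower-layer weights learned on the source task and fit only new upper layers on the target task. A minimal numpy sketch of that recipe under toy assumptions (a random matrix standing in for pretrained first-layer weights, and a synthetic binary target task):

```python
import numpy as np

def train_head(X, y, W1, steps=300, lr=0.5):
    """Transfer-learning sketch: keep (freeze) first-layer weights W1 taken
    from a 'source' model and fit only a new logistic head on target data."""
    H = np.maximum(X @ W1, 0)                # frozen ReLU features
    w2 = np.zeros(H.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(H @ w2)))  # sigmoid head
        w2 -= lr * H.T @ (p - y) / len(y)    # logistic-loss gradient step
    return w2

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
W1 = rng.normal(size=(5, 8))                 # stand-in for pretrained weights
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # toy target task
w2 = train_head(X, y, W1)
acc = np.mean(((np.maximum(X @ W1, 0) @ w2) > 0) == y)
```

Because only `w2` is updated, the target task needs far fewer labeled examples than training the whole network, which is the point the surveyed work makes at scale.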
1504.07066
2950451393
Consider the problem in which n jobs that are classified into k types are to be scheduled on m identical machines without preemption. A machine requires a proper setup taking s time units before processing jobs of a given type. The objective is to minimize the makespan of the resulting schedule. We design and analyze an approximation algorithm that runs in time polynomial in n, m and k and computes a solution with an approximation factor that can be made arbitrarily close to 3/2.
The scheduling problem considered in this paper is a generalization of the classical problem of scheduling jobs on identical machines without preemption and in which setup times are equal to @math . This problem has been extensively studied in theoretical research and PTASs with runtimes that are linear in the number @math of jobs are known for objective functions such as minimizing (maximizing) the maximum (minimum) completion time or sum of completion times @cite_7 @cite_10 . If the number @math of machines is constant, even FPTASs exist @cite_2 .
{ "cite_N": [ "@cite_10", "@cite_7", "@cite_2" ], "mid": [ "2093979815", "1977276352", "1992734168" ], "abstract": [ "The problem of scheduling a set of n jobs on m identical machines so as to minimize the makespan time is perhaps the most well-studied problem in the theory of approximation algorithms for NP-hard optimization problems. In this paper the strongest possible type of result for this problem, a polynomial approximation scheme, is presented. More precisely, for each e, an algorithm that runs in time O (( n e) 1 e 2 ) and has relative error at most e is given. In addition, more practical algorithms for e = 1 5 + 2 - k and e = 1 6 + 2 - k , which have running times O ( n ( k + log n )) and O ( n ( km 4 + log n )) are presented. The techniques of analysis used in proving these results are extremely simple, especially in comparison with the baroque weighting techniques used previously. The scheme is based on a new approach to constructing approximation algorithms, which is called dual approximation algorithms, where the aim is to find superoptimal, but infeasible, solutions, and the performance is measured by the degree of infeasibility allowed. This notion should find wide applicability in its own right and should be considered for any optimization problem where traditional approximation algorithms have been particularly elusive.", "We discuss scheduling problems with m identical machines and n jobs where each job has to be assigned to some machine. The goal is to optimize objective functions that solely depend on the machine completion times. As a main result, we identify some conditions on the objective function, under which the resulting scheduling problems possess a polynomial time approximation scheme. 
Our result contains, generalizes, improves, simplifies, and unifies many other results in this area in a natural way.", "Exact and approximate algorithms are presented for scheduling independent tasks in a multiprocessor environment in which the processors have different speeds. Dynamic programming type algorithms are presented which minimize finish time and weighted mean flow time on two processors. The generalization to m processors is direct. These algorithms have a worst-case complexity which is exponential in the number of tasks. Therefore approximation algorithms of low polynomial complexity are also obtained for the above problems. These algorithms are guaranteed to obtain solutions that are close to the optimal. For the case of minimizing mean flow time on m -processors an algorithm is given whose complexity is O( n log mn )." ] }
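The classical setup-free special case cited above (makespan on identical machines) is the baseline the PTASs refine. A minimal sketch of Graham's LPT rule, the classic (4/3 - 1/(3m))-approximation for that baseline:

```python
import heapq

def lpt_makespan(jobs, m):
    """Graham's Longest-Processing-Time rule on m identical machines:
    sort jobs by decreasing size and always assign the next job to the
    least-loaded machine. A (4/3 - 1/(3m))-approximation for makespan."""
    loads = [0.0] * m
    heapq.heapify(loads)
    for p in sorted(jobs, reverse=True):
        least = heapq.heappop(loads)   # currently least-loaded machine
        heapq.heappush(loads, least + p)
    return max(loads)

print(lpt_makespan([3, 3, 2, 2, 2], 2))  # 7.0 (the optimum here is 6)
```

The instance in the demo is a standard tight-ish example: LPT pairs the two 3s onto different machines and ends at 7, while the optimum puts {3,3} on one machine and {2,2,2} on the other.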
1504.07066
2950451393
Consider the problem in which n jobs that are classified into k types are to be scheduled on m identical machines without preemption. A machine requires a proper setup taking s time units before processing jobs of a given type. The objective is to minimize the makespan of the resulting schedule. We design and analyze an approximation algorithm that runs in time polynomial in n, m and k and computes a solution with an approximation factor that can be made arbitrarily close to 3/2.
The dual problem of our scheduling problem was studied by Xavier and Miyazawa and is known as class-constrained shelf bin packing. For a constant number of classes, an asymptotic PTAS is known for this problem @cite_8 as well as a dual approximation scheme @cite_0 , i.e. a PTAS for our problem if @math is constant.
{ "cite_N": [ "@cite_0", "@cite_8" ], "mid": [ "2160306680", "2158165134" ], "abstract": [ "In this paper we present a dual approximation scheme for the class constrained shelf bin packing problem. In this problem, we are given bins of capacity 1 , and n items of Q different classes, each item e with class c e and size s e . The problem is to pack the items into bins, such that two items of different classes packed in a same bin must be in different shelves. Items in a same shelf are packed consecutively. Moreover, items in consecutive shelves must be separated by shelf divisors of size d . In a shelf bin packing problem, we have to obtain a shelf packing such that the total size of items and shelf divisors in any bin is at most 1. A dual approximation scheme must obtain a shelf packing of all items into N bins, such that, the total size of all items and shelf divisors packed in any bin is at most 1 + e for a given e > 0 and N is the number of bins used in an optimum shelf bin packing problem. Shelf divisors are used to avoid contact between items of different classes and can hold a set of items until a maximum given weight. We also present a dual approximation scheme for the class constrained bin packing problem. In this problem, there is no use of shelf divisors, but each bin uses at most C different classes.", "Given bins of size B, non-negative values d and @D, and a list L of items, each item [email protected]?L with size s\"e and class c\"e, we define a shelf as a subset of items packed inside a bin with total item sizes at most @D such that all items in this shelf have the same class. Two subsequent shelves must be separated by a shelf division of size d. The size of a shelf is the total size of its items plus the size of the shelf division. The class constrained shelf bin packing problem (CCSBP) is to pack the items of L into the minimum number of bins, such that the items are divided into shelves and the total size of the shelves in a bin is at most B. 
We present hybrid algorithms based on the First Fit (Decreasing) and Best Fit (Decreasing) algorithms, and an APTAS for the problem CCSBP when the number of different classes is bounded by a constant C." ] }
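The cited shelf-packing work builds on First Fit (Decreasing) hybrids. A simplified sketch of first-fit-decreasing for class-constrained shelf packing follows; for brevity it assumes at most one shelf per class per bin (the real problem allows several), with bin capacity `B`, shelf capacity `delta`, and divisor size `d` as illustrative parameters:

```python
def ff_class_shelf_pack(items, B=1.0, delta=0.5, d=0.05):
    """First-fit-decreasing sketch for class-constrained shelf packing.
    items: list of (size, cls). Each shelf holds items of a single class
    up to `delta`; opening an extra shelf in a non-empty bin costs a
    divisor of size `d` against the bin capacity `B`.
    Simplification: at most one shelf per class per bin."""
    bins = []  # each bin: {"used": total size incl. divisors, "shelves": {cls: load}}
    for size, cls in sorted(items, reverse=True):
        for bin_ in bins:
            shelf = bin_["shelves"].get(cls)
            # Case 1: extend this class's existing shelf.
            if shelf is not None and shelf + size <= delta and bin_["used"] + size <= B:
                bin_["shelves"][cls] = shelf + size
                bin_["used"] += size
                break
            # Case 2: open a new shelf (pay a divisor if the bin is non-empty).
            cost = size + (d if bin_["shelves"] else 0.0)
            if shelf is None and size <= delta and bin_["used"] + cost <= B:
                bin_["shelves"][cls] = size
                bin_["used"] += cost
                break
        else:
            bins.append({"used": size, "shelves": {cls: size}})
    return len(bins)
```

With the defaults, two 0.4-size items of different classes share one bin (0.4 + 0.05 divisor + 0.4 = 0.85 <= 1.0), while same-class items beyond the shelf capacity force new bins under the one-shelf simplification.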
1504.07066
2950451393
Consider the problem in which n jobs that are classified into k types are to be scheduled on m identical machines without preemption. A machine requires a proper setup taking s time units before processing jobs of a given type. The objective is to minimize the makespan of the resulting schedule. We design and analyze an approximation algorithm that runs in time polynomial in n, m and k and computes a solution with an approximation factor that can be made arbitrarily close to 3/2.
Very recently, @cite_5 studied the problem of scheduling splittable jobs on unrelated machines. Here, unrelated refers to the fact that each job may have a different processing time on each of the machines. In their model, jobs may be split and each part might be assigned to a different machine but requires a setup before being processed. For this problem and the objective of minimizing the makespan they show their algorithm to have an approximation factor of at most @math , where @math is the golden ratio.
{ "cite_N": [ "@cite_5" ], "mid": [ "2769189507" ], "abstract": [ "We study a natural generalization of the problem of minimizing makespan on unrelated machines in which jobs may be split into parts. The different parts of a job can be (simultaneously) processed on different machines, but each part requires a setup time before it can be processed. First we show that a natural adaptation of the seminal approximation algorithm for unrelated machine scheduling [11] yields a 3-approximation algorithm, equal to the integrality gap of the corresponding LP relaxation. Through a stronger LP relaxation, obtained by applying a lift-and-project procedure, we are able to improve both the integrality gap and the implied approximation factor to 1 + φ, where φ ≈ 1.618 is the golden ratio. This ratio decreases to 2 in the restricted assignment setting, matching the result for the classic version. Interestingly, we show that our problem cannot be approximated within a factor better than e e-1 ≈ 1.582 (unless P = NP). This provides some evidence that it is harder than the classic version, which is only known to be inapproximable within a factor 1.5 - e. Since our 1 + φ bound remains tight when considering the seemingly stronger machine configuration LP, we propose a new job based configuration LP that has an infinite number of variables, one for each possible way a job may be split and processed on the machines. Using convex duality we show that this infinite LP has a finite representation and can be solved in polynomial time to any accuracy, rendering it a promising relaxation for obtaining better algorithms. © 2014 Springer International Publishing Switzerland." ] }
1504.07066
2950451393
Consider the problem in which n jobs that are classified into k types are to be scheduled on m identical machines without preemption. A machine requires a proper setup taking s time units before processing jobs of a given type. The objective is to minimize the makespan of the resulting schedule. We design and analyze an approximation algorithm that runs in time polynomial in n, m and k and computes a solution with an approximation factor that can be made arbitrarily close to 3/2.
In @cite_3 , an online variant of scheduling with setup times is considered. The authors propose a @math -competitive online algorithm for minimizing the maximum flow time if jobs arrive over time at one single machine.
{ "cite_N": [ "@cite_3" ], "mid": [ "2016555341" ], "abstract": [ "We address the problem of sequential single machine scheduling of jobs with release times, where jobs are classified into types, and the machine must be properly configured to handle jobs of a given type. The objective is to minimize the maximum flow time (time from release until completion) of any job. We consider this problem under the assumptions of sequence independent set-up times and item availability with the objective of minimizing the maximum flow time. We present an online algorithm that is O(1)-competitive, that is, always gets within a constant factor of optimal. We also show that exact offline optimization of maximum flow time is NP-hard." ] }
1504.06650
1755456395
This paper describes an approach for automatic construction of dictionaries for Named Entity Recognition (NER) using large amounts of unlabeled data and a few seed examples. We use Canonical Correlation Analysis (CCA) to obtain lower dimensional embeddings (representations) for candidate phrases and classify these phrases using a small number of labeled examples. Our method achieves 16.5 and 11.3 F-1 score improvements over co-training on disease and virus NER respectively. We also show that adding candidate phrase embeddings as features in a sequence tagger gives better performance than using word embeddings.
A multi-view, semi-supervised algorithm based on co-training @cite_8 was previously introduced for collecting names of people, organizations and locations. This algorithm makes a strong independence assumption about the data and employs many heuristics to greedily optimize an objective function. This greedy approach also introduces new parameters that are often difficult to tune.
{ "cite_N": [ "@cite_8" ], "mid": [ "2048679005" ], "abstract": [ "We consider the problem of using a large unlabeled sample to boost performance of a learning algorithm when only a small set of labeled examples is available. In particular, we consider a problem setting motivated by the task of learning to classify web pages, in which the description of each example can be partitioned into two distinct views. For example, the description of a web page can be partitioned into the words occurring on that page, and the words occurring in hyperlinks that point to that page. We assume that either view of the example would be sufficient for learning if we had enough labeled data, but our goal is to use both views together to allow inexpensive unlabeled data to augment a much smaller set of labeled examples. Specifically, the presence of two distinct views of each example suggests strategies in which two learning algorithms are trained separately on each view, and then each algorithm's predictions on new unlabeled examples are used to enlarge the training set of the other. Our goal in this paper is to provide a PAC-style analysis for this setting, and, more broadly, a PAC-style framework for the general problem of learning from both labeled and unlabeled data. We also provide empirical results on real web-page data indicating that this use of unlabeled examples can lead to significant improvement of hypotheses in practice." ] }
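The co-training scheme described above — two classifiers, one per view, each growing the other's training set with its most confident predictions — can be sketched with toy nearest-centroid classifiers; the data, thresholds, and promotion count `k` here are all illustrative assumptions:

```python
import numpy as np

def fit(X, y):
    """Nearest-centroid 'classifier': one centroid per class (0 and 1)."""
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(X, cents):
    d = np.linalg.norm(X[:, None] - cents[None], axis=2)
    return d.argmin(axis=1), np.abs(d[:, 0] - d[:, 1])  # labels, confidence margins

def co_train(X1, X2, y, labeled, rounds=3, k=2):
    """Blum-Mitchell-style co-training over two views X1, X2: each round,
    each view's classifier promotes its k most confident unlabeled points
    into the shared labeled pool."""
    y, labeled = y.copy(), labeled.copy()
    for _ in range(rounds):
        for X in (X1, X2):
            cents = fit(X[labeled], y[labeled])
            pred, conf = predict(X, cents)
            conf[labeled] = -1.0              # never re-pick labeled points
            for i in np.argsort(-conf)[:k]:
                if not labeled[i]:
                    y[i] = pred[i]            # pseudo-label from this view
                    labeled[i] = True
    return y, labeled

# Toy data: two redundant views of the same two well-separated clusters.
rng = np.random.default_rng(0)
X1 = np.vstack([rng.normal(0, 0.3, (10, 2)), rng.normal(5, 0.3, (10, 2))])
X2 = np.vstack([rng.normal(0, 0.3, (10, 2)), rng.normal(5, 0.3, (10, 2))])
truth = np.array([0] * 10 + [1] * 10)
seed = np.zeros(20, dtype=bool)
seed[0] = seed[10] = True                     # one labeled example per class
y, lab = co_train(X1, X2, np.where(seed, truth, -1), seed, rounds=5, k=3)
```

Starting from a single seed per class, the pool grows round by round, which mirrors the bootstrapping behavior (and the sensitivity to the view-independence assumption) discussed in the related work.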
1504.06650
1755456395
This paper describes an approach for automatic construction of dictionaries for Named Entity Recognition (NER) using large amounts of unlabeled data and a few seed examples. We use Canonical Correlation Analysis (CCA) to obtain lower dimensional embeddings (representations) for candidate phrases and classify these phrases using a small number of labeled examples. Our method achieves 16.5 and 11.3 F-1 score improvements over co-training on disease and virus NER respectively. We also show that adding candidate phrase embeddings as features in a sequence tagger gives better performance than using word embeddings.
In other works, external structured resources like Wikipedia have been used to construct dictionaries. Even though these methods are fairly successful, they suffer from a number of drawbacks, especially in the biomedical domain. The main drawback of these approaches is that it is very difficult to accurately disambiguate ambiguous entities, especially when the entities are abbreviations @cite_9 . For example, a disease abbreviation may have a Wikipedia disambiguation page that associates it with more than 50 categories, since the abbreviation can be expanded in many different ways, each expansion belonging to a different category. Due to the rapid growth of Wikipedia, the number of entities that have disambiguation pages is growing fast, and it is increasingly difficult to retrieve the article we want. Also, it is hard to analyze these approaches from a theoretical standpoint.
{ "cite_N": [ "@cite_9" ], "mid": [ "2141099517" ], "abstract": [ "Models for many natural language tasks benefit from the flexibility to use overlapping, non-independent features. For example, the need for labeled data can be drastically reduced by taking advantage of domain knowledge in the form of word lists, part-of-speech tags, character n-grams, and capitalization patterns. While it is difficult to capture such inter-dependent features with a generative probabilistic model, conditionally-trained models, such as conditional maximum entropy models, handle them well. There has been significant work with such models for greedy sequence modeling in NLP (Ratnaparkhi, 1996; , 1998)." ] }
1504.06650
1755456395
This paper describes an approach for automatic construction of dictionaries for Named Entity Recognition (NER) using large amounts of unlabeled data and a few seed examples. We use Canonical Correlation Analysis (CCA) to obtain lower dimensional embeddings (representations) for candidate phrases and classify these phrases using a small number of labeled examples. Our method achieves 16.5 and 11.3 F-1 score improvements over co-training on disease and virus NER respectively. We also show that adding candidate phrase embeddings as features in a sequence tagger gives better performance than using word embeddings.
CCA has been used to learn word embeddings that are added as features in a sequence tagger. Such CCA-based word embeddings have been shown to outperform CW embeddings @cite_3 , Hierarchical log-linear (HLBL) embeddings @cite_5 and embeddings learned from many other techniques for NER and chunking. Unlike PCA, a widely used dimensionality reduction technique, CCA is invariant to linear transformations of the data. Our approach is motivated by a theoretical result developed in the co-training setting. We directly use the CCA embeddings to predict the label of a data point instead of using them as features in a sequence tagger. Also, we learn CCA embeddings for candidate phrases instead of all words in the vocabulary, since named entities often contain more than one word. Prior work learned a multi-class SVM on CCA word embeddings to predict the POS tag of a word type. We extend this technique to NER by learning a binary SVM on the CCA embeddings of a high-recall, low-precision list of candidate phrases to predict whether a candidate phrase is a named entity or not.
{ "cite_N": [ "@cite_5", "@cite_3" ], "mid": [ "2091812280", "2117130368" ], "abstract": [ "The supremacy of n-gram models in statistical language modelling has recently been challenged by parametric models that use distributed representations to counteract the difficulties caused by data sparsity. We propose three new probabilistic language models that define the distribution of the next word in a sequence given several preceding words by using distributed representations of those words. We show how real-valued distributed representations for words can be learned at the same time as learning a large set of stochastic binary hidden features that are used to predict the distributed representation of the next word from previous distributed representations. Adding connections from the previous states of the binary hidden features improves performance as does adding direct connections between the real-valued distributed representations. One of our models significantly outperforms the very best n-gram models.", "We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel form of semi-supervised learning for the shared tasks. We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in state-of-the-art-performance." ] }
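The CCA step that underlies these embeddings has a compact linear-algebra form: whiten the two views' covariances, then take the SVD of the whitened cross-covariance. A numpy sketch under toy assumptions (two 3-dimensional views sharing one latent signal; `reg` is an assumed small ridge term for stability):

```python
import numpy as np

def cca(X, Y, k=1, reg=1e-6):
    """Linear CCA via SVD of the whitened cross-covariance matrix
    (a textbook derivation, not the cited papers' exact pipeline)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    def inv_sqrt(C):                      # C^(-1/2) for symmetric PD C
        w, V = np.linalg.eigh(C)
        return V @ np.diag(w ** -0.5) @ V.T
    Wx, Wy = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy)
    # Projection matrices for each view, plus the canonical correlations.
    return Wx @ U[:, :k], Wy @ Vt[:k].T, s[:k]

rng = np.random.default_rng(1)
z = rng.normal(size=(500, 1))             # shared latent signal
X = np.hstack([z + 0.3 * rng.normal(size=(500, 1)), rng.normal(size=(500, 2))])
Y = np.hstack([z + 0.3 * rng.normal(size=(500, 1)), rng.normal(size=(500, 2))])
A, B, s = cca(X, Y, k=1)
```

The leading canonical correlation `s[0]` recovers the shared signal despite the pure-noise dimensions, which is the property that makes CCA embeddings invariant to linear transformations of each view.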
1504.06654
2949364118
There is rising interest in vector-space word embeddings and their use in NLP, especially given recent methods for their fast estimation at very large scale. Nearly all this work, however, assumes a single vector per word type, ignoring polysemy and thus jeopardizing the embeddings' usefulness for downstream tasks. We present an extension to the Skip-gram model that efficiently learns multiple embeddings per word type. It differs from recent related work by jointly performing word sense discrimination and embedding learning, by non-parametrically estimating the number of senses per word type, and by its efficiency and scalability. We present new state-of-the-art results in the word similarity in context task and demonstrate its scalability by training with one machine on a corpus of nearly 1 billion tokens in less than 6 hours.
Word vector representations or embeddings have been used in various NLP tasks such as named entity recognition @cite_13 @cite_17 @cite_2 , dependency parsing @cite_23 , chunking @cite_2 @cite_0 , sentiment analysis @cite_14 , paraphrase detection @cite_24 and learning representations of paragraphs and documents @cite_1 . The word clusters obtained from Brown clustering @cite_19 have similarly been used as features in named entity recognition @cite_16 @cite_7 and dependency parsing @cite_8 , among other tasks.
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_8", "@cite_1", "@cite_0", "@cite_24", "@cite_19", "@cite_23", "@cite_2", "@cite_16", "@cite_13", "@cite_17" ], "mid": [ "", "", "", "2949547296", "2129625650", "2103305545", "", "", "", "", "1755456395", "1570587036" ], "abstract": [ "", "", "", "Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite their popularity, bag-of-words features have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, \"powerful,\" \"strong\" and \"Paris\" are equally distant. In this paper, we propose Paragraph Vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. Our algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that Paragraph Vectors outperform bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks.", "Recently, there has been substantial interest in using large amounts of unlabeled data to learn word representations which can then be used as features in supervised classifiers for NLP tasks. However, most current approaches are slow to train, do not model the context of the word, and lack theoretical grounding. In this paper, we present a new learning method, Low Rank Multi-View Learning (LR-MVL) which uses a fast spectral method to estimate low dimensional context-specific word representations from unlabeled data. These representation features can then be used with any supervised learner. 
LR-MVL is extremely fast, gives guaranteed convergence to a global optimum, is theoretically elegant, and achieves state-of-the-art performance on named entity recognition (NER) and chunking problems.", "Paraphrase detection is the task of examining two sentences and determining whether they have the same meaning. In order to obtain high accuracy on this task, thorough syntactic and semantic analysis of the two statements is needed. We introduce a method for paraphrase detection based on recursive autoencoders (RAE). Our unsupervised RAEs are based on a novel unfolding objective and learn feature vectors for phrases in syntactic trees. These features are used to measure the word- and phrase-wise similarity between two sentences. Since sentences may be of arbitrary length, the resulting matrix of similarity measures is of variable size. We introduce a novel dynamic pooling layer which computes a fixed-sized representation from the variable-sized matrices. The pooled representation is then used as input to a classifier. Our method outperforms other state-of-the-art approaches on the challenging MSRP paraphrase corpus.", "", "", "", "", "This paper describes an approach for automatic construction of dictionaries for Named Entity Recognition (NER) using large amounts of unlabeled data and a few seed examples. We use Canonical Correlation Analysis (CCA) to obtain lower dimensional embeddings (representations) for candidate phrases and classify these phrases using a small number of labeled examples. Our method achieves 16.5 and 11.3 F-1 score improvement over co-training on disease and virus NER respectively. We also show that by adding candidate phrase embeddings as features in a sequence tagger gives better performance compared to using word embeddings.", "Most state-of-the-art approaches for named-entity recognition (NER) use semi supervised information in the form of word clusters and lexicons. 
Recently neural network-based language models have been explored, as they as a byproduct generate highly informative vector representations for words, known as word embeddings. In this paper we present two contributions: a new form of learning word embeddings that can leverage information from relevant lexicons to improve the representations, and the first system to use neural word embeddings to achieve state-of-the-art results on named-entity recognition in both CoNLL and Ontonotes NER. Our system achieves an F1 score of 90.90 on the test set for CoNLL 2003---significantly better than any previous system trained on public data, and matching a system employing massive private industrial query-log data." ] }
1504.07149
2208227442
We study the problem of computing all Pareto-optimal journeys in a public transit network regarding the two criteria of arrival time and number of transfers taken. We take a novel approach, focusing on trips and transfers between them, allowing fine-grained modeling. Our experiments on the metropolitan network of London show that the algorithm computes full 24-hour profiles in 70ms after a preprocessing phase of 30s, allowing fast queries in dynamic scenarios.
Some existing approaches solve these problems by modeling timetable information as a graph, using either the time-expanded or the time-dependent model. In the (simple) time-expanded model, a node is introduced for each event, such as a train departing or arriving at a station. Edges are then added to connect nodes on the same trip, as well as between nodes belonging to the same stop (corresponding to a passenger waiting for the next train). To model minimum change times, additional nodes and edges are required @cite_14 . One advantage of this model is that all edge weights are constant, which allows the use of speedup techniques developed for road networks, such as contraction. Unfortunately, it turns out that due to different network structures, these techniques do not perform as well on public transit networks @cite_15 . Also, time-expanded graphs are rather large.
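The time-expanded construction described above can be sketched in a few lines. Everything here is illustrative: the event-node encoding, the uniform minimum change time, and the helper names are assumptions for the sketch, not the data structures of the cited systems.

```python
import heapq
from collections import defaultdict

def build_time_expanded(connections, min_change=60):
    """Build a simple time-expanded graph.
    connections: list of (from_stop, dep_time, to_stop, arr_time) tuples.
    Nodes are events (stop, time, kind); all edge weights are constant seconds."""
    graph = defaultdict(list)   # node -> list of (neighbor, weight)
    events = defaultdict(list)  # stop -> departure events at that stop
    for frm, dep, to, arr in connections:
        u, v = (frm, dep, "dep"), (to, arr, "arr")
        graph[u].append((v, arr - dep))          # riding edge along the trip
        events[frm].append((dep, u))
    for stop, evs in events.items():
        evs.sort()
        for (t1, n1), (t2, n2) in zip(evs, evs[1:]):
            graph[n1].append((n2, t2 - t1))      # waiting at the stop
    # transfer edge: arrival -> first departure respecting the change time
    for frm, dep, to, arr in connections:
        v = (to, arr, "arr")
        for t, n in events.get(to, []):
            if t >= arr + min_change:
                graph[v].append((n, t - arr))
                break
    return graph

def earliest_arrival(graph, source_events, target_stop):
    """Plain Dijkstra; constant weights are what make road-network
    techniques applicable to this model. Returns an absolute arrival time."""
    dist = {}
    pq = [(0, e) for e in source_events]
    heapq.heapify(pq)
    while pq:
        d, node = heapq.heappop(pq)
        if node in dist:
            continue
        dist[node] = d
        if node[0] == target_stop and node[2] == "arr":
            return node[1]
        for nxt, w in graph[node]:
            if nxt not in dist:
                heapq.heappush(pq, (d + w, nxt))
    return None
```

Note how the graph grows with the number of events rather than stops, which is why time-expanded graphs are rather large in practice.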
{ "cite_N": [ "@cite_15", "@cite_14" ], "mid": [ "1575936991", "1964915778" ], "abstract": [ "Speeding up multi-criteria search in real timetable information systems remains a challenge in spite of impressive progress achieved in recent years for related problems in road networks. Our goal is to perform multi-criteria range queries, that is, to find all Pareto-optimal connections with respect to travel time and number of transfers within a given start time interval. This problem can be modeled as a path search problem in a time- and event-dependent graph. In this paper, we investigate two key speed-up techniques for a multi-criteria variant of 's algorithm --- arc flags and contraction --- which seem to be strong candidates for railway networks, too. We describe in detail how these two techniques have to be adapted for a multi-criteria scenario and explain why we can expect only marginal speed-ups (compared to observations in road networks) from a direct implementation. Based on these insights we extend traditional arc-flags to and introduce as a substitute for node contraction. A computational study on real queries demonstrates that these techniques combined with goal-directed search lead to a speed-up of factor 13.08 over the baseline variant for range queries for a full day.", "We consider two approaches that model timetable information in public transportation systems as shortest-path problems in weighted graphs. In the time-expanded approach, every event at a station, e.g., the departure of a train, is modeled as a node in the graph, while in the time-dependent approach the graph contains only one node per station. Both approaches have been recently considered for (a simplified version of) the earliest arrival problem, but little is known about their relative performance. Thus far, there are only theoretical arguments in favor of the time-dependent approach. In this paper, we provide the first extensive experimental comparison of the two approaches. 
Using several real-world data sets, we evaluate the performance of the basic models and of several new extensions towards realistic modeling. Furthermore, new insights on solving bicriteria optimization problems in both models are presented. The time-expanded approach turns out to be more robust for modeling more complex scenarios, whereas the time-dependent approach shows a clearly better performance." ] }
1504.07149
2208227442
We study the problem of computing all Pareto-optimal journeys in a public transit network regarding the two criteria of arrival time and number of transfers taken. We take a novel approach, focusing on trips and transfers between them, allowing fine-grained modeling. Our experiments on the metropolitan network of London show that the algorithm computes full 24-hour profiles in 70ms after a preprocessing phase of 30s, allowing fast queries in dynamic scenarios.
The time-dependent approach produces much smaller graphs in comparison. In the simple model, nodes correspond to stops. Edges no longer have constant weight, but are instead associated with (piecewise linear) travel time functions, which map departure times to travel times (or, equivalently, arrival times). The weight then depends on the time at which this function is evaluated. This model can be extended to allow for minimum change times by adding a node for each line at each stop @cite_14 . Some speedup techniques have been applied successfully to time-dependent graphs, such as ALT @cite_1 and Contraction @cite_8 , although not for multi-criteria problems. For these, several extensions of Dijkstra's algorithm exist, among them the algorithms of @cite_4 , @cite_13 , @cite_0 , and @cite_9 . However, as Dijkstra variants, all of them have to perform rather costly priority queue operations.
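Evaluating a piecewise linear travel time function, as used for time-dependent edge weights, can be sketched as follows. The breakpoint representation and the constant extrapolation outside the covered interval are assumptions of this sketch:

```python
from bisect import bisect_right

def travel_time(breakpoints, t):
    """Evaluate a piecewise linear travel-time function at departure time t.
    breakpoints: sorted list of (departure_time, travel_time) pairs;
    between breakpoints the function is interpolated linearly, and outside
    the covered interval it is taken as constant (an assumption here)."""
    times = [bp[0] for bp in breakpoints]
    i = bisect_right(times, t) - 1
    if i < 0:
        return breakpoints[0][1]      # before the first breakpoint
    if i == len(breakpoints) - 1:
        return breakpoints[-1][1]     # after the last breakpoint
    (t0, w0), (t1, w1) = breakpoints[i], breakpoints[i + 1]
    return w0 + (w1 - w0) * (t - t0) / (t1 - t0)

def arrival_time(breakpoints, t):
    """Equivalent arrival-time view of the same function."""
    return t + travel_time(breakpoints, t)
```

A time-dependent Dijkstra evaluates such a function at the tentative arrival time of the edge's tail node instead of reading a constant weight.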
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_8", "@cite_9", "@cite_1", "@cite_0", "@cite_13" ], "mid": [ "1964915778", "119169328", "1908662580", "2021134775", "", "2110861948", "2189110458" ], "abstract": [ "We consider two approaches that model timetable information in public transportation systems as shortest-path problems in weighted graphs. In the time-expanded approach, every event at a station, e.g., the departure of a train, is modeled as a node in the graph, while in the time-dependent approach the graph contains only one node per station. Both approaches have been recently considered for (a simplified version of) the earliest arrival problem, but little is known about their relative performance. Thus far, there are only theoretical arguments in favor of the time-dependent approach. In this paper, we provide the first extensive experimental comparison of the two approaches. Using several real-world data sets, we evaluate the performance of the basic models and of several new extensions towards realistic modeling. Furthermore, new insights on solving bicriteria optimization problems in both models are presented. The time-expanded approach turns out to be more robust for modeling more complex scenarios, whereas the time-dependent approach shows a clearly better performance.", "Bicriterion path problems in directed graphs are systematically studied. A series of ten problems are defined and their computational complexity examined. Algorithms are provided for some of them, including polynomial algorithms for the MAXMIN-MAXMIN problem and the MINSUM-MAXMIN problem, and a pseudo-polynomial exact algorithm as well as a fully polynomial approximation scheme for the MINSUM-MINSUM problem.", "We contribute a fast routing algorithm for timetable networks with realistic transfer times. 
In this setting, our algorithm is the first one that successfully applies precomputation based on node contraction: gradually removing nodes from the graph and adding shortcuts to preserve shortest paths. This reduces query times to 0.5 ms with preprocessing times below 4 minutes on all tested instances, even on continental networks with 30 000 stations. We achieve this by an improved contraction algorithm and by using a station graph model. Every node in our graph has a one-to-one correspondence to a station and every edge has an assigned collection of connections. Also, our graph model does not require parallel edges.", "Exploiting parallelism in route planning algorithms is a challenging algorithmic problem with obvious applications in mobile navigation and timetable information systems. In this work, we present a novel algorithm for the one-to-all profile-search problem in public transportation networks. It answers the question for all fastest connections between a given station S and any other station at any time of the day in a single query. This algorithm allows for a very natural parallelization, yielding excellent speed-ups on standard multicore servers. Our approach exploits the facts that, first, time-dependent travel-time functions in such networks can be represented as a special class of piecewise linear functions and, second, only few connections from S are useful to travel far away. Introducing the connection-setting property, we are able to extend Dijkstra's algorithm in a sound manner. Furthermore, we also accelerate station-to-station queries by preprocessing important connections within the public transportation network. As a result, we are able to compute all relevant connections between two random stations in a complete public transportation network of a big city (New York) on a standard multi-core server in real time.", "", "Abstract We consider efficient algorithms for exact time-table queries, i.e. 
algorithms that find optimal itineraries for travelers using a train system. We propose to use time-dependent networks as a model and show advantages of this approach over space-time networks as models.", "We consider the problem of computing shortest paths through a dynamic network – a network with time-varying characteristics, such as arc travel times and costs, which are known for all values of time. Many types of networks, most notably transportation networks, exhibit such predictable dynamic behavior over the course of time. Dynamic shortest path problems are currently solved in practice by algorithms which operate within a discrete-time framework. In this thesis, we introduce a new set of algorithms for computing shortest paths in continuous-time dynamic networks, and demonstrate for the first time in the literature the feasibility and the advantages of solving dynamic shortest path problems in continuous time. We assume that all time-dependent network data functions are given as piece-wise linear functions of time, a representation capable of easily modeling most common dynamic problems. Additionally, this form of representation and the solution algorithms developed in this thesis are well suited for many augmented static problems such as time-constrained minimum-cost shortest path problems and shortest path problems with time windows. We discuss the classification, formulation, and mathematical properties of all common variants of the continuous-time dynamic shortest path problem. Two classes of solution algorithms are introduced, both of which are shown to solve all variants of the problem. In problems where arc travel time functions exhibit First-In-First-Out (FIFO) behavior, we show that these algorithms have polynomial running time; although the general problem is NP-hard, we argue that the average-case running time for many common problems should be quite reasonable. 
Computational results are given which support the theoretical analysis of these algorithms, and which provide a comparison with existing discrete-time algorithms; in most cases, continuous-time approaches are shown to be much more efficient, both in running time and storage requirements, than their discretetime counterparts. Finally, in order to further reduce computation time, we introduce parallel algorithms, and hybrid continuous-discrete approximation algorithms which exploit favorable characteristics of algorithms from both domains. Thesis Supervisor: Ismail Chabini Title: Assistant Professor, Department of Civil and Environmental Engineering, Massachusetts Institute of Technology" ] }
1504.07149
2208227442
We study the problem of computing all Pareto-optimal journeys in a public transit network regarding the two criteria of arrival time and number of transfers taken. We take a novel approach, focusing on trips and transfers between them, allowing fine-grained modeling. Our experiments on the metropolitan network of London show that the algorithm computes full 24-hour profiles in 70ms after a preprocessing phase of 30s, allowing fast queries in dynamic scenarios.
Other approaches do not use graphs at all. RAPTOR (Round-bAsed Public Transit Optimized Router) @cite_16 is a dynamic program. In each round, it computes earliest arrival times for journeys with @math transfers, where @math is the current round number. It does this by scanning along lines and, at each stop, checking for the earliest trip of that line that can be reached. It outperforms Dijkstra-based approaches in practice. The Connection Scan Algorithm (CSA) @cite_6 operates on connections (trip segments of length @math ). It orders them by departure time into a single array. During queries, this array is then scanned once, which is very fast in practice due to the linear memory access pattern.
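The single linear scan at the heart of CSA can be sketched as follows for the simplest case: earliest arrival, one criterion, no minimum change times. The tuple layout is an assumption of this sketch, not the published implementation:

```python
def connection_scan(connections, source, target, dep_time):
    """Earliest-arrival Connection Scan sketch. `connections` must be
    sorted by departure time; each entry is (from_stop, dep, to_stop, arr).
    The single pass over a sorted array gives the linear memory access
    pattern that makes the approach fast in practice."""
    INF = float("inf")
    earliest = {source: dep_time}
    for frm, dep, to, arr in connections:   # one pass, in departure order
        # a connection is usable if we can be at its departure stop in time
        if earliest.get(frm, INF) <= dep and arr < earliest.get(to, INF):
            earliest[to] = arr
    return earliest.get(target, INF)
```

Extending this to profiles or multiple criteria replaces the scalar label per stop with a Pareto set, but the scan order stays the same.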
{ "cite_N": [ "@cite_16", "@cite_6" ], "mid": [ "2101810346", "115854788" ], "abstract": [ "We study the problem of computing all Pareto-optimal journeys in a dynamic public transit network for multiple criteria, such as arrival time and number of transfers. Existing algorithms consider this as a graph problem and solve it using various graph search algorithms. Unfortunately, this leads to either high query times or suboptimal solutions. We take a different approach. We introduce RAPTOR, our novel round-based public transit router. Unlike previous algorithms, it is not Dijkstra-based, looks at each route such as a bus line in the network at most once per round, and can be made even faster with simple pruning rules and parallelization using multiple cores. Because it does not rely on preprocessing, RAPTOR works in fully dynamic scenarios. Starting from arrival time and number of transfers as criteria, it can be easily extended to handle flexible departure times or arbitrary additional criteria. As practical examples we consider fare zones and reliability of transfers. When run on complex public transportation networks such as London, RAPTOR computes all Pareto-optimal journeys between two random locations an order of magnitude faster than previous approaches, which easily enables interactive applications.", "This paper studies the problem of computing optimal journeys in dynamic public transit networks. We introduce a novel algorithmic framework, called Connection Scan Algorithm (CSA), to compute journeys. It organizes data as a single array of connections, which it scans once per query. Despite its simplicity, our algorithm is very versatile. We use it to solve earliest arrival and multi-criteria profile queries. 
Moreover, we extend it to handle the minimum expected arrival time (MEAT) problem, which incorporates stochastic delays on the vehicles and asks for a set of (alternative) journeys that in its entirety minimizes the user’s expected arrival time at the destination. Our experiments on the dense metropolitan network of London show that CSA computes MEAT queries, our most complex scenario, in 272 ms on average." ] }
1504.07149
2208227442
We study the problem of computing all Pareto-optimal journeys in a public transit network regarding the two criteria of arrival time and number of transfers taken. We take a novel approach, focusing on trips and transfers between them, allowing fine-grained modeling. Our experiments on the metropolitan network of London show that the algorithm computes full 24-hour profiles in 70ms after a preprocessing phase of 30s, allowing fast queries in dynamic scenarios.
A number of speedup techniques have been developed for public transit routing. Transfer Patterns @cite_10 @cite_3 is based on the observation that for many optimal journeys, the sequence of stops where transfers occur is the same. By precomputing these transfer patterns, journeys can be computed very quickly at query time. @cite_11 applies recent advances in hub labeling to public transit networks, resulting in very fast query times. Another example is the approach of @cite_2 , which combines CSA with multilevel overlay graphs to speed up queries on large networks. The algorithm presented in this work, however, is a new base algorithm; developing further speedup techniques for it is a subject for future research.
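In spirit, a transfer-pattern query chains fast direct-connection lookups along each precomputed stop sequence. The `direct(a, b, t)` callback and the pattern table below are hypothetical stand-ins for the precomputed data structures of the cited work:

```python
def query_with_patterns(patterns, direct, source, target, dep_time):
    """Hypothetical transfer-pattern query evaluation.
    patterns: dict mapping (source, target) -> list of stop sequences.
    direct(a, b, t): returns the earliest arrival at b when leaving a at
    or after time t on a direct (transfer-free) ride, or None."""
    best = float("inf")
    for seq in patterns.get((source, target), []):
        t = dep_time
        for a, b in zip(seq, seq[1:]):
            t = direct(a, b, t)     # one direct-connection lookup per leg
            if t is None:
                break               # this pattern is infeasible today
        else:
            best = min(best, t)
    return best
```

The heavy lifting (computing the patterns and the direct-connection tables) happens in preprocessing; the query itself is only a handful of table lookups.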
{ "cite_N": [ "@cite_2", "@cite_10", "@cite_3", "@cite_11" ], "mid": [ "2259825555", "1489358540", "1985418124", "805408692" ], "abstract": [ "We study the problem of efficiently computing journeys in timetable networks. Our algorithm optimally answers profile queries, computing all journeys given a time interval. Our study demonstrates that queries can be answered optimally on large country-scale timetable networks within several milliseconds and fast delay integration is possible. Previous work either had to drop optimality or only considered comparatively small timetable networks. Our technique is a combination of the Connection Scan Algorithm and multilevel overlay graphs.", "We show how to route on very large public transportation networks (up to half a billion arcs) with average query times of a few milliseconds. We take into account many realistic features like: traffic days, walking between stations, queries between geographic locations instead of a source and a target station, and multi-criteria cost functions. Our algorithm is based on two key observations: (1) many shortest paths share the same transfer pattern, i.e., the sequence of stations where a change of vehicle occurs; (2) direct connections without change of vehicle can be looked up quickly. We precompute the respective data; in practice, this can be done in time linear in the network size, at the expense of a small fraction of non-optimal results. We have accelerated public transportation routing on Google Maps with a system based on our ideas. We report experimental results for three data sets of various kinds and sizes.", "We consider the application of route planning in large public-transportation networks (buses, trains, subways, etc). Many connections in such networks are operated at periodic time intervals. 
When a set of connections has sufficient periodicity, it becomes more efficient to store the time range and frequency (e.g., every 15 minutes from 8:00am-6:00pm) instead of storing each of the time events separately. Identifying an optimal frequency-compression is NP-hard, so we present a time- and space-efficient heuristic. We show how we can use this compression to not only save space but also query time. We particularly consider profile queries, which ask for all optimal routes with departure times in a given interval (e.g., a whole day). In particular, we design a new version of Dijkstra's algorithm that works with frequency-based labels and is suitable for profile queries. We evaluate the savings of our approach on two metropolitan and three country-wide public-transportation networks. On our largest network, we simultaneously achieve a better space consumption than all previous methods as well as profile query times that are about 5 times faster than the best previous method. We also improve Transfer Patterns, a state-of-the-art technique for fully realistic route planning in large public-transportation networks. In particular, we accelerate the expensive preprocessing by a factor of 60 compared to the original publication.", "We study the journey planning problem in public transit networks. Developing efficient preprocessing-based speedup techniques for this problem has been challenging: current approaches either require massive preprocessing effort or provide limited speedups. Leveraging recent advances in Hub Labeling, the fastest algorithm for road networks, we revisit the well-known time-expanded model for public transit. Exploiting domain-specific properties, we provide simple and efficient algorithms for the earliest arrival, profile, and multicriteria problems, with queries that are orders of magnitude faster than the state of the art." ] }
1504.07098
769399833
The November 2014 Australian State of Victoria election was the first statutory political election worldwide at State level which deployed an end-to-end verifiable electronic voting system in polling places. This was the first time blind voters have been able to cast a fully secret ballot in a verifiable way, and the first time a verifiable voting system has been used to collect remote votes in a political election. The code is open source, and the output from the election is verifiable. The system took 1121 votes from these particular groups, an increase on 2010 and with fewer polling places.
The only end-to-end verifiable political elections to date have taken place in Takoma Park, Maryland, US, where the Scantegrity system was successfully used in 2009 and 2011 in the municipal elections for mayor and city council members @cite_14 . This groundbreaking work demonstrated the feasibility of running an election in a verifiable way. However, the Scantegrity system becomes impractical for a preferential ballot with 40 candidates, and would require non-trivial changes to its design to handle a state-wide election.
{ "cite_N": [ "@cite_14" ], "mid": [ "1598516026" ], "abstract": [ "On November 3, 2009, voters in Takoma Park, Maryland, cast ballots for the mayor and city council members using the Scantegrity II voting system--the first time any end-to-end (E2E) voting system with ballot privacy has been used in a binding governmental election. This case study describes the various efforts that went into the election--including the improved design and implementation of the voting system, streamlined procedures, agreements with the city, and assessments of the experiences of voters and poll workers. The election, with 1728 voters from six wards, involved paper ballots with invisible-ink confirmation codes, instant-runoff voting with write-ins, early and absentee (mail-in) voting, dual-language ballots, provisional ballots, privacy sleeves, any-which-way scanning with parallel conventional desktop scanners, end-to-end verifiability based on optional web-based voter verification of votes cast, a full hand recount, thresholded authorities, three independent outside auditors, fully-disclosed software, and exit surveys for voters and pollworkers. Despite some glitches, the use of Scantegrity II was a success, demonstrating that E2E cryptographic voting systems can be effectively used and accepted by the general public." ] }
1504.07098
769399833
The November 2014 Australian State of Victoria election was the first statutory political election worldwide at State level which deployed an end-to-end verifiable electronic voting system in polling places. This was the first time blind voters have been able to cast a fully secret ballot in a verifiable way, and the first time a verifiable voting system has been used to collect remote votes in a political election. The code is open source, and the output from the election is verifiable. The system took 1121 votes from these particular groups, an increase on 2010 and with fewer polling places.
Other verifiable systems such as Helios @cite_16 and Wombat @cite_10 have been used for student union and other non-political elections, but scaling them up to politically binding elections and hardening them for a more demanding environment are challenging tasks, and in their present form they are also not well suited to elections as complex as those in Victoria. Other electronic elections that have been fielded (such as those recently run in Estonia, Norway, Canada, and New South Wales) do not provide end-to-end verifiability. Some of them allow voters to verify some aspect of their vote (for example, that the system holds a vote with a particular receipt number) but do not provide verifiability all the way from vote casting through to the final tally.
{ "cite_N": [ "@cite_16", "@cite_10" ], "mid": [ "40134741", "798183736" ], "abstract": [ "Voting with cryptographic auditing, sometimes called open-audit voting, has remained, for the most part, a theoretical endeavor. In spite of dozens of fascinating protocols and recent ground-breaking advances in the field, there exist only a handful of specialized implementations that few people have experienced directly. As a result, the benefits of cryptographically audited elections have remained elusive. We present Helios, the first web-based, open-audit voting system. Helios is publicly accessible today: anyone can create and run an election, and any willing observer can audit the entire process. Helios is ideal for on-line software communities, local clubs, student government, and other environments where trustworthy, secret-ballot elections are required but coercion is not a serious concern. With Helios, we hope to expose many to the power of open-audit elections.", "We report on the design and implementation of a new cryptographic voting system, designed to retain the \"look and feel\" of standard, paper-based voting used in our country Israel while enhancing security with end-to-end verifiability guaranteed by cryptographic voting. Our system is dual ballot and runs two voting processes in parallel: one is electronic while the other is paper-based and similar to the traditional process used in Israel. Consistency between the two processes is enforced by means of a new, specially-tailored paper ballot format. We examined the practicality and usability of our protocol through implementation and field testing in two elections: the first being a student council election with over 2000 voters, the second a political party's election for choosing their leader. We present our findings, some of which were extracted from a survey we conducted during the first election. Overall, voters trusted the system and found it comfortable to use." ] }
1504.06755
1831674524
Traditional eye tracking requires specialized hardware, which means collecting gaze data from many observers is expensive, tedious and slow. Therefore, existing saliency prediction datasets are order-of-magnitudes smaller than typical datasets for other vision recognition tasks. The small size of these datasets limits the potential for training data intensive algorithms, and causes overfitting in benchmark evaluation. To address this deficiency, this paper introduces a webcam-based gaze tracking system that supports large-scale, crowdsourced eye tracking deployed on Amazon Mechanical Turk (AMTurk). By a combination of careful algorithm and gaming protocol design, our system obtains eye tracking data for saliency prediction comparable to data gathered in a traditional lab setting, with relatively lower cost and less effort on the part of the researchers. Using this tool, we build a saliency dataset for a large number of natural images. We will open-source our tool and provide a web server where researchers can upload their images to get eye tracking results from AMTurk.
There exist two main categories of computer-vision-based gaze prediction methods: feature-based and appearance-based @cite_30 . Feature-based methods extract small-scale eye features such as the iris contour @cite_25 , corneal infrared reflections, and the pupil center @cite_14 from high-resolution infrared imaging, and use eyeball modeling @cite_21 or geometric methods @cite_19 to estimate the gaze direction. This approach requires a well-controlled environment and special devices such as infrared zoom-in cameras and pan-tilt units @cite_41 . In addition, the accuracy depends heavily on system calibration. Appearance-based methods, on the other hand, use the entire eye image as a high-dimensional input to a machine learning framework and train a model for gaze prediction, so images from a standard camera or webcam are sufficient.
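As a minimal illustration of the appearance-based idea, one can flatten eye images into vectors and fit a linear (ridge) regressor to 2D gaze coordinates. The synthetic data, the ridge choice, and the function names below are assumptions of this sketch, far simpler than the learning frameworks used in practice:

```python
import numpy as np

def fit_gaze_regressor(eye_images, gaze_points, alpha=1.0):
    """Appearance-based sketch: treat each (flattened) eye image as a
    high-dimensional feature vector and fit a ridge regressor mapping it
    to a 2D gaze point. Closed form: W = (X^T X + alpha I)^-1 X^T Y."""
    X = eye_images.reshape(len(eye_images), -1).astype(float)
    X = np.hstack([X, np.ones((len(X), 1))])   # bias column
    Y = np.asarray(gaze_points, dtype=float)
    W = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)
    return W

def predict_gaze(W, eye_image):
    """Predict a 2D gaze point for a single eye image."""
    x = np.append(eye_image.ravel().astype(float), 1.0)
    return x @ W
```

Because the model consumes raw pixels, no infrared illumination or geometric calibration is needed; the trade-off is that accuracy hinges on the training data covering the relevant head poses and lighting conditions.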
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_41", "@cite_21", "@cite_19", "@cite_25" ], "mid": [ "2167020116", "2169837008", "2077819131", "2158363907", "2166964954", "2155591525" ], "abstract": [ "Despite active research and significant progress in the last 30 years, eye detection and tracking remains challenging due to the individuality of eyes, occlusion, variability in scale, location, and light conditions. Data on eye location and details of eye movements have numerous applications and are essential in face detection, biometric identification, and particular human-computer interaction tasks. This paper reviews current progress and state of the art in video-based eye detection and tracking in order to identify promising techniques as well as issues to be further addressed. We present a detailed review of recent eye models and techniques for eye detection and tracking. We also survey methods for gaze estimation and compare them based on their geometric properties and reported accuracies. This review shows that, despite their apparent simplicity, the development of a general eye detection technique involves addressing many challenges, requires further theoretical developments, and is consequently of interest to many other domains problems in computer vision and beyond.", "The ubiquitous application of eye tracking is precluded by the requirement of dedicated and expensive hardware, such as infrared high definition cameras. Therefore, systems based solely on appearance (i.e. not involving active infrared illumination) are being proposed in literature. However, although these systems are able to successfully locate eyes, their accuracy is significantly lower than commercial eye tracking devices. Our aim is to perform very accurate eye center location and tracking, using a simple Web cam. 
By means of a novel relevance mechanism, the proposed method makes use of isophote properties to gain invariance to linear lighting changes (contrast and brightness), to achieve rotational invariance and to keep low computational costs. In this paper we test our approach for accurate eye location and robustness to changes in illumination and pose, using the BioIDand the Yale Face B databases, respectively. We demonstrate that our system can achieve a considerable improvement in accuracy over state of the art techniques.", "This paper presents a review of eye gaze tracking technology and focuses on recent advancements that might facilitate its use in general computer applications. Early eye gaze tracking devices were appropriate for scientific exploration in controlled environments. Although it has been thought for long that they have the potential to become important computer input devices as well, the technology still lacks important usability requirements that hinders its applicability. We present a detailed description of the pupil-corneal reflection technique due to its claimed usability advantages, and show that this method is still not quite appropriate for general interactive applications. Finally, we present several recent techniques for remote eye gaze tracking with improved usability. These new solutions simplify or eliminate the calibration procedure and allow free head motion.", "This paper presents a general theory for the remote estimation of the point-of-gaze (POG) from the coordinates of the centers of the pupil and corneal reflections. Corneal reflections are produced by light sources that illuminate the eye and the centers of the pupil and corneal reflections are estimated in video images from one or more cameras. The general theory covers the full range of possible system configurations. Using one camera and one light source, the POG can be estimated only if the head is completely stationary. 
Using one camera and multiple light sources, the POG can be estimated with free head movements, following the completion of a multiple-point calibration procedure. When multiple cameras and multiple light sources are used, the POG can be estimated following a simple one-point calibration procedure. Experimental and simulation results suggest that the main sources of gaze estimation errors are the discrepancy between the shape of real corneas and the spherical corneal shape assumed in the general theory, and the noise in the estimation of the centers of the pupil and corneal reflections. A detailed example of a system that uses the general theory to estimate the POG on a computer screen is presented.", "Eye gaze estimation systems calculate the direction of human eye gaze. Numerous accurate eye gaze estimation systems considering a user's head movement have been reported. Although the systems allow large head motion, they require multiple devices and complicate computation in order to obtain the geometrical positions of an eye, cameras, and a monitor. The light-reflection-based method proposed in this paper does not require any knowledge of their positions, so the system utilizing the proposed method is lighter and easier to use than the conventional systems. To estimate where the user looks allowing ample head movement, we utilize an invariant value (cross-ratio) of a projective space. Also, a robust feature detection using an ellipse-specific active contour is suggested in order to find features exactly. Our proposed feature detection and estimation method are simple and fast, and shows accurate results under large head motion.", "We present a novel approach, called the \"one-circle \" algorithm, for measuring the eye gaze using a monocular image that zooms in on only one eye of a person. Observing that the iris contour is a circle, we estimate the normal direction of this iris circle, considered as the eye gaze, from its elliptical image. 
From basic projective geometry, an ellipse can be back-projected into space onto two circles of different orientations. However, by using an anthropometric property of the eyeball, the correct solution can be disambiguated. This allows us to obtain a higher resolution image of the iris with a zoom-in camera and thereby achieving higher accuracies in the estimation. The robustness of our gaze determination approach was verified statistically by the extensive experiments on synthetic and real image data. The two key contributions are that we show the possibility of finding the unique eye gaze direction from a single image of one eye and that one can obtain better accuracy as a consequence of this." ] }
1504.06755
1831674524
Traditional eye tracking requires specialized hardware, which means collecting gaze data from many observers is expensive, tedious and slow. Therefore, existing saliency prediction datasets are orders of magnitude smaller than typical datasets for other vision recognition tasks. The small size of these datasets limits the potential for training data-intensive algorithms, and causes overfitting in benchmark evaluation. To address this deficiency, this paper introduces a webcam-based gaze tracking system that supports large-scale, crowdsourced eye tracking deployed on Amazon Mechanical Turk (AMTurk). By a combination of careful algorithm and gaming protocol design, our system obtains eye tracking data for saliency prediction comparable to data gathered in a traditional lab setting, with relatively lower cost and less effort on the part of the researchers. Using this tool, we build a saliency dataset for a large number of natural images. We will open-source our tool and provide a web server where researchers can upload their images to get eye tracking results from AMTurk.
Recently, Jiang et al. @cite_0 designed a mouse-contingent paradigm to record viewing behavior. They demonstrated that mouse-tracking saliency maps are similar to eye tracking saliency maps under the shuffled AUC (sAUC) metric @cite_22 . However, mouse movements are much slower than eye movements, and the gaze patterns of individual viewers in mouse and eye tracking systems are qualitatively different. It is not clear whether the "fixations" generated from mouse tracking match those obtained from standard eye tracking (the authors propose extracting fixations by discarding the half of the samples with the highest mouse-movement velocity, but this still leaves up to 100 fixations per second, far from the approximately 3 fixations per second observed in standard eye tracking data). Furthermore, since mouse movements are slower than eye movements, it is unclear whether this approach will work for video eye tracking, or whether it can be used in psychophysics experiments that require rapid responses.
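The sampling-rate mismatch above can be made concrete with a small sketch. It applies a velocity-threshold filter of the kind described (keep the slower half of cursor samples) to synthetic mouse data; the 200 Hz sampling rate, the random velocities, and the `filter_fixation_samples` helper are all illustrative assumptions, not the published method:

```python
import random

def filter_fixation_samples(samples, keep_fraction=0.5):
    """Keep the slower half of cursor samples, mimicking the
    velocity-threshold heuristic described above (a hypothetical
    sketch, not the authors' implementation)."""
    ordered = sorted(samples, key=lambda s: s["velocity"])
    return ordered[:int(len(ordered) * keep_fraction)]

random.seed(0)
rate_hz = 200      # assumed mouse sampling rate
duration_s = 10
samples = [{"t": i / rate_hz, "velocity": random.random()}
           for i in range(rate_hz * duration_s)]

kept = filter_fixation_samples(samples)
per_second = len(kept) / duration_s
print(per_second)  # 100.0 "fixation" samples/s, vs ~3 true fixations/s
```

Even after discarding the faster half of the samples, one is left with 100 candidate "fixations" per second, which is the gap the paragraph above points out.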
{ "cite_N": [ "@cite_0", "@cite_22" ], "mid": [ "1934890906", "2133589685" ], "abstract": [ "Saliency in Context (SALICON) is an ongoing effort that aims at understanding and predicting visual attention. This paper presents a new method to collect large-scale human data during natural explorations on images. While current datasets present a rich set of images and task-specific annotations such as category labels and object segments, this work focuses on recording and logging how humans shift their attention during visual exploration. The goal is to offer new possibilities to (1) complement task-specific annotations to advance the ultimate goal in visual understanding, and (2) understand visual attention and learn saliency models, all with human attentional data at a much larger scale. We designed a mouse-contingent multi-resolutional paradigm based on neurophysiological and psychophysical studies of peripheral vision, to simulate the natural viewing behavior of humans. The new paradigm allowed using a general-purpose mouse instead of an eye tracker to record viewing behaviors, thus enabling large-scale data collection. The paradigm was validated with controlled laboratory as well as large-scale online data. We report in this paper a proof-of-concept SALICON dataset of human “free-viewing” data on 10,000 images from the Microsoft COCO (MS COCO) dataset with rich contextual information. We evaluated the use of the collected data in the context of saliency prediction, and demonstrated them a good source as ground truth for the evaluation of saliency algorithms.", "We propose a definition of saliency by considering what the visual system is trying to optimize when directing attention. 
The resulting model is a Bayesian framework from which bottom-up saliency emerges naturally as the self-information of visual features, and overall saliency (incorporating top-down information with bottom-up saliency) emerges as the pointwise mutual information between the features and the target when searching for a target. An implementation of our framework demonstrates that our model’s bottom-up saliency maps perform as well as or better than existing algorithms in predicting people’s fixations in free viewing. Unlike existing saliency measures, which depend on the statistics of the particular image being viewed, our measure of saliency is derived from natural image statistics, obtained in advance from a collection of natural images. For this reason, we call our model SUN (Saliency Using Natural statistics). A measure of saliency based on natural image statistics, rather than based on a single test image, provides a straightforward explanation for many search asymmetries observed in humans; the statistics of a single test image lead to predictions that are not consistent with these asymmetries. In our model, saliency is computed locally, which is consistent with the neuroanatomy of the early visual system and results in an efficient algorithm with few free parameters." ] }
1504.06755
1831674524
Traditional eye tracking requires specialized hardware, which means collecting gaze data from many observers is expensive, tedious and slow. Therefore, existing saliency prediction datasets are orders of magnitude smaller than typical datasets for other vision recognition tasks. The small size of these datasets limits the potential for training data-intensive algorithms, and causes overfitting in benchmark evaluation. To address this deficiency, this paper introduces a webcam-based gaze tracking system that supports large-scale, crowdsourced eye tracking deployed on Amazon Mechanical Turk (AMTurk). By a combination of careful algorithm and gaming protocol design, our system obtains eye tracking data for saliency prediction comparable to data gathered in a traditional lab setting, with relatively lower cost and less effort on the part of the researchers. Using this tool, we build a saliency dataset for a large number of natural images. We will open-source our tool and provide a web server where researchers can upload their images to get eye tracking results from AMTurk.
Finally, general image labelling tools such as LabelMe @cite_32 can be used to identify salient objects and regions in images by crowdsourcing. This provides valuable data for developing saliency models, but is not a substitute for real-time eye tracking.
{ "cite_N": [ "@cite_32" ], "mid": [ "2110764733" ], "abstract": [ "We seek to build a large collection of images with ground truth labels to be used for object detection and recognition research. Such data is useful for supervised learning and quantitative evaluation. To achieve this, we developed a web-based tool that allows easy image annotation and instant sharing of such annotations. Using this annotation tool, we have collected a large dataset that spans many object categories, often containing multiple instances over a wide variety of images. We quantify the contents of the dataset and compare against existing state of the art datasets used for object recognition and detection. Also, we show how to extend the dataset to automatically enhance object labels with WordNet, discover object parts, recover a depth ordering of objects in a scene, and increase the number of labels using minimal user supervision and images from the web." ] }
1504.06755
1831674524
Traditional eye tracking requires specialized hardware, which means collecting gaze data from many observers is expensive, tedious and slow. Therefore, existing saliency prediction datasets are orders of magnitude smaller than typical datasets for other vision recognition tasks. The small size of these datasets limits the potential for training data-intensive algorithms, and causes overfitting in benchmark evaluation. To address this deficiency, this paper introduces a webcam-based gaze tracking system that supports large-scale, crowdsourced eye tracking deployed on Amazon Mechanical Turk (AMTurk). By a combination of careful algorithm and gaming protocol design, our system obtains eye tracking data for saliency prediction comparable to data gathered in a traditional lab setting, with relatively lower cost and less effort on the part of the researchers. Using this tool, we build a saliency dataset for a large number of natural images. We will open-source our tool and provide a web server where researchers can upload their images to get eye tracking results from AMTurk.
In-lab eye tracking has been used to create data sets of fixations on images and videos. The data sets differ in several key parameters @cite_34 , including: the number and style of the images or videos chosen, the number of subjects, the number of views per image, the subject's distance from the screen, the eye tracker used, and the exact task the subjects were given (free viewing @cite_9 , object search @cite_10 , person search @cite_24 , image rating @cite_2 , or a memory task @cite_13 ); but each helps us understand where people actually look and can be used to measure the performance of saliency models. The majority of eye tracking data is on static images. The most common task is free viewing, in which participants simply view images (for 2-15 seconds) or short video clips @cite_8 with no particular task in mind.
{ "cite_N": [ "@cite_13", "@cite_8", "@cite_9", "@cite_24", "@cite_2", "@cite_34", "@cite_10" ], "mid": [ "1510835000", "2071555787", "", "2153839765", "1992291380", "2063608179", "" ], "abstract": [ "For many applications in graphics, design, and human computer interaction, it is essential to understand where humans look in a scene. Where eye tracking devices are not a viable option, models of saliency can be used to predict fixation locations. Most saliency approaches are based on bottom-up computation that does not consider top-down image semantics and often does not match actual eye movements. To address this problem, we collected eye tracking data of 15 viewers on 1003 images and use this database as training and testing examples to learn a model of saliency based on low, middle and high-level image features. This large database of eye tracking data is publicly available with this paper.", "Systems based on bag-of-words models from image features collected at maxima of sparse interest point operators have been used successfully for both computer visual object and action recognition tasks. While the sparse, interest-point based approach to recognition is not inconsistent with visual processing in biological systems that operate in ‘saccade and fixate’ regimes, the methodology and emphasis in the human and the computer vision communities remains sharply distinct. Here, we make three contributions aiming to bridge this gap. First, we complement existing state-of-the art large scale dynamic computer vision annotated datasets like Hollywood-2 [1] and UCF Sports [2] with human eye movements collected under the ecological constraints of visual action and scene context recognition tasks. 
To our knowledge these are the first large human eye tracking datasets to be collected and made publicly available for video, vision.imar.ro eyetracking (497,107 frames, each viewed by 19 subjects), unique in terms of their (a) large scale and computer vision relevance, (b) dynamic, video stimuli, (c) task control, as well as free-viewing . Second, we introduce novel dynamic consistency and alignment measures , which underline the remarkable stability of patterns of visual search among subjects. Third, we leverage the significant amount of collected data in order to pursue studies and build automatic, end-to-end trainable computer vision systems based on human eye movements. Our studies not only shed light on the differences between computer vision spatio-temporal interest point image sampling strategies and the human fixations, as well as their impact for visual recognition performance, but also demonstrate that human fixations can be accurately predicted, and when used in an end-to-end automatic system, leveraging some of the advanced computer vision practice, can lead to state of the art results.", "", "How predictable are human eye movements during search in real world scenes? We recorded 14 observers’ eye movements as they performed a search task (person detection) in 912 outdoor scenes. Observers were highly consistent in the regions fixated during search, even when the target was absent from the scene. These eye movements were used to evaluate computational models of search guidance from three sources: saliency, target features, and scene context. Each of these models independently outperformed a cross-image control in predicting human fixations. Models that combined sources of guidance ultimately predicted 94 of human agreement, with the scene context component providing the most explanatory power. None of the models, however, could reach the precision and fidelity of an attentional map defined by human fixations. 
This work puts forth a benchmark for computational models of search in real world scenes. Further improvements in", "This paper presents the results of two psychophysical experiments and an associated computational analysis designed to quantify the relationship between visual salience and visual importance. In the first experiment, importance maps were collected by asking human subjects to rate the relative visual importance of each object within a database of hand-segmented images. In the second experiment, experimental saliency maps were computed from visual gaze patterns measured for these same images by using an eye-tracker and task-free viewing. By comparing the importance maps with the saliency maps, we found that the maps are related, but perhaps less than one might expect. When coupled with the segmentation information, the saliency maps were shown to be effective at predicting the main subjects. However, the saliency maps were less effective at predicting the objects of secondary importance and the unimportant objects. We also found that the vast majority of early gaze position samples (0-2000 ms) were made on the main subjects, suggesting that a possible strategy of early visual coding might be to quickly locate the main subject(s) in the scene.", "Significant recent progress has been made in developing high-quality saliency models. However, less effort has been undertaken on fair assessment of these models, over large standardized datasets and correctly addressing confounding factors. In this study, we pursue a critical and quantitative look at challenges (e.g., center-bias, map smoothing) in saliency modeling and the way they affect model accuracy. We quantitatively compare 32 state-of-the-art models (using the shuffled AUC score to discount center-bias) on 4 benchmark eye movement datasets, for prediction of human fixation locations and scan path sequence. We also account for the role of map smoothing. 
We find that, although model rankings vary, some (e.g., AWS, LG, AIM, and HouNIPS) consistently outperform other models over all datasets. Some models work well for prediction of both fixation locations and scan path sequence (e.g., Judd, GBVS). Our results show low prediction accuracy for models over emotional stimuli from the NUSEF dataset. Our last benchmark, for the first time, gauges the ability of models to decode the stimulus category from statistics of fixations, saccades, and model saliency values at fixated locations. In this test, ITTI and AIM models win over other models. Our benchmark provides a comprehensive high-level picture of the strengths and weaknesses of many popular models, and suggests future research directions in saliency modeling.", "" ] }
1504.06840
2146999606
Let D(n,r) be a random r-out regular directed multigraph on the set of vertices {1,...,n}. In this work, we establish that for every r ≥ 2, there exists η_r > 0 such that diam(D(n,r)) = (1 + η_r + o(1)) log_r n. Our techniques also allow us to bound some extremal quantities related to the stationary distribution of a simple random walk on D(n,r). In particular, we determine the asymptotic behaviour of π_max and π_min, the maximum and the minimum values of the stationary distribution. We show that with
One possibility is to study the average case. This approach can be formalized by considering regular languages defined by random DFAs. In particular, one can ask for an algorithm that, with high probability (as the number of states in the DFA goes to infinity), can learn the regular language recognized by a random DFA. There is evidence suggesting that such a relaxation might not be enough to achieve efficient learning in general: it was recently shown that generic instances of DFAs (as well as decision trees and DNF formulas) are hard to learn from statistical queries when examples can be sampled from an arbitrary distribution @cite_4 . Nevertheless, prior to this result it was shown that generic decision trees and generic DNF formulas can be efficiently learned when samples are drawn according to the uniform distribution @cite_3 @cite_11 .
{ "cite_N": [ "@cite_4", "@cite_3", "@cite_11" ], "mid": [ "2106435631", "2046054273", "2157754126" ], "abstract": [ "We show that random DNF formulas, random log-depth decision trees and random deterministic finite acceptors cannot be weakly learned with a polynomial number of statistical queries with respect to an arbitrary distribution on examples.", "We consider three natural models of random logarithmic depth decision trees over Boolean variables. We give an efficient algorithm that for each of these models learns all but an inverse polynomial fraction of such trees using only uniformly distributed random examples from 0,1 n. The learning algorithm constructs a decision tree as its hypothesis.", "We show that randomly generated c log(n)-DNF formula can be learned exactly in probabilistic polynomial time using randomly generated examples. Our notion of randomly generated is with respect to a uniform distribution. To prove this we extend the concept of well behaved c log(n)-Monotone DNF formulae to c log(n)-DNF formulae, and show that almost every DNF formula is well-behaved, and that there exists a probabilistic polynomial time algorithm that exactly learns all well behaved c log(n)-DNF formula. This is the first algorithm that properly learns (non-monotone) DNF with a polynomial number of terms from random examples alone." ] }
1504.06840
2146999606
Let D(n,r) be a random r-out regular directed multigraph on the set of vertices {1,...,n}. In this work, we establish that for every r ≥ 2, there exists η_r > 0 such that diam(D(n,r)) = (1 + η_r + o(1)) log_r n. Our techniques also allow us to bound some extremal quantities related to the stationary distribution of a simple random walk on D(n,r). In particular, we determine the asymptotic behaviour of π_max and π_min, the maximum and the minimum values of the stationary distribution. We show that with
The results in @cite_24 establish that, in order to answer the second question, it suffices to understand several specific properties of a random walk on a randomly generated DFA. When a string is sampled from the uniform distribution over @math and is labeled according to the state that it reaches, the label corresponds exactly to the final state of a simple random walk of length @math over the DFA, starting from the initial state. Thus, the analysis of the algorithm in @cite_24 relies on bounds on the diameter, stationary distribution, and mixing time of random @math -out regular digraphs. Similar ideas led us to study the problems discussed in the present paper.
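The correspondence between uniform-string labels and random-walk endpoints can be made concrete in a few lines. This is only a sketch: the state count, alphabet size, walk length, and function names below are arbitrary illustrative choices.

```python
import random

def random_dfa(n, r, rng):
    """A uniformly random r-out DFA on n states: each state-symbol
    pair (q, a) gets a uniformly random target state."""
    return [[rng.randrange(n) for _ in range(r)] for _ in range(n)]

def reached_state(delta, string, start=0):
    """State reached on `string` from `start`. For a string drawn
    uniformly from {0,...,r-1}^m, this is exactly the endpoint of a
    simple random walk of length m on the transition digraph."""
    q = start
    for a in string:
        q = delta[q][a]
    return q

rng = random.Random(1)
n, r, m = 8, 2, 20
delta = random_dfa(n, r, rng)

# A uniform string over the alphabet induces a simple random walk.
string = [rng.randrange(r) for _ in range(m)]
print(reached_state(delta, string))
```

Because the transitions are uniform and the string is uniform, statements about the label distribution translate directly into statements about the walk, which is why diameter and mixing-time bounds enter the analysis.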
{ "cite_N": [ "@cite_24" ], "mid": [ "2260314326" ], "abstract": [ "Deterministic finite automata (DFA) have long served as a fundamental computational model in the study of theoretical computer science, and the problem of learning a DFA from given input data is a classic topic in computational learning theory. In this paper we study the learnability of a random DFA and propose a computationally efficient algorithm for learning and recovering a random DFA from uniform input strings and state information in the statistical query model. A random DFA is uniformly generated: for each state-symbol pair @math , we choose a state @math with replacement uniformly and independently at random and let @math , where Q is the state space, @math is the alphabet and @math is the transition function. The given data are string-state pairs (x, q) where x is a string drawn uniformly at random and q is the state of the DFA reached on input x starting from the start state @math . A theoretical guarantee on the maximum absolute error of the algorithm in the statistical query model is presented. Extensive experiments demonstrate the efficiency and accuracy of the algorithm." ] }
1504.06840
2146999606
Let D(n,r) be a random r-out regular directed multigraph on the set of vertices {1,...,n}. In this work, we establish that for every r ≥ 2, there exists η_r > 0 such that diam(D(n,r)) = (1 + η_r + o(1)) log_r n. Our techniques also allow us to bound some extremal quantities related to the stationary distribution of a simple random walk on D(n,r). In particular, we determine the asymptotic behaviour of π_max and π_min, the maximum and the minimum values of the stationary distribution. We show that with
Several other properties of random DFAs have been studied, both in learning theory and in other contexts, using the @math model. For example, first Korshunov's group, and later Nicaud's group, studied the probability that a random DFA exhibits particular structures, mainly motivated by the analysis of sample-and-reject algorithms for the enumeration of subclasses of automata (see @cite_0 and references therein). Motivated by worst-case hardness results for learning a DFA, Angluin and co-authors used properties of random DFAs to study the problem of learning a generic DFA @cite_2 @cite_4 . The average-case complexity of DFA minimization algorithms has also received some attention recently @cite_21 @cite_8 . Finally, a series of results has led to a solution of the long-standing Černý conjecture about synchronization of finite automata in the case of random DFAs @cite_5 @cite_6 @cite_17 .
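The synchronization notions above are easy to experiment with at toy sizes. The sketch below builds the classical Černý automaton (letter 'a' rotates the states, letter 'b' merges state 0 into state 1) and finds a shortest reset word by breadth-first search over subsets of states; the exponential-time subset search and the function names are our illustrative choices.

```python
from collections import deque

def cerny(n):
    """The Cerny automaton C_n: 'a' rotates states cyclically,
    'b' maps state 0 to state 1 and fixes the rest."""
    a = {i: (i + 1) % n for i in range(n)}
    b = {i: (1 if i == 0 else i) for i in range(n)}
    return {"a": a, "b": b}

def shortest_reset_word(delta, n):
    """BFS over subsets of states (exponential in n, fine for toys)
    for a shortest synchronizing word; None if none exists."""
    start = frozenset(range(n))
    seen = {start: ""}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        if len(s) == 1:
            return seen[s]
        for letter, f in delta.items():
            t = frozenset(f[q] for q in s)
            if t not in seen:
                seen[t] = seen[s] + letter
                queue.append(t)
    return None

n = 3
w = shortest_reset_word(cerny(n), n)
print(w, len(w))  # shortest reset word has length (n-1)^2 = 4 here
```

For the Černý family the shortest reset word meets the conjectured (n-1)^2 bound exactly, which is what makes these automata the extremal examples mentioned above.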
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_21", "@cite_6", "@cite_0", "@cite_2", "@cite_5", "@cite_17" ], "mid": [ "2106435631", "35268004", "2088285637", "2295392808", "2217322393", "94998075", "142090792", "1522735212" ], "abstract": [ "We show that random DNF formulas, random log-depth decision trees and random deterministic finite acceptors cannot be weakly learned with a polynomial number of statistical queries with respect to an arbitrary distribution on examples.", "We study the number of states of the minimal automaton of the mirror of a rational language recognized by a random deterministic automaton with n states. We prove that, for any d > 0, the probability that this number of states is greater than n^d tends to 1 as n tends to infinity. As a consequence, the generic and average complexities of Brzozowski minimization algorithm are super-polynomial for the uniform distribution on deterministic automata.", "We prove that the average complexity of Moore’s state minimization algorithm is @math , where n is the number of states of the input and k the size of the alphabet. This result holds for a whole family of probabilistic models on automata, including the uniform distribution over deterministic and accessible automata, as well as uniform distributions over classical subclasses, such as complete automata, acyclic automata, automata where each state is final with probability γ∈(0,1), and many other variations.", "A synchronizing word for an automaton is a word that brings that automaton into one and the same state, regardless of the starting position. Černý conjectured in 1964 that if an n-state deterministic automaton has a synchronizing word, then it has a synchronizing word of size at most (n-1)^2. Berlinkov recently made a breakthrough in the probabilistic analysis of synchronization by proving that with high probability, an automaton has a synchronizing word.
In this article, we prove that with high probability an automaton admits a synchronizing word of length smaller than n^(1+ε), and therefore that the Černý conjecture holds with high probability.", "In this article, we consider deterministic automata under the paradigm of average case analysis of algorithms. We present the main results obtained in the literature using this point of view, from the very beginning with Korshunov’s theorem about the asymptotic number of accessible automata to the most recent advances, such as the average running time of Moore’s state minimization algorithm or the estimation of the probability that an automaton is minimal. While focusing on results, we also try to give an idea of the main tools used in this field.", "We consider the problem of learning a finite automaton M of n states with input alphabet X and output alphabet Y when a teacher has helpfully or randomly labeled the states of M using labels from a set L. The learner has access to label queries; a label query with input string w returns both the output and the label of the state reached by w. Because different automata may have the same output behavior, we consider the case in which the teacher may \"unfold\" M to an output equivalent machine M′ and label the states of M′ for the learner. We give lower and upper bounds on the number of label queries to learn the output behavior of M in these different scenarios. We also briefly consider the case of randomly labeled automata with randomly chosen transition functions.", "We prove that a random automaton with @math states and any fixed non-singleton alphabet is synchronizing with high probability. Moreover, we also prove that the convergence speed is exactly @math as conjectured by Cameron CamConj for the most interesting 2-letter alphabet case.", "Conjecture that any synchronizing automaton with n states has a reset word of length (n-1)^2 was made by Černý in 1964.
Notwithstanding the numerous attempts made by various researchers this conjecture hasn't been definitively proven yet. In this paper we study a random automaton that is sampled uniformly at random from the set of all automata with n states and m(n) letters. We show that for m(n) > 18 ln n any random automaton is synchronizing with high probability. For m(n) > n^β, β > 1/2, we also show that any random automaton with high probability satisfies the Černý conjecture." ] }
1504.06840
2146999606
Let D(n,r) be a random r-out regular directed multigraph on the set of vertices {1,...,n}. In this work, we establish that for every r ≥ 2, there exists η_r > 0 such that diam(D(n,r)) = (1 + η_r + o(1)) log_r n. Our techniques also allow us to bound some extremal quantities related to the stationary distribution of a simple random walk on D(n,r). In particular, we determine the asymptotic behaviour of π_max and π_min, the maximum and the minimum values of the stationary distribution. We show that with
There is interesting related work on distances in graphs with random edge weights. We mention in particular the paper of Janson @cite_13 on typical and extreme distances in randomly edge-weighted complete graphs, and the subsequent work by Bhamidi and van der Hofstad @cite_31 , which establishes distributional convergence for the diameter.
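Janson's trichotomy for the complete graph with Exp(1) edge weights (typical distance ≈ log n / n, the maximum from a fixed vertex ≈ 2 log n / n, the diameter ≈ 3 log n / n) can be probed numerically. The sketch below is a finite-n simulation with an arbitrary seed, so the printed ratios only roughly approach the asymptotic constants 1, 2 and 3:

```python
import heapq
import math
import random

def dijkstra(weights, src, n):
    """Single-source shortest paths on the complete graph with the
    given symmetric weight matrix."""
    dist = [math.inf] * n
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for v in range(n):
            if v != u and d + weights[u][v] < dist[v]:
                dist[v] = d + weights[u][v]
                heapq.heappush(heap, (dist[v], v))
    return dist

rng = random.Random(42)
n = 150
w = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        w[i][j] = w[j][i] = rng.expovariate(1.0)  # Exp(1) edge weights

all_dist = [dijkstra(w, i, n) for i in range(n)]
typical = all_dist[0][1]                       # fixed pair
eccentric = max(all_dist[0])                   # fixed source, max target
diameter = max(max(row) for row in all_dist)   # max over all pairs
scale = math.log(n) / n
print(typical / scale, eccentric / scale, diameter / scale)
```

The three quantities are nested by construction (pair distance ≤ eccentricity ≤ diameter), mirroring the 1 : 2 : 3 separation of the limiting constants.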
{ "cite_N": [ "@cite_31", "@cite_13" ], "mid": [ "2962725771", "2031541804" ], "abstract": [ "We consider the complete graph K_n on n vertices with exponential edge lengths of mean n. Writing C_{ij} for the weight of the smallest-weight path between vertices i, j ∈ [n], Janson [17] showed that max_{i,j ∈ [n]} C_{ij} / log n converges in probability to 3. We extend this result by showing that max_{i,j ∈ [n]} C_{ij} - 3 log n converges in distribution to some limiting random variable that can be identified via a maximization procedure on a limiting infinite random structure. Interestingly, this limiting random variable has also appeared as the weak limit of the re-centered graph diameter of the barely supercritical Erdős-Rényi random graph in [21].", "Consider the minimal weights of paths between two points in a complete graph K_n with random weights on the edges, the weights being, for instance, uniformly distributed. It is shown that, asymptotically, this is log n / n for two given points, that the maximum if one point is fixed and the other varies is 2 log n / n, and that the maximum over all pairs of points is 3 log n / n. Some further related results are given as well, including results on asymptotic distributions and moments, and on the number of edges in the minimal weight paths." ] }
1504.06840
2146999606
Let D(n,r) be a random r-out regular directed multigraph on the set of vertices {1,...,n}. In this work, we establish that for every r ≥ 2, there exists η_r > 0 such that diam(D(n,r)) = (1 + η_r + o(1)) log_r n. Our techniques also allow us to bound some extremal quantities related to the stationary distribution of a simple random walk on D(n,r). In particular, we determine the asymptotic behaviour of π_max and π_min, the maximum and the minimum values of the stationary distribution. We show that with
To conclude this section, we discuss the stationary distribution of a simple random walk in these other models. While in undirected graphs the stationary distribution (if it exists) is completely determined by the degrees of the vertices, this is not the case in directed graphs. Cooper and Frieze @cite_10 give a very precise description of the stationary distribution of @math when @math , for any constant @math , and use their result to compute the cover time of @math . It is worth noticing that for such values of @math , both the in-degrees and out-degrees are of logarithmic order and concentrated around their expected values, which turns out to be very useful for the analysis. It seems harder to find an interesting question about the stationary distribution of @math when @math , since, as in random @math -in regular digraphs, there are typically vertices with no out-edges.
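A small simulation illustrates the Cooper-Frieze description: for a random digraph D_{n,p} with np = d log n, the stationary probability of a vertex v is close to its in-degree divided by the number of edges. The parameters, the seed, and the power-iteration approach below are our illustrative choices, not the original analysis:

```python
import math
import random

def random_digraph(n, p, rng):
    """D_{n,p}: each ordered pair (u, v), u != v, is an edge
    independently with probability p."""
    out = [[] for _ in range(n)]
    indeg = [0] * n
    for u in range(n):
        for v in range(n):
            if u != v and rng.random() < p:
                out[u].append(v)
                indeg[v] += 1
    return out, indeg

rng = random.Random(7)
n, d = 300, 2.0
p = d * math.log(n) / n

out, indeg = random_digraph(n, p, rng)
# Regenerate in the (whp negligible) case of a degree-zero vertex.
while min(len(a) for a in out) == 0 or min(indeg) == 0:
    out, indeg = random_digraph(n, p, rng)

# Power iteration for the stationary distribution of the walk
# that moves to a uniformly random out-neighbour.
pi = [1.0 / n] * n
for _ in range(200):
    nxt = [0.0] * n
    for u in range(n):
        share = pi[u] / len(out[u])
        for v in out[u]:
            nxt[v] += share
    pi = nxt

m = sum(indeg)  # number of edges
pred = [indeg[v] / m for v in range(n)]  # Cooper-Frieze prediction
err = max(abs(pi[v] - pred[v]) / pred[v] for v in range(n))
print(err)
```

At finite n the agreement is only approximate, but it already shows the contrast with the undirected case, where the stationary law is an exact function of the degrees.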
{ "cite_N": [ "@cite_10" ], "mid": [ "2158978530" ], "abstract": [ "We study properties of a simple random walk on the random digraph D_{n,p} when np = d log n, d > 1. We prove that whp the value π_v of the stationary distribution at vertex v is asymptotic to deg^-(v) / m, where deg^-(v) is the in-degree of v and m = n(n-1)p is the expected number of edges of D_{n,p}. If d = d(n) → ∞ with n, the stationary distribution is asymptotically uniform whp. Using this result we prove that, for d > 1, whp the cover time of D_{n,p} is asymptotic to d log(d / (d-1)) n log n. If d = d(n) → ∞ with n, then the cover time is asymptotic to n log n." ] }
1504.06665
850463266
We present a parser for Abstract Meaning Representation (AMR). We treat English-to-AMR conversion within the framework of string-to-tree, syntax-based machine translation (SBMT). To make this work, we transform the AMR structure into a form suitable for the mechanics of SBMT and useful for modeling. We introduce an AMR-specific language model and add data and features drawn from semantic resources. Our resulting AMR parser improves upon state-of-the-art results by 7 Smatch points.
Several other recent works have used a machine translation approach to semantic parsing, but all have been applied to domain data that is much narrower and an order of magnitude smaller than that of AMR, primarily the Geoquery corpus @cite_23 . The WASP system uses hierarchical SMT techniques and does not apply semantics-specific improvements. Others use phrase-based and hierarchical SMT techniques on Geoquery. Like this work, they perform a transformation of the input semantic representation so that it is amenable to use in an existing machine translation system. However, they are unable to reach the state of the art in performance. Subsequent work directly addresses GHKM's word-to-terminal alignment requirement by extending that algorithm to handle word-to-node alignment.
{ "cite_N": [ "@cite_23" ], "mid": [ "2163274265" ], "abstract": [ "This paper presents recent work using the CHILL parser acquisition system to automate the construction of a natural-language interface for database queries. CHILL treats parser acquisition as the learning of search-control rules within a logic program representing a shift-reduce parser and uses techniques from Inductive Logic Programming to learn relational control knowledge. Starting with a general framework for constructing a suitable logical form, CHILL is able to train on a corpus comprising sentences paired with database queries and induce parsers that map subsequent sentences directly into executable queries. Experimental results with a complete database-query application for U.S. geography show that CHILL is able to learn parsers that outperform a preexisting, hand-crafted counterpart. These results demonstrate the ability of a corpus-based system to produce more than purely syntactic representations. They also provide direct evidence of the utility of an empirical approach at the level of a complete natural language application." ] }
1504.06665
850463266
We present a parser for Abstract Meaning Representation (AMR). We treat English-to-AMR conversion within the framework of string-to-tree, syntax-based machine translation (SBMT). To make this work, we transform the AMR structure into a form suitable for the mechanics of SBMT and useful for modeling. We introduce an AMR-specific language model and add data and features drawn from semantic resources. Our resulting AMR parser improves upon state-of-the-art results by 7 Smatch points.
Earlier work applied machine translation techniques to semantic parsing, using the IBM machine translation models @cite_14 on English paired with a non-structural formal language of air travel queries. In a similar vein, later work uses IBM models and the Alignment Template approach to analyze a relatively large corpus of German train-scheduling inquiries. The formal language generated is not structural and has a vocabulary of fewer than 30 types; it may thus be seen as an instance of semantic role labeling rather than semantic representation parsing.
{ "cite_N": [ "@cite_14" ], "mid": [ "2006969979" ], "abstract": [ "We describe a series of five statistical models of the translation process and give algorithms for estimating the parameters of these models given a set of pairs of sentences that are translations of one another. We define a concept of word-by-word alignment between such pairs of sentences. For any given pair of such sentences each of our models assigns a probability to each of the possible word-by-word alignments. We give an algorithm for seeking the most probable of these alignments. Although the algorithm is suboptimal, the alignment thus obtained accounts well for the word-by-word relationships in the pair of sentences. We have a great deal of data in French and English from the proceedings of the Canadian Parliament. Accordingly, we have restricted our work to these two languages; but we feel that because our algorithms have minimal linguistic content they would work well on other pairs of languages. We also feel, again because of the minimal linguistic content of our algorithms, that it is reasonable to argue that word-by-word alignments are inherent in any sufficiently large bilingual corpus." ] }