aid stringlengths 9 15 | mid stringlengths 7 10 | abstract stringlengths 78 2.56k | related_work stringlengths 92 1.77k | ref_abstract dict |
|---|---|---|---|---|
1706.05477 | 2643184551 | Traditional GANs use a deterministic generator function (typically a neural network) to transform a random noise input @math to a sample @math that the discriminator seeks to distinguish. We propose a new GAN called Bayesian Conditional Generative Adversarial Networks (BC-GANs) that use a random generator function to transform a deterministic input @math to a sample @math . Our BC-GANs extend traditional GANs to a Bayesian framework, and naturally handle unsupervised learning, supervised learning, and semi-supervised learning problems. Experiments show that the proposed BC-GANs outperform the state of the art. | Further, GANs have been successfully used for latent variable modeling and semi-supervised learning, with the intuition that the generator assists the discriminator when the number of labelled instances is small. For instance, InfoGAN @cite_9 proposed to learn a latent variable that represents clusters of data while learning to generate images via variational inference. While InfoGAN was not directly used for semi-supervised learning, its extension, categorical GAN (CatGAN) @cite_11 , which utilizes mutual information as part of its loss, was developed and achieves very good performance. Furthermore, @cite_10 developed heuristics for better training and achieved state-of-the-art results in semi-supervised learning using GANs. | {
"cite_N": [
"@cite_9",
"@cite_10",
"@cite_11"
],
"mid": [
"",
"2963373786",
"2178768799"
],
"abstract": [
"",
"We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. Using our new techniques, we achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN. The generated images are of high quality as confirmed by a visual Turing test: our model generates MNIST samples that humans cannot distinguish from real data, and CIFAR-10 samples that yield a human error rate of 21.3%. We also present ImageNet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of ImageNet classes.",
"In this paper we present a method for learning a discriminative classifier from unlabeled or partially labeled data. Our approach is based on an objective function that trades-off mutual information between observed examples and their predicted categorical class distribution, against robustness of the classifier to an adversarial generative model. The resulting algorithm can either be interpreted as a natural generalization of the generative adversarial networks (GAN) framework or as an extension of the regularized information maximization (RIM) framework to robust classification against an optimal adversary. We empirically evaluate our method - which we dub categorical generative adversarial networks (or CatGAN) - on synthetic data as well as on challenging image classification tasks, demonstrating the robustness of the learned classifiers. We further qualitatively assess the fidelity of samples generated by the adversarial generator that is learned alongside the discriminative classifier, and identify links between the CatGAN objective and discriminative clustering algorithms (such as RIM)."
]
} |
1706.05477 | 2643184551 | Traditional GANs use a deterministic generator function (typically a neural network) to transform a random noise input @math to a sample @math that the discriminator seeks to distinguish. We propose a new GAN called Bayesian Conditional Generative Adversarial Networks (BC-GANs) that use a random generator function to transform a deterministic input @math to a sample @math . Our BC-GANs extend traditional GANs to a Bayesian framework, and naturally handle unsupervised learning, supervised learning, and semi-supervised learning problems. Experiments show that the proposed BC-GANs outperform the state of the art. | On the other hand, unlike our approach, conditional GANs @cite_20 generate labelled images by adding the label vector to the input noise. Furthermore, the MMD measure @cite_17 has been used in GANs with success @cite_7 . However, previously the kernel function had to be explicitly defined, whereas in our approach we learn it as part of the discriminator. | {
"cite_N": [
"@cite_7",
"@cite_20",
"@cite_17"
],
"mid": [
"2950863313",
"",
"2212660284"
],
"abstract": [
"We propose a method to optimize the representation and distinguishability of samples from two probability distributions, by maximizing the estimated power of a statistical test based on the maximum mean discrepancy (MMD). This optimized MMD is applied to the setting of unsupervised learning by generative adversarial networks (GAN), in which a model attempts to generate realistic samples, and a discriminator attempts to tell these apart from data samples. In this context, the MMD may be used in two roles: first, as a discriminator, either directly on the samples, or on features of the samples. Second, the MMD can be used to evaluate the performance of a generative model, by testing the model's samples against a reference data set. In the latter role, the optimized MMD is particularly helpful, as it gives an interpretable indication of how the model and data distributions differ, even in cases where individual model samples are not easily distinguished either by eye or by classifier.",
"",
"We propose a framework for analyzing and comparing distributions, which we use to construct statistical tests to determine if two samples are drawn from different distributions. Our test statistic is the largest difference in expectations over functions in the unit ball of a reproducing kernel Hilbert space (RKHS), and is called the maximum mean discrepancy (MMD). We present two distribution free tests based on large deviation bounds for the MMD, and a third test based on the asymptotic distribution of this statistic. The MMD can be computed in quadratic time, although efficient linear time approximations are available. Our statistic is an instance of an integral probability metric, and various classical metrics on distributions are obtained when alternative function classes are used in place of an RKHS. We apply our two-sample tests to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where they perform strongly. Excellent performance is also obtained when comparing distributions over graphs, for which these are the first such tests."
]
} |
1706.05477 | 2643184551 | Traditional GANs use a deterministic generator function (typically a neural network) to transform a random noise input @math to a sample @math that the discriminator seeks to distinguish. We propose a new GAN called Bayesian Conditional Generative Adversarial Networks (BC-GANs) that use a random generator function to transform a deterministic input @math to a sample @math . Our BC-GANs extend traditional GANs to a Bayesian framework, and naturally handle unsupervised learning, supervised learning, and semi-supervised learning problems. Experiments show that the proposed BC-GANs outperform the state of the art. | The use of Bayesian methods in GANs has generally been limited to combining variational autoencoders @cite_6 with GANs @cite_12 @cite_19 . We, on the other hand, take a Bayesian view of both the discriminator and the generator, using the dropout approximation for variational inference. This allows us to develop a simpler and more intuitive model. | {
"cite_N": [
"@cite_19",
"@cite_12",
"@cite_6"
],
"mid": [
"2411541852",
"2202109488",
""
],
"abstract": [
"We introduce the adversarially learned inference (ALI) model, which jointly learns a generation network and an inference network using an adversarial process. The generation network maps samples from stochastic latent variables to the data space while the inference network maps training examples in data space to the space of latent variables. An adversarial game is cast between these two networks and a discriminative network is trained to distinguish between joint latent data-space samples from the generative network and joint samples from the inference network. We illustrate the ability of the model to learn mutually coherent inference and generation networks through the inspections of model samples and reconstructions and confirm the usefulness of the learned representations by obtaining a performance competitive with state-of-the-art on the semi-supervised SVHN and CIFAR10 tasks.",
"We present an autoencoder that leverages learned representations to better measure similarities in data space. By combining a variational autoencoder with a generative adversarial network we can use learned feature representations in the GAN discriminator as basis for the VAE reconstruction objective. Thereby, we replace element-wise errors with feature-wise errors to better capture the data distribution while offering invariance towards e.g. translation. We apply our method to images of faces and show that it outperforms VAEs with element-wise similarity measures in terms of visual fidelity. Moreover, we show that the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic.",
""
]
} |
1706.05683 | 2674251584 | We propose Sparse Neural Network architectures that are based on random or structured bipartite graph topologies. Sparse architectures provide compression of the models learned and speed-ups of computations; they can also surpass their unstructured or fully connected counterparts. As we show, even more compact topologies of the so-called SNN (Sparse Neural Network) can be achieved with the use of structured graphs of connections between consecutive layers of neurons. In this paper, we investigate how the accuracy and training speed of the models depend on the topology and sparsity of the neural network. Previous approaches using sparsity are all based on fully connected neural network models and create sparsity during the training phase; instead, we explicitly define a sparse architecture of connections before the training. Building compact neural network models is coherent with empirical observations showing that there is much redundancy in learned neural network models. We show experimentally that the accuracy of the models learned with neural networks depends on expander-like properties of the underlying topologies such as the spectral gap and algebraic connectivity rather than the density of the graphs of connections. | It was recently observed @cite_17 , @cite_14 , @cite_12 that imposing a specific structure on the linear embedding part of the neural network computations leads to significant speed-ups and memory-usage reductions with only minimal loss of model quality. In this approach, the graph of connections between consecutive neural network layers is still dense and the reduction comes from recycling a compact vector of learned weights across the entire matrix. In @cite_18 the authors use a compact multilinear format, called the Tensor Train format, to represent the dense weight matrix of the fully-connected layers. An even more basic technique relies on using a low-rank representation of the weight matrices. 
It was empirically shown that restricting the matrix of neuron connections to be of low rank does not much affect the quality of the model @cite_20 , @cite_3 , @cite_6 , @cite_16 . This setting is more restrictive than the one presented in @cite_17 , since the low-rank matrices considered in these papers have, in particular, low displacement rank. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_3",
"@cite_6",
"@cite_16",
"@cite_12",
"@cite_20",
"@cite_17"
],
"mid": [
"",
"2949560654",
"2294543795",
"",
"2181101938",
"1704891789",
"",
"2949964376"
],
"abstract": [
"",
"We explore the redundancy of parameters in deep neural networks by replacing the conventional linear projection in fully-connected layers with the circulant projection. The circulant structure substantially reduces memory footprint and enables the use of the Fast Fourier Transform to speed up the computation. Considering a fully-connected neural network layer with d input nodes, and d output nodes, this method improves the time complexity from O(d^2) to O(dlogd) and space complexity from O(d^2) to O(d). The space savings are particularly important for modern deep convolutional neural network architectures, where fully-connected layers typically contain more than 90% of the network parameters. We further show that the gradient computation and optimization of the circulant projections can be performed very efficiently. Our experiments on three standard datasets show that the proposed approach achieves this significant gain in storage and efficiency with minimal increase in error rate compared to neural networks with unstructured projections.",
"Recently proposed deep neural network (DNN) obtains significant accuracy improvements in many large vocabulary continuous speech recognition (LVCSR) tasks. However, DNN requires much more parameters than traditional systems, which brings huge cost during online evaluation, and also limits the application of DNN in a lot of scenarios. In this paper we present our new effort on DNN aiming at reducing the model size while keeping the accuracy improvements. We apply singular value decomposition (SVD) on the weight matrices in DNN, and then restructure the model based on the inherent sparseness of the original matrices. After restructuring we can reduce the DNN model size significantly with negligible accuracy loss. We also fine-tune the restructured model using the regular back-propagation method to get the accuracy back when reducing the DNN model size heavily. The proposed method has been evaluated on two LVCSR tasks, with context-dependent DNN hidden Markov model (CD-DNN-HMM). Experimental results show that the proposed approach dramatically reduces the DNN model size by more than 80% without losing any accuracy. Index Terms: deep neural network, singular value decomposition, model restructuring",
"",
"Large CNNs have delivered impressive performance in various computer vision applications. But the storage and computation requirements make it problematic for deploying these models on mobile devices. Recently, tensor decompositions have been used for speeding up CNNs. In this paper, we further develop the tensor decomposition technique. We propose a new algorithm for computing the low-rank tensor decomposition for removing the redundancy in the convolution kernels. The algorithm finds the exact global optimizer of the decomposition and is more effective than iterative methods. Based on the decomposition, we further propose a new method for training low-rank constrained CNNs from scratch. Interestingly, while achieving a significant speedup, sometimes the low-rank constrained CNNs delivers significantly better performance than their non-constrained counterparts. On the CIFAR-10 dataset, the proposed low-rank NIN model achieves @math accuracy (without data augmentation), which also improves upon state-of-the-art result. We evaluated the proposed method on CIFAR-10 and ILSVRC12 datasets for a variety of modern CNNs, including AlexNet, NIN, VGG and GoogleNet with success. For example, the forward time of VGG-16 is reduced by half while the performance is still comparable. Empirical success suggests that low-rank tensor decompositions can be a very useful tool for speeding up large CNNs.",
"This paper presents results on the memory capacity of a generalized feedback neural network using a circulant matrix. Children are capable of learning soon after birth which indicates that the neural networks of the brain have prior learnt capacity that is a consequence of the regular structures in the brain's organization. Motivated by this idea, we consider the capacity of circulant matrices as weight matrices in a feedback network.",
"",
"We consider the task of building compact deep learning pipelines suitable for deployment on storage and power constrained mobile devices. We propose a unified framework to learn a broad family of structured parameter matrices that are characterized by the notion of low displacement rank. Our structured transforms admit fast function and gradient evaluation, and span a rich range of parameter sharing configurations whose statistical modeling capacity can be explicitly tuned along a continuum from structured to unstructured. Experimental results show that these transforms can significantly accelerate inference and forward backward passes during training, and offer superior accuracy-compactness-speed tradeoffs in comparison to a number of existing techniques. In keyword spotting applications in mobile speech recognition, our methods are much more effective than standard linear low-rank bottleneck layers and nearly retain the performance of state of the art models, while providing more than 3.5-fold compression."
]
} |
1706.05683 | 2674251584 | We propose Sparse Neural Network architectures that are based on random or structured bipartite graph topologies. Sparse architectures provide compression of the models learned and speed-ups of computations; they can also surpass their unstructured or fully connected counterparts. As we show, even more compact topologies of the so-called SNN (Sparse Neural Network) can be achieved with the use of structured graphs of connections between consecutive layers of neurons. In this paper, we investigate how the accuracy and training speed of the models depend on the topology and sparsity of the neural network. Previous approaches using sparsity are all based on fully connected neural network models and create sparsity during the training phase; instead, we explicitly define a sparse architecture of connections before the training. Building compact neural network models is coherent with empirical observations showing that there is much redundancy in learned neural network models. We show experimentally that the accuracy of the models learned with neural networks depends on expander-like properties of the underlying topologies such as the spectral gap and algebraic connectivity rather than the density of the graphs of connections. | Other sets of techniques, such as methods exploiting hashing-based architectures, incorporate specific hashing tricks based on easily computable hash functions to group weight connections into hash buckets. This procedure significantly reduces the size of the entire model. Some other methods involve applying certain clustering and quantization techniques to fully-connected layers @cite_10 , which, as reported, have several advantages over existing matrix factorization methods. | {
"cite_N": [
"@cite_10"
],
"mid": [
"1724438581"
],
"abstract": [
"Deep convolutional neural networks (CNN) has become the most promising method for object recognition, repeatedly demonstrating record breaking results for image classification and object detection in recent years. However, a very deep CNN generally involves many layers with millions of parameters, making the storage of the network model to be extremely large. This prohibits the usage of deep CNNs on resource limited hardware, especially cell phones or other embedded devices. In this paper, we tackle this model storage issue by investigating information theoretical vector quantization methods for compressing the parameters of CNNs. In particular, we have found in terms of compressing the most storage demanding dense connected layers, vector quantization methods have a clear gain over existing matrix factorization methods. Simply applying k-means clustering to the weights or conducting product quantization can lead to a very good balance between model size and recognition accuracy. For the 1000-category classification task in the ImageNet challenge, we are able to achieve 16-24 times compression of the network with only 1% loss of classification accuracy using the state-of-the-art CNN."
]
} |
1706.05431 | 2702325171 | In a distributed storage system, recovering from multiple failures is a critical and frequent task that is crucial for maintaining the system's reliability and fault-tolerance. In this work, we focus on the problem of repairing multiple failures in a centralized way, which can be desirable in many data storage configurations, and we show that a significant repair traffic reduction is possible. First, the fundamental tradeoff between the repair bandwidth and the storage size for functional repair is established. Using a graph-theoretic formulation, the optimal tradeoff is identified as the solution to an integer optimization problem, for which a closed-form expression is derived. Expressions of the extreme points, namely the minimum storage multi-node repair (MSMR) and minimum bandwidth multi-node repair (MBMR) points, are obtained. Second, we describe a general framework for converting single erasure minimum storage regenerating codes to MSMR codes. The repair strategy for @math failures is similar to that for single failure, however certain extra requirements need to be satisfied by the repairing functions for single failure. For illustration, the framework is applied to product-matrix codes and interference alignment codes. Furthermore, we prove that the functional MBMR point is not achievable for linear exact repair codes. We also show that exact-repair minimum bandwidth cooperative repair (MBCR) codes achieve an interior point, that lies near the MBMR point, when @math , @math being the minimum number of nodes needed to reconstruct the entire data. 
Finally, for @math and @math , where @math is the number of helper nodes during repair, we show that the functional repair tradeoff is not achievable under exact repair, except for maybe a small portion near the MSMR point, which parallels the results for single erasure repair by | Cooperative regenerating codes (also known as coordinated regenerating codes) have been studied to address the repair of multiple erasures @cite_12 @cite_23 in a distributed manner. In this framework, each replacement node downloads information from @math helpers in the first stage. Then, the replacement nodes exchange information among themselves before regenerating the lost nodes. Cooperative regenerating codes that achieve the extreme points on the cooperative tradeoff have been developed; namely, minimum storage cooperative regenerating (MSCR) codes @cite_29 @cite_23 @cite_11 and minimum bandwidth cooperative regenerating (MBCR) codes @cite_48 . | {
"cite_N": [
"@cite_48",
"@cite_29",
"@cite_23",
"@cite_12",
"@cite_11"
],
"mid": [
"2063987864",
"2052678402",
"2166589659",
"2134191435",
"2949307695"
],
"abstract": [
"We give an explicit construction of exact cooperative regenerating codes at the MBCR (minimum bandwidth cooperative regeneration) point. Before the paper, the only known explicit MBCR codes are given with parameters n = d + r and d = k, while our construction applies to all possible values of n, k, d, r. The code has a brief expression in the polynomial form and the data reconstruction is accomplished by bivariate polynomial interpolation. It is a scalar code and operates over a finite field of size q ≥ n. Besides, we establish several subspace properties for linear exact MBCR codes. Based on these properties we prove that linear exact MBCR codes cannot achieve repair-by-transfer.",
"",
"One of the design objectives in distributed storage system is the minimization of the data traffic during the repair of failed storage nodes. By repairing multiple failures simultaneously and cooperatively rather than successively and independently, further reduction of repair traffic is made possible. A closed-form expression of the optimal tradeoff between the repair traffic and the amount of storage in each node for cooperative repair is given. We show that the points on the tradeoff curve can be achieved by linear cooperative regenerating codes, with an explicit bound on the required finite-field size. The proof relies on a max-flow-min-cut-type theorem from combinatorial optimization for submodular flows. Two families of explicit constructions are given.",
"Erasure correcting codes are widely used to ensure data persistence in distributed storage systems. This paper addresses the simultaneous repair of multiple failure in such codes. We go beyond existing work (i.e., regenerating codes by ) and propose coordinated regenerating codes allowing devices to coordinate during simultaneous repairs thus further reducing the costs. We define optimal coordinated regenerating codes outperforming existing codes for simultaneous repairs with respect to both storage and repair costs. We prove that deliberately delaying repairs does not bring additional gains (i.e., regenerating codes are optimal as long as each failure can be repaired before a second one occurs). Finally, we propose adaptive regenerating codes that self-adapt to the system state and prove they are optimal.",
"Two widely studied models of multiple-node repair in distributed storage systems are centralized repair and cooperative repair. The centralized model assumes that all the failed nodes are recreated in one location, while the cooperative one stipulates that the failed nodes may communicate but are distinct, and the amount of data exchanged between them is included in the repair bandwidth. We show that cooperative model is stronger than the centralized one, in the sense that MDS codes with optimal repair bandwidth under the former model have the same property under the latter one. In this paper we present explicit constructions of MDS codes with optimal cooperative repair for all possible parameters. More precisely, given any @math such that @math we construct explicit @math MDS codes over the field @math of size @math that can optimally repair any @math erasures from any @math helper nodes. The repair scheme of our codes involves two rounds of communication. In the first round, each failed node downloads information from the helper nodes, and in the second one, each failed node downloads additional information from the other failed nodes. This shows that two rounds of communication in cooperative repair always suffice, and that the proposed codes achieve the optimal repair bandwidth using the smallest possible number of rounds."
]
} |
1706.05431 | 2702325171 | In a distributed storage system, recovering from multiple failures is a critical and frequent task that is crucial for maintaining the system's reliability and fault-tolerance. In this work, we focus on the problem of repairing multiple failures in a centralized way, which can be desirable in many data storage configurations, and we show that a significant repair traffic reduction is possible. First, the fundamental tradeoff between the repair bandwidth and the storage size for functional repair is established. Using a graph-theoretic formulation, the optimal tradeoff is identified as the solution to an integer optimization problem, for which a closed-form expression is derived. Expressions of the extreme points, namely the minimum storage multi-node repair (MSMR) and minimum bandwidth multi-node repair (MBMR) points, are obtained. Second, we describe a general framework for converting single erasure minimum storage regenerating codes to MSMR codes. The repair strategy for @math failures is similar to that for single failure, however certain extra requirements need to be satisfied by the repairing functions for single failure. For illustration, the framework is applied to product-matrix codes and interference alignment codes. Furthermore, we prove that the functional MBMR point is not achievable for linear exact repair codes. We also show that exact-repair minimum bandwidth cooperative repair (MBCR) codes achieve an interior point, that lies near the MBMR point, when @math , @math being the minimum number of nodes needed to reconstruct the entire data. 
Finally, for @math and @math , where @math is the number of helper nodes during repair, we show that the functional repair tradeoff is not achievable under exact repair, except for maybe a small portion near the MSMR point, which parallels the results for single erasure repair by | The number of nodes involved in the repair of a single node, known as locality, is another important measure of node repair efficiency @cite_41 . Various bounds and code constructions have been proposed in the literature @cite_41 @cite_19 . Recent works have investigated the problem of multiple node repair under locality constraints @cite_21 @cite_6 . | {
"cite_N": [
"@cite_41",
"@cite_19",
"@cite_21",
"@cite_6"
],
"mid": [
"1993830711",
"1997044393",
"2951572681",
"2249486791"
],
"abstract": [
"Consider a linear [n,k,d]q code C. We say that the ith coordinate of C has locality r , if the value at this coordinate can be recovered from accessing some other r coordinates of C. Data storage applications require codes with small redundancy, low locality for information coordinates, large distance, and low locality for parity coordinates. In this paper, we carry out an in-depth study of the relations between these parameters. We establish a tight bound for the redundancy n-k in terms of the message length, the distance, and the locality of information coordinates. We refer to codes attaining the bound as optimal. We prove some structure theorems about optimal codes, which are particularly strong for small distances. This gives a fairly complete picture of the tradeoffs between codewords length, worst case distance, and locality of information symbols. We then consider the locality of parity check symbols and erasure correction beyond worst case distance for optimal codes. Using our structure theorem, we obtain a tight bound for the locality of parity symbols possible in such codes for a broad class of parameter settings. We prove that there is a tradeoff between having good locality and the ability to correct erasures beyond the minimum distance.",
"A code over a finite alphabet is called locally recoverable (LRC) if every symbol in the encoding is a function of a small number (at most r ) other symbols. We present a family of LRC codes that attain the maximum possible value of the distance for a given locality parameter and code cardinality. The codewords are obtained as evaluations of specially constructed polynomials over a finite field, and reduce to a Reed-Solomon code if the locality parameter r is set to be equal to the code dimension. The size of the code alphabet for most parameters is only slightly greater than the code length. The recovery procedure is performed by polynomial interpolation over r points. We also construct codes with several disjoint recovering sets for every symbol. This construction enables the system to conduct several independent and simultaneous recovery processes of a specific symbol by accessing different parts of the codeword. This property enables high availability of frequently accessed data (“hot data”).",
"In this paper, we study codes with locality that can recover from two erasures via a sequence of two local, parity-check computations. By a local parity-check computation, we mean recovery via a single parity-check equation associated to small Hamming weight. Earlier approaches considered recovery in parallel; the sequential approach allows us to potentially construct codes with improved minimum distance. These codes, which we refer to as locally 2-reconstructible codes, are a natural generalization along one direction, of codes with all-symbol locality introduced by Gopalan , in which recovery from a single erasure is considered. By studying the Generalized Hamming Weights of the dual code, we derive upper bounds on the minimum distance of locally 2-reconstructible codes and provide constructions for a family of codes based on Turán graphs, that are optimal with respect to this bound. The minimum distance bound derived here is universal in the sense that no code which permits all-symbol local recovery from @math erasures can have larger minimum distance regardless of approach adopted. Our approach also leads to a new bound on the minimum distance of codes with all-symbol locality for the single-erasure case.",
"We consider the problem of designing [n; k] linear codes for distributed storage systems (DSS) that satisfy the (r, t)-Local Repair Property, where any t'(<=t) simultaneously failed nodes can be locally repaired, each with locality r. The parameters n, k, r, t are positive integers such that r<k<n and t <= n-k. We consider the functional repair model and the sequential approach for repairing multiple failed nodes. By functional repair, we mean that the packet stored in each newcomer is not necessarily an exact copy of the lost data but a symbol that keeps the (r, t)-local repair property. By the sequential approach, we mean that the t' newcomers are ordered in a proper sequence such that each newcomer can be repaired from the live nodes and the newcomers that are ordered before it. Such codes, which we refer to as (n, k, r, t)-functional locally repairable codes (FLRC), are the most general class of LRCs and contain several subclasses of LRCs reported in the literature. In this paper, we aim to optimize the storage overhead (equivalently, the code rate) of FLRCs. We derive a lower bound on the code length n given t ∈ {2, 3} and any possible k, r. For t=2, our bound generalizes the rate bound proved in [14]. For t=3, our bound improves the rate bound proved in [10]. We also give some constructions of exact LRCs for t ∈ {2, 3} whose length n achieves the bound of (n, k, r, t)-FLRC, which proves the tightness of our bounds and also implies that there is no gap between the optimal code length of functional LRCs and exact LRCs for certain sets of parameters. Moreover, our constructions are over the binary field, hence are of interest in practice."
]
} |
1706.05476 | 2666247594 | Graph similarity search is a common and fundamental operation in graph databases. One of the most popular graph similarity measures is the Graph Edit Distance (GED) mainly because of its broad applicability and high interpretability. Despite its prevalence, exact GED computation is proved to be NP-hard, which could result in unsatisfactory computational efficiency on large graphs. However, exactly accurate search results are usually unnecessary for real-world applications especially when the responsiveness is far more important than the accuracy. Thus, in this paper, we propose a novel probabilistic approach to efficiently estimate GED, which is further leveraged for the graph similarity search. Specifically, we first take branches as elementary structures in graphs, and introduce a novel graph similarity measure by comparing branches between graphs, i.e., Graph Branch Distance (GBD), which can be efficiently calculated in polynomial time. Then, we formulate the relationship between GED and GBD by considering branch variations as the result ascribed to graph edit operations, and model this process by probabilistic approaches. By applying our model, the GED between any two graphs can be efficiently estimated by their GBD, and these estimations are finally utilized in the graph similarity search. Extensive experiments show that our approach has better accuracy, efficiency and scalability than other comparable methods in the graph similarity search over real and synthetic data sets. | The state-of-the-art method for exact GED computation is the @math algorithm @cite_19 and its variant @cite_18 , whose time costs are exponential with respect to graph sizes @cite_1 . To address this NP-hardness problem, most graph similarity search methods are based on the filter-and-verification framework @cite_1 @cite_8 @cite_16 @cite_2 , which first filters out undesirable graphs from the graph databases and then only verifies the remaining candidates. 
A common filtering approach is to use the distance between sub-structures of two graphs as a lower bound of their GED, which includes tree-based @cite_8 , path-based @cite_7 , branch-based @cite_2 and partition-based @cite_32 approaches. In this paper, we adopt the branch structure @cite_2 to build our model. However, we re-define the distance between branches, since the original definition @cite_2 of branch distances requires @math time for computation while ours only requires @math time. In addition, a recent paper @cite_21 proposes a multi-layer indexing approach to accelerate the filtering process based on their proposed partition-based filtering method. | {
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_8",
"@cite_21",
"@cite_1",
"@cite_32",
"@cite_19",
"@cite_2",
"@cite_16"
],
"mid": [
"",
"2056899820",
"2164041127",
"",
"2032338144",
"984076445",
"1969483458",
"1989135657",
""
],
"abstract": [
"",
"Graphs are widely used to model complicated data semantics in many applications in bioinformatics, chemistry, social networks, pattern recognition, etc. A recent trend is to tolerate noise arising from various sources such as erroneous data entries and find similarity matches. In this paper, we study graph similarity queries with edit distance constraints. Inspired by the @math -gram idea for string similarity problems, our solution extracts paths from graphs as features for indexing. We establish a lower bound of common features to generate candidates. Efficient algorithms are proposed to handle three types of graph similarity queries by exploiting both matching and mismatching features as well as degree information to improve the filtering and verification on candidates. We demonstrate the proposed algorithms significantly outperform existing approaches with extensive experiments on real and synthetic datasets.",
"The graph structure is a very important means to model schemaless data with complicated structures, such as protein-protein interaction networks, chemical compounds, knowledge query inferring systems, and road networks. This paper focuses on the index structure for similarity search on a set of large sparse graphs and proposes an efficient indexing mechanism by introducing the Q-Gram idea. By decomposing graphs to small grams (organized by κ-Adjacent Tree patterns) and pairing-up on those κ-Adjacent Tree patterns, the lower bound estimation of their edit distance can be calculated for candidate filtering. Furthermore, we have developed a series of techniques for inverted index construction and online query processing. By building the candidate set for the query graph before the exact edit distance calculation, the number of graphs need to proceed into exact matching can be greatly reduced. Extensive experiments on real and synthetic data sets have been conducted to show the effectiveness and efficiency of the proposed indexing mechanism.",
"",
"Graph data have become ubiquitous and manipulating them based on similarity is essential for many applications. Graph edit distance is one of the most widely accepted measures to determine similarities between graphs and has extensive applications in the fields of pattern recognition, computer vision etc. Unfortunately, the problem of graph edit distance computation is NP-Hard in general. Accordingly, in this paper we introduce three novel methods to compute the upper and lower bounds for the edit distance between two graphs in polynomial time. Applying these methods, two algorithms AppFull and AppSub are introduced to perform different kinds of graph search on graph databases. Comprehensive experimental studies are conducted on both real and synthetic datasets to examine various aspects of the methods for bounding graph edit distance. Result shows that these methods achieve good scalability in terms of both the number of graphs and the size of graphs. The effectiveness of these algorithms also confirms the usefulness of using our bounds in filtering and searching of graphs.",
"Graphs are widely used to model complex data in many applications, such as bioinformatics, chemistry, social networks, pattern recognition, etc. A fundamental and critical query primitive is to efficiently search similar structures in a large collection of graphs. This paper studies the graph similarity queries with edit distance constraints. Existing solutions to the problem utilize fixed-size overlapping substructures to generate candidates, and thus become susceptible to large vertex degrees or large distance thresholds. In this paper, we present a partition-based approach to tackle the problem. By dividing data graphs into variable-size non-overlapping partitions, the edit distance constraint is converted to a graph containment constraint for candidate generation. We develop efficient query processing algorithms based on the new paradigm. A candidate pruning technique and an improved graph edit distance algorithm are also developed to further boost the performance. In addition, a cost-aware graph partitioning technique is devised to optimize the index. Extensive experiments demonstrate our approach significantly outperforms existing approaches.",
"Although the problem of determining the minimum cost path through a graph arises naturally in a number of interesting applications, there has been no underlying theory to guide the development of efficient search procedures. Moreover, there is no adequate conceptual framework within which the various ad hoc search strategies proposed to date can be compared. This paper describes how heuristic information from the problem domain can be incorporated into a formal mathematical theory of graph searching and demonstrates an optimality property of a class of search strategies.",
"Due to many real applications of graph databases, it has become increasingly important to retrieve graphs g (in graph database D) that approximately match with query graph q, rather than exact subgraph matches. In this paper, we study the problem of graph similarity search, which retrieves graphs that are similar to a given query graph under the constraint of the minimum edit distance. Specifically, we derive a lower bound, branch-based bound, which can greatly reduce the search space of the graph similarity search. We also propose a tree index structure, namely b-tree, to facilitate effective pruning and efficient query processing. Extensive experiments confirm that our proposed approach outperforms the existing approaches by orders of magnitude, in terms of both pruning power and query response time.",
""
]
} |
1706.05749 | 2712379154 | This paper introduces Dex, a reinforcement learning environment toolkit specialized for training and evaluation of continual learning methods as well as general reinforcement learning problems. We also present the novel continual learning method of incremental learning, where a challenging environment is solved using optimal weight initialization learned from first solving a similar easier environment. We show that incremental learning can produce vastly superior results than standard methods by providing a strong baseline method across ten Dex environments. We finally develop a saliency method for qualitative analysis of reinforcement learning, which shows the impact incremental learning has on network attention. | Transfer learning @cite_19 is the method of utilizing data from one domain to enhance learning of another domain. While sharing significant similarities to continual learning, transfer learning is applicable across all machine learning domains, rather than being confined to reinforcement learning. For example, it has had significant use with using networks trained on ImageNet @cite_8 to accelerate or enhance learning and classification accuracy by finetuning in a variety of vision tasks @cite_0 . | {
"cite_N": [
"@cite_0",
"@cite_19",
"@cite_8"
],
"mid": [
"2161381512",
"2165698076",
"2108598243"
],
"abstract": [
"Convolutional neural networks (CNN) have recently shown outstanding image classification performance in the large-scale visual recognition challenge (ILSVRC2012). The success of CNNs is attributed to their ability to learn rich mid-level image representations as opposed to hand-designed low-level features used in other image classification methods. Learning CNNs, however, amounts to estimating millions of parameters and requires a very large number of annotated image samples. This property currently prevents application of CNNs to problems with limited training data. In this work we show how image representations learned with CNNs on large-scale annotated datasets can be efficiently transferred to other visual recognition tasks with limited amount of training data. We design a method to reuse layers trained on the ImageNet dataset to compute mid-level image representation for images in the PASCAL VOC dataset. We show that despite differences in image statistics and tasks in the two datasets, the transferred representation leads to significantly improved results for object and action classification, outperforming the current state of the art on Pascal VOC 2007 and 2012 datasets. We also show promising results for object and action localization.",
"A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond."
]
} |
1706.05394 | 2682189153 | We examine the role of memorization in deep learning, drawing connections to capacity, generalization, and adversarial robustness. While deep networks are capable of memorizing noise data, our results suggest that they tend to prioritize learning simple patterns first. In our experiments, we expose qualitative differences in gradient-based optimization of deep neural networks (DNNs) on noise vs. real data. We also demonstrate that for appropriately tuned explicit regularization (e.g., dropout) we can degrade DNN training performance on noise datasets without compromising generalization on real data. Our analysis suggests that the notions of effective capacity which are dataset independent are unlikely to explain the generalization performance of deep networks when trained with gradient based methods because training data itself plays an important role in determining the degree of memorization. | Another direction we investigate is the relationship between regularization and memorization. argue that explicit and implicit regularizers (including SGD) might not explain or limit shattering of random data. In this work we show that regularizers (especially dropout) do control the at which DNNs memorize. This is interesting since dropout is also known to prevent catastrophic forgetting @cite_3 and thus in general it seems to help DNNs retain patterns. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2113839990"
],
"abstract": [
"Catastrophic forgetting is a problem faced by many machine learning models and algorithms. When trained on one task, then trained on a second task, many machine learning models \"forget\" how to perform the first task. This is widely believed to be a serious problem for neural networks. Here, we investigate the extent to which the catastrophic forgetting problem occurs for modern neural networks, comparing both established and recent gradient-based training algorithms and activation functions. We also examine the effect of the relationship between the first task and the second task on catastrophic forgetting. We find that it is always best to train using the dropout algorithm--the dropout algorithm is consistently best at adapting to the new task, remembering the old task, and has the best tradeoff curve between these two extremes. We find that different tasks and relationships between tasks result in very different rankings of activation function performance. This suggests the choice of activation function should always be cross-validated."
]
} |
1706.05412 | 2630548186 | This paper studies the problem of enumerating all maximal collinear subsets of size at least three in a given set of @math points. An algorithm for this problem, besides solving degeneracy testing and the exact fitting problem, can also help with other problems, such as point line cover and general position subset selection. The classic algorithm of Edelsbrunner and Guibas can find these subsets in @math time in the dual plane. We present an alternative algorithm that, although asymptotically slower than their algorithm in the worst case, is simpler to implement and more amenable to parallelization. If the input points are decomposed into @math convex polygons, our algorithm has time complexity @math and space complexity @math . Our algorithm can be parallelized on the CREW PRAM with time complexity @math using @math processors. | The well-known algorithm of Edelsbrunner and Guibas @cite_18 can sweep arrangements without constructing them, thus reducing its space complexity to @math . The time complexity matches the lower-bound for the time complexity of the problem of testing degeneracy and is thus the best possible. The algorithm, however, is rather difficult to implement, especially since input points are not in general position (see for instance @cite_8 ). Note that reporting line intersections in the dual plane is not enough for finding collinear points. These intersections should also be ordered so that all lines that intersect can be reported at once efficiently, making intersection reporting algorithms (for instance @cite_16 ) inefficient for this problem. | {
"cite_N": [
"@cite_18",
"@cite_16",
"@cite_8"
],
"mid": [
"2038730428",
"",
"1514117420"
],
"abstract": [
"Sweeping a collection of figures in the Euclidean plane with a straight line is one of the novel algorithmic paradigms that have emerged in the field of computational geometry. In this paper we demonstrate the advantages of sweeping with a topological line that is not necessarily straight. We show how an arrangement of n lines in the plane can be swept over in O(n^2) time and O(n) space by such a line. In the process each element, i.e., vertex, edge, or region, is visited once in a consistent ordering. Our technique makes use of novel data structures which exhibit interesting amortized complexity behavior; the result is an algorithm that improves upon all its predecessors either in the space or the time bounds, as well as being eminently practical. Numerous applications of the technique to problems in computational geometry are given—many through the use of duality transforms. Examples include solving visibility problems, detecting degeneracies in configurations, computing the extremal shadows of convex polytopes, and others. Even though our basic technique solves a planar problem, its applications include several problems in higher dimensions.",
"",
"Topological sweep can contribute to efficient implementations of various algorithms for data analysis. Real data, however, has degeneracies. The modification of the topological sweep algorithm presented here handles degenerate cases such as parallel or multiply concurrent lines without requiring numerical perturbations to achieve general position. Our method maintains the O(n2) and O(n) time and space complexities of the original algorithm, and is robust and easy to implement. We present experimental results."
]
} |
1706.05412 | 2630548186 | This paper studies the problem of enumerating all maximal collinear subsets of size at least three in a given set of @math points. An algorithm for this problem, besides solving degeneracy testing and the exact fitting problem, can also help with other problems, such as point line cover and general position subset selection. The classic algorithm of Edelsbrunner and Guibas can find these subsets in @math time in the dual plane. We present an alternative algorithm that, although asymptotically slower than their algorithm in the worst case, is simpler to implement and more amenable to parallelization. If the input points are decomposed into @math convex polygons, our algorithm has time complexity @math and space complexity @math . Our algorithm can be parallelized on the CREW PRAM with time complexity @math using @math processors. | Because topological sweeping is inherently sequential and thus unsuitable for parallel execution, parallel algorithms for sweeping or constructing arrangements have appeared in the literature. @cite_10 presented a CREW PRAM algorithm for constructing arrangements in @math time with @math processors and Goodrich @cite_17 presented an algorithm (also on the CREW PRAM) with the same goal with time complexity @math with @math processors. Since these algorithms construct the arrangement, they have space complexity @math , which is more than the @math space complexity of our parallel algorithm. Our algorithm also beats Goodrich's in time complexity, besides being easier to code since it uses only simple data structures. | {
"cite_N": [
"@cite_10",
"@cite_17"
],
"mid": [
"2142561263",
"2007933697"
],
"abstract": [
"We give the first efficient parallel algorithms for solving the arrangement problem. We give a deterministic algorithm for the CREW PRAM which runs in nearly optimal bounds of O(log n log* n) time and n^2/log n processors. We generalize this to obtain an O(log n log* n)-time algorithm using n^d/log n processors for solving the problem in d dimensions. We also give a randomized algorithm for the EREW PRAM that constructs an arrangement of n lines on-line, in which each insertion is done in optimal O(log n) time using n/log n processors. Our algorithms develop new parallel data structures and new methods for traversing an arrangement.",
"We give two optimal parallel algorithms for constructing the arrangement of n lines in the plane. The first method is quite simple and runs in O(log^2 n) time using O(n^2) work, and the second method, which is more sophisticated, runs in O(log n) time using O(n^2) work. This second result solves a well-known open problem in parallel computational geometry, and involves the use of a new algorithmic technique, the construction of an ε-pseudocutting. Our results immediately imply that the arrangement of n hyperplanes in R^d in O(log n) time using O(n^d) work, for fixed d, can be optimally constructed. Our algorithms are for the CREW PRAM."
]
} |
1706.05412 | 2630548186 | This paper studies the problem of enumerating all maximal collinear subsets of size at least three in a given set of @math points. An algorithm for this problem, besides solving degeneracy testing and the exact fitting problem, can also help with other problems, such as point line cover and general position subset selection. The classic algorithm of Edelsbrunner and Guibas can find these subsets in @math time in the dual plane. We present an alternative algorithm that, although asymptotically slower than their algorithm in the worst case, is simpler to implement and more amenable to parallelization. If the input points are decomposed into @math convex polygons, our algorithm has time complexity @math and space complexity @math . Our algorithm can be parallelized on the CREW PRAM with time complexity @math using @math processors. | Other methods have been presented to partition the arrangement into smaller regions with fewer points and use the sequential algorithm for sweeping these regions in parallel (e.g. @cite_1 and @cite_14 ). They are usually based on the assumption that the points are in general position. Such partitions yield poor performance, when this assumption is violated as it is the case in the problem of finding collinear points. Similar parallel sweeping methods have been presented for specific applications (e.g., rectangle intersection @cite_9 and hidden surface elimination @cite_20 ), in which there are fewer events than the number of intersections in the worst case (for finding collinear subsets @math ). More recently, presented a plane sweep algorithm that divides the plane into vertical slabs perpendicular to the sweep line, which has time complexity @math with @math processors for @math intersections @cite_0 , whose cost is more than our algorithm. | {
"cite_N": [
"@cite_14",
"@cite_9",
"@cite_1",
"@cite_0",
"@cite_20"
],
"mid": [
"2022464277",
"1997248273",
"2010584162",
"2555802243",
"2121315472"
],
"abstract": [
"We propose the first optimal parallel algorithm computing arrangements of hyperplanes in E^d (d ≥ 2). The algorithm is randomized and computes the arrangement of n hyperplanes within expected logarithmic time on a CRCW-PRAM with O(n^d/log n) processors.",
"Parallel algorithms used in Very Large Scale Integration physical design bring significant challenges for their efficient and effective design and implementation. The rectangle intersection problem is a subset of the plane sweep problem, a topic of computational geometry and a component in design rule checking, parasitic resistance-capacitance extraction, and mask processing flows. A variant of a plane sweep algorithm that is embarrassingly parallel and therefore easily scalable on multicore machines and clusters, while exceeding the best-known parallel plane sweep algorithms on real-world tests, is presented in this letter.",
"In this paper we consider the following problem: Given a set L of n lines in the plane, partition the plane into O(r^2) triangles so that no triangle meets more than O(n/r) lines of L. We present a deterministic algorithm for this problem with O(nr log n log^ω r) running time, where ω is a constant < 3.33.",
"The plane sweep algorithm is a foundational algorithm for many geometric and spatial computations; thus, improvements in the algorithm have far reaching effects in many applications. In this paper, we examine the performance of the serial plane sweep algorithm, and introduce a parallelization technique for the algorithm that is suitable to multi-core computers. The parallelization technique is described in detail and shown to be correct. Finally, experiments are performed using multiple data sets on computers with varying numbers of processing cores. We show that our algorithm achieves significant speedups over the serial plane sweep algorithm using a wide range of input parameters; thus, our algorithm achieves good performance without the need to tune the input parameters for specific input cases.",
"In this paper we give efficient parallel algorithms for a number of problems from computational geometry by using versions of parallel plane sweeping. We illustrate our approach with a number of applications, which include: General hidden-surface elimination (even if the overlap relation contains cycles). CSG boundary evaluation. Computing the contour of a collection of rectangles. Hidden-surface elimination for rectangles. There are interesting subproblems that we solve as a part of each parallelization. For example, we give an optimal parallel method for building a data structure for line-stabbing queries (which, incidentally, improves the sequential complexity of this problem). Our algorithms are for the CREW PRAM, unless otherwise noted."
]
} |
1706.05170 | 2624894981 | This paper proposes the idea of using a generative adversarial network (GAN) to assist a novice user in designing real-world shapes with a simple interface. The user edits a voxel grid with a painting interface (like Minecraft). Yet, at any time, he she can execute a SNAP command, which projects the current voxel grid onto a latent shape manifold with a learned projection operator and then generates a similar, but more realistic, shape using a learned generator network. Then the user can edit the resulting shape and snap again until he she is satisfied with the result. The main advantage of this approach is that the projection and generation operators assist novice users to create 3D models characteristic of a background distribution of object shapes, but without having to specify all the details. The core new research idea is to use a GAN to support this application. 3D GANs have previously been used for shape generation, interpolation, and completion, but never for interactive modeling. The new challenge for this application is to learn a projection operator that takes an arbitrary 3D voxel model and produces a latent vector on the shape manifold from which a similar and realistic shape can be generated. We develop algorithms for this and other steps of the SNAP processing pipeline and integrate them into a simple modeling tool. Experiments with these algorithms and tool suggest that GANs provide a promising approach to computer-assisted interactive modeling. | * 1mm Interactive 3D Modeling for Novices: Most interactive modeling tools are designed for experts (e.g., Maya @cite_30 ) and are too difficult to use for casual, novice users. To address this issue, several researchers have proposed simpler interaction techniques for specifying 3D shapes, including ones based on sketching curves @cite_2 , making gestures @cite_28 , or sculpting volumes @cite_7 . 
However, these interfaces are limited to creating simple objects, since every shape feature of the output must be specified explicitly by the user. | {
"cite_N": [
"@cite_30",
"@cite_28",
"@cite_7",
"@cite_2"
],
"mid": [
"",
"1978235861",
"2164261480",
"2026687385"
],
"abstract": [
"",
"Sketching communicates ideas rapidly through approximate visual images with low overhead (pencil and paper), no need for precision or specialized knowledge, and ease of low-level correction and revision. In contrast, most 3D computer modeling systems are good at generating arbitrary views of precise 3D models and support high-level editing and revision. The SKETCH application described in this paper attempts to combine the advantages of each in order to create an environment for rapidly conceptualizing and editing approximate 3D scenes. To achieve this, SKETCH uses simple non-photorealistic rendering and a purely gestural interface based on simplified line drawings of primitives that allows all operations to be specified within the 3D world.",
"We present a new interactive modeling technique based on the notion of sculpting a solid material. A sculpting tool is controlled by a 3D input device and the material is represented by voxel data; the tool acts by modifying the values in the voxel array, much as a \"paint\" program's \"paintbrush\" modifies bitmap values. The voxel data is converted to a polygonal surface using a \"marching-cubes\" algorithm; since the modifications to the voxel data are local, we accelerate this computation by an incremental algorithm and accelerate the display by using a special data structure for determining which polygons must be redrawn in a particular screen region. We provide a variety of tools: one that cuts away material, one that adds material, a \"sandpaper\" tool, a \"heat gun,\" etc. The technique provides an intuitive direct interaction, as if the user were working with clay or wax. The models created are free-form and may have complex topology; however, they are not precise, so the technique is appropriate for modeling a boulder or a tooth but not for modeling a crankshaft.",
"We present a sketching interface for quickly and easily designing freeform models such as stuffed animals and other rotund objects. The user draws several 2D freeform strokes interactively on the screen and the system automatically constructs plausible 3D polygonal surfaces. Our system supports several modeling operations, including the operation to construct a 3D polygonal surface from a 2D silhouette drawn by the user: it inflates the region surrounded by the silhouette making wide areas fat, and narrow areas thin. Teddy, our prototype system, is implemented as a Java™ program, and the mesh construction is done in real-time on a standard PC. Our informal user study showed that a first-time user typically masters the operations within 10 minutes, and can construct interesting 3D models within minutes."
]
} |
1706.05170 | 2624894981 | This paper proposes the idea of using a generative adversarial network (GAN) to assist a novice user in designing real-world shapes with a simple interface. The user edits a voxel grid with a painting interface (like Minecraft). Yet, at any time, he/she can execute a SNAP command, which projects the current voxel grid onto a latent shape manifold with a learned projection operator and then generates a similar, but more realistic, shape using a learned generator network. Then the user can edit the resulting shape and snap again until he/she is satisfied with the result. The main advantage of this approach is that the projection and generation operators assist novice users to create 3D models characteristic of a background distribution of object shapes, but without having to specify all the details. The core new research idea is to use a GAN to support this application. 3D GANs have previously been used for shape generation, interpolation, and completion, but never for interactive modeling. The new challenge for this application is to learn a projection operator that takes an arbitrary 3D voxel model and produces a latent vector on the shape manifold from which a similar and realistic shape can be generated. We develop algorithms for this and other steps of the SNAP processing pipeline and integrate them into a simple modeling tool. Experiments with these algorithms and tool suggest that GANs provide a promising approach to computer-assisted interactive modeling. | 3D Synthesis Guided by Analysis: To address this issue, researchers have studied ways to utilize analysis of 3D structures to assist interactive modeling. In early work, @cite_20 proposed an "analyze-and-edit" approach to shape manipulation, where detected structures captured by wires are used to specify and constrain output models. 
More recent work has utilized analysis of part-based templates @cite_29 @cite_23 , stability @cite_13 , functionality @cite_26 , ergonomics @cite_17 , and other analyses to guide interactive manipulation. Most recently, @cite_32 used a CNN trained on un-deformed/deformed shape pairs to synthesize a voxel flow for shape deformation. However, each of these previous works is targeted to a specific type of analysis, a specific type of edit, and/or considers only one aspect of the design problem. We aim to generalize this approach by using a learned shape space to guide editing operations. | {
"cite_N": [
"@cite_26",
"@cite_29",
"@cite_32",
"@cite_23",
"@cite_13",
"@cite_20",
"@cite_17"
],
"mid": [
"2296395970",
"2066090933",
"",
"2025473316",
"1913254489",
"2052658453",
"1698529903"
],
"abstract": [
"We propose a novel approach to fabricating complex 3D shapes via physical deformation of simpler shapes. Our focus is on objects composed of a set of planar beams and joints, where the joints are thin parts of the object which temporarily become living hinges when heated, close to a fixed angle defined by the local shape, and then become rigid when cooled. We call this class of objects Meltables. We present a novel algorithm that computes an optimal joint sequence which approximates a 3D spline curve while satisfying fabrication constraints. This technique is used in an interactive Meltables design tool. We demonstrate a variety of Meltables, fabricated with both 3D-printing and standard PVC piping.",
"3D morphable models are low-dimensional parameterizations of 3D object classes which provide a powerful means of associating 3D geometry to 2D images. However, morphable models are currently generated from 3D scans, so for general object classes such as animals they are economically and practically infeasible. We show that, given a small amount of user interaction (little more than that required to build a conventional morphable model), there is enough information in a collection of 2D pictures of certain object classes to generate a full 3D morphable model, even in the absence of surface texture. The key restriction is that the object class should not be strongly articulated, and that a very rough rigid model should be provided as an initial estimate of the “mean shape.” The model representation is a linear combination of subdivision surfaces, which we fit to image silhouettes and any identifiable key points using a novel combined continuous-discrete optimization strategy. Results are demonstrated on several natural object classes, and show that models of rather high quality can be obtained from this limited information.",
"",
"Occlusion contours are a natural feature to draw when tracing an object in an image or when drawing an object. We investigate the development of 3D models from multi-stroke contour drawings with the help of a 3D template model that serves as a shape prior. The template is aligned and then deformed by our method to match the drawn contours. At the heart of this process is the need to provide good correspondences between points on the contours and vertices on the model, which we pose as an optimisation problem using a hidden Markov model. An alternating correspond-and-deform process then progressively deforms the 3D template to match the image contours. We demonstrate the method on a wide range of examples.",
"Recent advances in modeling tools enable non-expert users to synthesize novel shapes by assembling parts extracted from model databases. A major challenge for these tools is to provide users with relevant parts, which is especially difficult for large repositories with significant geometric variations. In this paper we analyze unorganized collections of 3D models to facilitate explorative shape synthesis by providing high-level feedback of possible synthesizable shapes. By jointly analyzing arrangements and shapes of parts across models, we hierarchically embed the models into low-dimensional spaces. The user can then use the parameterization to explore the existing models by clicking in different areas or by selecting groups to zoom on specific shape clusters. More importantly, any point in the embedded space can be lifted to an arrangement of parts to provide an abstracted view of possible shape variations. The abstraction can further be realized by appropriately deforming parts from neighboring models to produce synthesized geometry. Our experiments show that users can rapidly generate plausible and diverse shapes using our system, which also performs favorably with respect to previous modeling tools.",
"Man-made objects are largely dominated by a few typical features that carry special characteristics and engineered meanings. State-of-the-art deformation tools fall short at preserving such characteristic features and global structure. We introduce iWIRES, a novel approach based on the argument that man-made models can be distilled using a few special 1D wires and their mutual relations. We hypothesize that maintaining the properties of such a small number of wires allows preserving the defining characteristics of the entire object. We introduce an analyze-and-edit approach, where prior to editing, we perform a light-weight analysis of the input shape to extract a descriptive set of wires. Analyzing the individual and mutual properties of the wires, and augmenting them with geometric attributes makes them intelligent and ready to be manipulated. Editing the object by modifying the intelligent wires leads to a powerful editing framework that retains the original design intent and object characteristics. We show numerous results of manipulation of man-made shapes using our editing technique.",
"This paper examines the following question: given a collection of man-made shapes, e.g., chairs, can we effectively explore and rank the shapes with respect to a given human body—in terms of how well a candidate shape fits the specified human body? Answering this question requires identifying which shapes are more suitable for a prescribed body, and how to alter the input geometry to better fit the shapes to a given human body. The problem links physical proportions of the human body and its interaction with object geometry, which is often expressed as ergonomics guidelines. We present an interactive system that allows users to explore shapes using different avatar poses, while, at the same time providing interactive previews of how to alter the shapes to fit the user-specified body and pose. We achieve this by first constructing a fuzzy shape-to-body map from the ergonomic guidelines to multi-contacts geometric constraints; and then, proposing a novel contact-preserving deformation paradigm to realize a reshaping to adapt the input shape. We evaluate our method on collections of models from different categories and validate the results through a user study."
]
} |
1706.05170 | 2624894981 | This paper proposes the idea of using a generative adversarial network (GAN) to assist a novice user in designing real-world shapes with a simple interface. The user edits a voxel grid with a painting interface (like Minecraft). Yet, at any time, he/she can execute a SNAP command, which projects the current voxel grid onto a latent shape manifold with a learned projection operator and then generates a similar, but more realistic, shape using a learned generator network. Then the user can edit the resulting shape and snap again until he/she is satisfied with the result. The main advantage of this approach is that the projection and generation operators assist novice users to create 3D models characteristic of a background distribution of object shapes, but without having to specify all the details. The core new research idea is to use a GAN to support this application. 3D GANs have previously been used for shape generation, interpolation, and completion, but never for interactive modeling. The new challenge for this application is to learn a projection operator that takes an arbitrary 3D voxel model and produces a latent vector on the shape manifold from which a similar and realistic shape can be generated. We develop algorithms for this and other steps of the SNAP processing pipeline and integrate them into a simple modeling tool. Experiments with these algorithms and tool suggest that GANs provide a promising approach to computer-assisted interactive modeling. | Learned 3D Shape Spaces: Early work on learning shape spaces for geometric modeling focused on smooth deformations between surfaces. For example, @cite_11 , @cite_21 , and others describe methods for interpolation between surfaces with consistent parameterizations. More recently, probabilistic models of part hierarchies @cite_10 @cite_8 and grammars of shape features @cite_27 have been learned from collections and used to assist synthesis of new shapes. 
However, these methods rely on specific hand-selected models and thus are not general to all types of shapes. | {
"cite_N": [
"@cite_8",
"@cite_21",
"@cite_27",
"@cite_10",
"@cite_11"
],
"mid": [
"1915142102",
"2099011563",
"1995382327",
"2092773680",
"2154084374"
],
"abstract": [
"We present a method for joint analysis and synthesis of geometrically diverse 3D shape families. Our method first learns part-based templates such that an optimal set of fuzzy point and part correspondences is computed between the shapes of an input collection based on a probabilistic deformation model. In contrast to previous template-based approaches, the geometry and deformation parameters of our part-based templates are learned from scratch. Based on the estimated shape correspondence, our method also learns a probabilistic generative model that hierarchically captures statistical relationships of corresponding surface point positions and parts as well as their existence in the input shapes. A deep learning procedure is used to capture these hierarchical relationships. The resulting generative model is used to produce control point arrangements that drive shape synthesis by combining and deforming parts from the input collection. The generative model also yields compact shape descriptors that are used to perform fine-grained classification. Finally, it can be also coupled with the probabilistic deformation model to further improve shape correspondence. We provide qualitative and quantitative evaluations of our method for shape correspondence, segmentation, fine-grained classification and synthesis. Our experiments demonstrate superior correspondence and segmentation results than previous state-of-the-art approaches.",
"We develop a novel method for fitting high-resolution template meshes to detailed human body range scans with sparse 3D markers. We formulate an optimization problem in which the degrees of freedom are an affine transformation at each template vertex. The objective function is a weighted combination of three measures: proximity of transformed vertices to the range data, similarity between neighboring transformations, and proximity of sparse markers at corresponding locations on the template and target surface. We solve for the transformations with a non-linear optimizer, run at two resolutions to speed convergence. We demonstrate reconstruction and consistent parameterization of 250 human body models. With this parameterized set, we explore a variety of applications for human body modeling, including: morphing, texture transfer, statistical analysis of shape, model fitting from sparse markers, feature analysis to modify multiple correlated parameters (such as the weight and height of an individual), and transfer of surface detail and animation controls from a template to fitted models.",
"A shape grammar defines a procedural shape space containing a variety of models of the same class, e.g. buildings, trees, furniture, airplanes, bikes, etc. We present a framework that enables a user to interactively design a probability density function (pdf) over such a shape space and to sample models according to the designed pdf. First, we propose a user interface that enables a user to quickly provide preference scores for selected shapes and suggest sampling strategies to decide which models to present to the user to evaluate. Second, we propose a novel kernel function to encode the similarity between two procedural models. Third, we propose a framework to interpolate user preference scores by combining multiple techniques: function factorization, Gaussian process regression, autorelevance detection, and l1 regularization. Fourth, we modify the original grammars to generate models with a pdf proportional to the user preference scores. Finally, we provide evaluations of our user interface and framework parameters and a comparison to other exploratory modeling techniques using modeling tasks in five example shape spaces: furniture, low-rise buildings, skyscrapers, airplanes, and vegetation.",
"We present an approach to synthesizing shapes from complex domains, by identifying new plausible combinations of components from existing shapes. Our primary contribution is a new generative model of component-based shape structure. The model represents probabilistic relationships between properties of shape components, and relates them to learned underlying causes of structural variability within the domain. These causes are treated as latent variables, leading to a compact representation that can be effectively learned without supervision from a set of compatibly segmented shapes. We evaluate the model on a number of shape datasets with complex structural variability and demonstrate its application to amplification of shape databases and to interactive shape synthesis.",
"We present a novel framework to treat shapes in the setting of Riemannian geometry. Shapes -- triangular meshes or more generally straight line graphs in Euclidean space -- are treated as points in a shape space. We introduce useful Riemannian metrics in this space to aid the user in design and modeling tasks, especially to explore the space of (approximately) isometric deformations of a given shape. Much of the work relies on an efficient algorithm to compute geodesics in shape spaces; to this end, we present a multi-resolution framework to solve the interpolation problem -- which amounts to solving a boundary value problem -- as well as the extrapolation problem -- an initial value problem -- in shape space. Based on these two operations, several classical concepts like parallel transport and the exponential map can be used in shape space to solve various geometric modeling and geometry processing tasks. Applications include shape morphing, shape deformation, deformation transfer, and intuitive shape exploration."
]
} |
1706.05170 | 2624894981 | This paper proposes the idea of using a generative adversarial network (GAN) to assist a novice user in designing real-world shapes with a simple interface. The user edits a voxel grid with a painting interface (like Minecraft). Yet, at any time, he/she can execute a SNAP command, which projects the current voxel grid onto a latent shape manifold with a learned projection operator and then generates a similar, but more realistic, shape using a learned generator network. Then the user can edit the resulting shape and snap again until he/she is satisfied with the result. The main advantage of this approach is that the projection and generation operators assist novice users to create 3D models characteristic of a background distribution of object shapes, but without having to specify all the details. The core new research idea is to use a GAN to support this application. 3D GANs have previously been used for shape generation, interpolation, and completion, but never for interactive modeling. The new challenge for this application is to learn a projection operator that takes an arbitrary 3D voxel model and produces a latent vector on the shape manifold from which a similar and realistic shape can be generated. We develop algorithms for this and other steps of the SNAP processing pipeline and integrate them into a simple modeling tool. Experiments with these algorithms and tool suggest that GANs provide a promising approach to computer-assisted interactive modeling. | Learned Generative 3D Models: More recently, researchers have begun to learn 3D shape spaces for generative models of object classes using variational autoencoders @cite_0 @cite_9 @cite_22 and Generative Adversarial Networks @cite_3 . 
Generative models have been tried for sampling shapes from a distribution @cite_9 @cite_3 , shape completion @cite_25 , shape interpolation @cite_0 @cite_9 @cite_3 , classification @cite_0 @cite_3 , 2D-to-3D mapping @cite_9 @cite_15 @cite_3 , and deformations @cite_32 . 3D GANs in particular produce remarkable results in which shapes generated from random low-dimensional vectors demonstrate all the key structural elements of the learned semantic class @cite_3 . These models are an exciting new development, but are unsuitable for interactive shape editing since they can only synthesize a shape from a latent vector, not from an existing shape. We address that issue. | {
"cite_N": [
"@cite_22",
"@cite_9",
"@cite_32",
"@cite_3",
"@cite_0",
"@cite_15",
"@cite_25"
],
"mid": [
"2338532005",
"2335364074",
"",
"2949551726",
"2511691466",
"2469266052",
"2951755740"
],
"abstract": [
"With the advent of affordable depth sensors, 3D capture becomes more and more ubiquitous and already has made its way into commercial products. Yet, capturing the geometry or complete shapes of everyday objects using scanning devices (e.g. Kinect) still comes with several challenges that result in noise or even incomplete shapes.",
"What is a good vector representation of an object? We believe that it should be generative in 3D, in the sense that it can produce new 3D objects; as well as be predictable from 2D, in the sense that it can be perceived from 2D images. We propose a novel architecture, called the TL-embedding network, to learn an embedding space with these properties. The network consists of two components: (a) an autoencoder that ensures the representation is generative; and (b) a convolutional network that ensures the representation is predictable. This enables tackling a number of tasks including voxel prediction from 2D images and 3D model retrieval. Extensive experimental analysis demonstrates the usefulness and versatility of this embedding.",
"",
"We study the problem of 3D object generation. We propose a novel framework, namely 3D Generative Adversarial Network (3D-GAN), which generates 3D objects from a probabilistic space by leveraging recent advances in volumetric convolutional networks and generative adversarial nets. The benefits of our model are three-fold: first, the use of an adversarial criterion, instead of traditional heuristic criteria, enables the generator to capture object structure implicitly and to synthesize high-quality 3D objects; second, the generator establishes a mapping from a low-dimensional probabilistic space to the space of 3D objects, so that we can sample objects without a reference image or CAD models, and explore the 3D object manifold; third, the adversarial discriminator provides a powerful 3D shape descriptor which, learned without supervision, has wide applications in 3D object recognition. Experiments demonstrate that our method generates high-quality 3D objects, and our unsupervisedly learned features achieve impressive performance on 3D object recognition, comparable with those of supervised learning methods.",
"When working with three-dimensional data, choice of representation is key. We explore voxel-based models, and present evidence for the viability of voxellated representations in applications including shape modeling and object classification. Our key contributions are methods for training voxel-based variational autoencoders, a user interface for exploring the latent space learned by the autoencoder, and a deep convolutional neural network architecture for object classification. We address challenges unique to voxel-based representations, and empirically evaluate our models on the ModelNet benchmark, where we demonstrate a 51.5 relative improvement in the state of the art for object classification.",
"A key goal of computer vision is to recover the underlying 3D structure from 2D observations of the world. In this paper we learn strong deep generative models of 3D structures, and recover these structures from 3D and 2D images via probabilistic inference. We demonstrate high-quality samples and report log-likelihoods on several datasets, including ShapeNet [2], and establish the first benchmarks in the literature. We also show how these models and their inference networks can be trained end-to-end from 2D images. This demonstrates for the first time the feasibility of learning to infer 3D representations of the world in a purely unsupervised manner.",
"3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representations automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet -- a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvement over the-state-of-the-arts in a variety of tasks."
]
} |
1706.05083 | 2626944006 | This work presents a novel approach to jointly tackling Automatic Post-Editing (APE) and Word-Level Quality Estimation (QE) using ensembles of specialized Neural Machine Translation (NMT) systems. Word-level features which have proven effective for QE are included as input factors, expanding the representation of the original source and the machine translation hypothesis, which are used to generate an automatically post-edited hypothesis. We train a suite of NMT models which use different input representations, but share the same output space. These models are then ensembled together, and tuned for both the APE and the QE task. We thus attempt to connect the state-of-the-art approaches to APE and QE within a single framework. Our models achieve state-of-the-art results in both tasks, with the only difference in the tuning step which learns weights for each component of the ensemble. | Alexandrescu and Kirchhoff introduced linguistic factors for neural language models. The core idea is to learn embeddings for linguistic features such as part-of-speech (POS) tags and dependency labels, augmenting the word embeddings of the input with additional features. Recent work has shown that NMT performance can also be improved by concatenating embeddings for additional word-level "factors" to source-word input embeddings @cite_1 . The input representation @math for each source input @math with factors @math thus becomes Eq. : | {
"cite_N": [
"@cite_1"
],
"mid": [
"2537667581"
],
"abstract": [
"Since the first online demonstration of Neural Machine Translation (NMT) by LISA, NMT development has recently moved from laboratory to production systems as demonstrated by several entities announcing roll-out of NMT engines to replace their existing technologies. NMT systems have a large number of training configurations and the training process of such systems is usually very long, often a few weeks, so role of experimentation is critical and important to share. In this work, we present our approach to production-ready systems simultaneously with release of online demonstrators covering a large variety of languages (12 languages, for 32 language pairs). We explore different practical choices: an efficient and evolutive open-source framework; data preparation; network architecture; additional implemented features; tuning for production; etc. We discuss about evaluation methodology, present our first findings and we finally outline further work. Our ultimate goal is to share our expertise to build competitive production systems for \"generic\" translation. We aim at contributing to set up a collaborative framework to speed-up adoption of the technology, foster further research efforts and enable the delivery and adoption to by industry of use-case specific engines integrated in real production workflows. Mastering of the technology would allow us to build translation engines suited for particular needs, outperforming current simplest uniform systems."
]
} |
1706.05070 | 2625960157 | Let @math be a set of boolean functions. We present an algorithm for learning @math from membership queries. Our algorithm asks at most @math membership queries where @math is the minimum worst case number of membership queries for learning @math . When @math is a set of halfspaces over a constant dimension space or a set of variable inequalities, our algorithm runs in polynomial time. The problem we address has practical importance in the field of program synthesis, where the goal is to synthesize a program that meets some requirements. Program synthesis has become popular especially in settings aiming to help end users. In such settings, the requirements are not provided upfront and the synthesizer can only learn them by posing membership queries to the end user. Our work enables such synthesizers to learn the exact requirements while bounding the number of membership queries. | Program synthesis has drawn a lot of attention over the last decade, especially in the setting of synthesis from examples, known as PBE (e.g., @cite_13 @cite_24 @cite_28 @cite_19 @cite_7 @cite_18 @cite_31 @cite_30 @cite_17 @cite_4 @cite_25 @cite_1 @cite_9 @cite_6 ). Commonly, PBE algorithms synthesize programs consistent with the examples, which may not capture the user intent. Some works, however, guarantee to output the target program. For example, CEGIS learns a program via equivalence queries, and oracle-based synthesis assumes that the input space is finite, which allows it to guarantee correctness by exploring all inputs (i.e., without validation queries). Synthesis has also been studied in a setting where a specification and the program's syntax are given and the goal is to find a program over this syntax meeting the specification (e.g., @cite_20 @cite_26 @cite_27 @cite_8 ). | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_26",
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_28",
"@cite_9",
"@cite_1",
"@cite_6",
"@cite_24",
"@cite_19",
"@cite_27",
"@cite_31",
"@cite_13",
"@cite_25",
"@cite_20",
"@cite_17"
],
"mid": [
"2112501366",
"2146105230",
"2160985005",
"2038123353",
"2132525863",
"2294286398",
"",
"2146753709",
"2144951274",
"2060610732",
"1792831685",
"2164611950",
"",
"",
"1975680434",
"2560790486",
"1974514467",
"1905591175"
],
"abstract": [
"Text processing, tedious and error-prone even for programmers, remains one of the most alluring targets of Programming by Example. An examination of real-world text processing tasks found on help forums reveals that many such tasks, beyond simple string manipulation, involve latent hierarchical structures. We present STEPS, a programming system for processing structured and semi-structured text by example. STEPS users create and manipulate hierarchical structure by example. In a between-subject user study on fourteen computer scientists, STEPS compares favorably to traditional programming.",
"Millions of computer end users need to perform tasks over large spreadsheet data, yet lack the programming knowledge to do such tasks automatically. We present a programming by example methodology that allows end users to automate such repetitive tasks. Our methodology involves designing a domain-specific language and developing a synthesis algorithm that can learn programs in that language from user-provided examples. We present instantiations of this methodology for particular domains of tasks: (a) syntactic transformations of strings using restricted forms of regular expressions, conditionals, and loops, (b) semantic transformations of strings involving lookup in relational tables, and (c) layout transformations on spreadsheet tables. We have implemented this technology as an add-in for the Microsoft Excel Spreadsheet system and have evaluated it successfully over several benchmarks picked from various Excel help forums.",
"We present the Storyboard Programming framework, a new synthesis system designed to help programmers write imperative low-level data-structure manipulations. The goal of this system is to bridge the gap between the \"boxes-and-arrows\" diagrams that programmers often use to think about data-structure manipulation algorithms and the low-level imperative code that implements them. The system takes as input a set of partial input-output examples, as well as a description of the high-level structure of the desired solution. From this information, it is able to synthesize low-level imperative implementations in a matter of minutes. The framework is based on a new approach for combining constraint-based synthesis and abstract-interpretation-based shape analysis. The approach works by encoding both the synthesis and the abstract interpretation problem as a constraint satisfaction problem whose solution defines the desired low-level implementation. We have used the framework to synthesize several data-structure manipulations involving linked lists and binary search trees, as well as an insertion operation into an And Inverter Graph.",
"Many computer end-users, such as research scientists and business analysts, need to frequently query a database, yet lack enough programming knowledge to write a correct SQL query. To alleviate this problem, we present a programming by example technique (and its tool implementation, called SQLSynthesizer) to help end-users automate such query tasks. SQLSynthesizer takes from users an example input and output of how the database should be queried, and then synthesizes a SQL query that reproduces the example output from the example input. If the synthesized SQL query is applied to another, potentially larger, database with a similar schema, the synthesized SQL query produces a corresponding result that is similar to the example output. We evaluated SQLSynthesizer on 23 exercises from a classic database textbook and 5 forum questions about writing SQL queries. SQLSynthesizer synthesized correct answers for 15 textbook exercises and all 5 forum questions, and it did so from relatively small examples.",
"We describe the design of a string programming expression language that supports restricted forms of regular expressions, conditionals and loops. The language is expressive enough to represent a wide variety of string manipulation tasks that end-users struggle with. We describe an algorithm based on several novel concepts for synthesizing a desired program in this language from input-output examples. The synthesis algorithm is very efficient taking a fraction of a second for various benchmark examples. The synthesis algorithm is interactive and has several desirable features: it can rank multiple solutions and has fast convergence, it can detect noise in the user input, and it supports an active interaction model wherein the user is prompted to provide outputs on inputs that may have multiple computational interpretations. The algorithm has been implemented as an interactive add-in for Microsoft Excel spreadsheet system. The prototype tool has met the golden test - it has synthesized part of itself, and has been used to solve problems beyond author's imagination.",
"Many advanced programming tools---for both end-users and expert developers---rely on program synthesis to automatically generate implementations from high-level specifications. These tools often need to employ tricky, custom-built synthesis algorithms because they require synthesized programs to be not only correct, but also optimal with respect to a desired cost metric, such as program size. Finding these optimal solutions efficiently requires domain-specific search strategies, but existing synthesizers hard-code the strategy, making them difficult to reuse. This paper presents metasketches, a general framework for specifying and solving optimal synthesis problems. metasketches make the search strategy a part of the problem definition by specifying a fragmentation of the search space into an ordered set of classic sketches. We provide two cooperating search algorithms to effectively solve metasketches. A global optimizing search coordinates the activities of local searches, informing them of the costs of potentially-optimal solutions as they explore different regions of the candidate space in parallel. The local searches execute an incremental form of counterexample-guided inductive synthesis to incorporate information sent from the global search. We present Synapse, an implementation of these algorithms, and show that it effectively solves optimal synthesis problems with a variety of different cost functions. In addition, metasketches can be used to accelerate classic (non-optimal) synthesis by explicitly controlling the search strategy, and we show that Synapse solves classic synthesis problems that state-of-the-art tools cannot.",
"",
"With hundreds of millions of users, spreadsheets are one of the most important end-user applications. Spreadsheets are easy to use and allow users great flexibility in storing data. This flexibility comes at a price: users often treat spreadsheets as a poor man's database, leading to creative solutions for storing high-dimensional data. The trouble arises when users need to answer queries with their data. Data manipulation tools make strong assumptions about data layouts and cannot read these ad-hoc databases. Converting data into the appropriate layout requires programming skills or a major investment in manual reformatting. The effect is that a vast amount of real-world data is \"locked-in\" to a proliferation of one-off formats. We introduce FlashRelate, a synthesis engine that lets ordinary users extract structured relational data from spreadsheets without programming. Instead, users extract data by supplying examples of output relational tuples. FlashRelate uses these examples to synthesize a program in Flare. Flare is a novel extraction language that extends regular expressions with geometric constructs. An interactive user interface on top of FlashRelate lets end users extract data by point-and-click. We demonstrate that correct Flare programs can be synthesized in seconds from a small set of examples for 43 real-world scenarios. Finally, our case study demonstrates FlashRelate's usefulness addressing the widespread problem of data trapped in corporate and government formats.",
"Various document types that combine model and view (e.g., text files, webpages, spreadsheets) make it easy to organize (possibly hierarchical) data, but make it difficult to extract raw data for any further manipulation or querying. We present a general framework FlashExtract to extract relevant data from semi-structured documents using examples. It includes: (a) an interaction model that allows end-users to give examples to extract various fields and to relate them in a hierarchical organization using structure and sequence constructs. (b) an inductive synthesis algorithm to synthesize the intended program from few examples in any underlying domain-specific language for data extraction that has been built using our specified algebra of few core operators (map, filter, merge, and pair). We describe instantiation of our framework to three different domains: text files, webpages, and spreadsheets. On our benchmark comprising 75 documents, FlashExtract is able to extract intended data using an average of 2.36 examples in 0.84 seconds per field.",
"Inductive synthesis, or programming-by-examples (PBE) is gaining prominence with disruptive applications for automating repetitive tasks in end-user programming. However, designing, developing, and maintaining an effective industrial-quality inductive synthesizer is an intellectual and engineering challenge, requiring 1-2 man-years of effort. Our novel observation is that many PBE algorithms are a natural fall-out of one generic meta-algorithm and the domain-specific properties of the operators in the underlying domain-specific language (DSL). The meta-algorithm propagates example-based constraints on an expression to its subexpressions by leveraging associated witness functions, which essentially capture the inverse semantics of the underlying operator. This observation enables a novel program synthesis methodology called data-driven domain-specific deduction (D4), where domain-specific insight, provided by the DSL designer, is separated from the synthesis algorithm. Our FlashMeta framework implements this methodology, allowing synthesizer developers to generate an efficient synthesizer from the mere DSL definition (if properties of the DSL operators have been modeled). In our case studies, we found that 10+ existing industrial-quality mass-market applications based on PBE can be cast as instances of D4. Our evaluation includes reimplementation of some prior works, which in FlashMeta become more efficient, maintainable, and extensible. As a result, FlashMeta-based PBE tools are deployed in several industrial products, including Microsoft PowerShell 3.0 for Windows 10, Azure Operational Management Suite, and Microsoft Cortana digital assistant.",
"Programming by demonstration enables users to easily personalize their applications, automating repetitive tasks simply by executing a few examples. We formalize programming by demonstration as a machine learning problem: given the changes in the application state that result from the user's demonstrated actions, learn the general program that maps from one application state to the next. We present a methodology for learning in this space of complex functions. First we extend version spaces to learn arbitrary functions, not just concepts. Then we introduce the version space algebra, a method for composing simpler version spaces to construct more complex spaces. Finally, we apply our version space algebra to the text-editing domain and describe an implemented system called SMARTedit that learns repetitive text-editing procedures by example. We evaluate our approach by measuring the number of examples required for the system to learn a procedure that works on the remainder of examples, and by an informal user study measuring the effort users spend using our system versus performing the task by hand. The results show that SMARTedit is capable of generalizing correctly from as few as one or two examples, and that users generally save a significant amount of effort when completing tasks with SMARTedit's help.",
"Every day, millions of computer end-users need to perform tasks over large, tabular data, yet lack the programming knowledge to do such tasks automatically. In this work, we present an automatic technique that takes from a user an example of how the user needs to transform a table of data, and provides to the user a program that implements the transformation described by the example. In particular, we present a language of programs TableProg that can describe transformations that real users require. We then present an algorithm ProgFromEx that takes an example input and output table, and infers a program in TableProg that implements the transformation described by the example. When the program is applied to the example input, it reproduces the example output. When the program is applied to another, potentially larger, table with a 'similar' layout as the example input table, then the program produces a corresponding table with a layout that is similar to the example output table. A user can apply ProgFromEx interactively, providing multiple small examples to obtain a program that implements the transformation that the user desires. Moreover, ProgFromEx can help identify 'noisy' examples that contain errors. To evaluate the practicality of TableProg and ProgFromEx, we implemented ProgFromEx as a module for the Microsoft Excel spreadsheet program. We applied the module to automatically implement over 50 table transformations specified by end-users through examples on online Excel help forums. In seconds, ProgFromEx found programs that satisfied the examples and could be applied to larger input tables. This experience demonstrates that TableProg and ProgFromEx can significantly automate the tasks over tabular data that users need to perform.",
"",
"",
"",
"",
"We describe PSketch, a program synthesizer that helps programmers implement concurrent data structures. The system is based on the concept of sketching, a form of synthesis that allows programmers to express their insight about an implementation as a partial program: a sketch. The synthesizer automatically completes the sketch to produce an implementation that matches a given correctness criteria. PSketch is based on a new counterexample-guided inductive synthesis algorithm (CEGIS) that generalizes the original sketch synthesis algorithm from Solar-Lezama et al. to cope efficiently with concurrent programs. The new algorithm produces a correct implementation by iteratively generating candidate implementations, running them through a verifier, and if they fail, learning from the counterexample traces to produce a better candidate; converging to a solution in a handful of iterations. PSketch also extends Sketch with higher-level sketching constructs that allow the programmer to express her insight as a \"soup\" of ingredients from which complicated code fragments must be assembled. Such sketches can be viewed as syntactic descriptions of huge spaces of candidate programs (over 10^8 candidates for some sketches we resolved). We have used the PSketch system to implement several classes of concurrent data structures, including lock-free queues and concurrent sets with fine-grained locking. We have also sketched some other concurrent objects including a sense-reversing barrier and a protocol for the dining philosophers problem; all these sketches resolved in under an hour.",
"Input-output examples are a simple and accessible way of describing program behaviour. Program synthesis from input-output examples has the potential of extending the range of computational tasks achievable by end-users who have no programming knowledge, but can articulate their desired computations by describing input-output behaviour. In this paper, we present Escher, a generic and efficient algorithm that interacts with the user via input-output examples, and synthesizes recursive programs implementing intended behaviour. Escher is parameterized by the components (instructions) that can be used in the program, thus providing a generic synthesis algorithm that can be instantiated to suit different domains. To search through the space of programs, Escher adopts a novel search strategy that utilizes special data structures for inferring conditionals and synthesizing recursive procedures. Our experimental evaluation of Escher demonstrates its ability to efficiently synthesize a wide range of programs, manipulating integers, lists, and trees. Moreover, we show that Escher outperforms a state-of-the-art SAT-based synthesis tool from the literature."
]
} |
1706.05084 | 2626499281 | Topic models have been extensively used to organize and interpret the contents of large, unstructured corpora of text documents. Although topic models often perform well on traditional training vs. test set evaluations, it is often the case that the results of a topic model do not align with human interpretation. This interpretability fallacy is largely due to the unsupervised nature of topic models, which prohibits any user guidance on the results of a model. In this paper, we introduce a semi-supervised method called topic supervised non-negative matrix factorization (TS-NMF) that enables the user to provide labeled example documents to promote the discovery of more meaningful semantic structure of a corpus. In this way, the results of TS-NMF better match the intuition and desired labeling of the user. The core of TS-NMF relies on solving a non-convex optimization problem for which we derive an iterative algorithm that is shown to be monotonic and convergent to a local optimum. We demonstrate the practical utility of TS-NMF on the Reuters and PubMed corpora, and find that TS-NMF is especially useful for conceptual or broad topics, where topic key terms are not well understood. Although identifying an optimal latent structure for the data is not a primary objective of the proposed approach, we find that TS-NMF achieves higher weighted Jaccard similarity scores than the contemporary methods, (unsupervised) NMF and latent Dirichlet allocation, at supervision rates as low as 10% to 20%. | A corpus of @math documents and @math terms can be represented by a @math term-document matrix @math containing non-negative entries @math that quantify the importance of term @math in document @math . The choice of weights @math is dependent upon the application, but is typically calculated using some function of term frequency (TF) and inverse document frequency (IDF) (see @cite_12 for a review). 
Mathematically, topic models are mappings of @math to a lower dimensional representation of @math involving the @math topics describing the documents. Existing topic modeling approaches generally fall into two classes of methods: matrix decomposition methods, which seek a low dimensional representation of @math through a factorization into two or more low-rank matrices, and probabilistic topic modeling methods, which seek a generative statistical model for @math . Here, we describe each class of methods in more detail, paying special attention to the works that are most closely related to TS-NMF. | {
"cite_N": [
"@cite_12"
],
"mid": [
"1978394996"
],
"abstract": [
"The experimental evidence accumulated over the past 20 years indicates that text indexing systems based on the assignment of appropriately weighted single terms produce retrieval results that are superior to those obtainable with other more elaborate text representations. These results depend crucially on the choice of effective term weighting systems. This paper summarizes the insights gained in automatic term weighting, and provides baseline single term indexing models with which other more elaborate content analysis procedures can be compared."
]
} |
1706.05084 | 2626499281 | Topic models have been extensively used to organize and interpret the contents of large, unstructured corpora of text documents. Although topic models often perform well on traditional training vs. test set evaluations, it is often the case that the results of a topic model do not align with human interpretation. This interpretability fallacy is largely due to the unsupervised nature of topic models, which prohibits any user guidance on the results of a model. In this paper, we introduce a semi-supervised method called topic supervised non-negative matrix factorization (TS-NMF) that enables the user to provide labeled example documents to promote the discovery of more meaningful semantic structure of a corpus. In this way, the results of TS-NMF better match the intuition and desired labeling of the user. The core of TS-NMF relies on solving a non-convex optimization problem for which we derive an iterative algorithm that is shown to be monotonic and convergent to a local optimum. We demonstrate the practical utility of TS-NMF on the Reuters and PubMed corpora, and find that TS-NMF is especially useful for conceptual or broad topics, where topic key terms are not well understood. Although identifying an optimal latent structure for the data is not a primary objective of the proposed approach, we find that TS-NMF achieves higher weighted Jaccard similarity scores than the contemporary methods, (unsupervised) NMF and latent Dirichlet allocation, at supervision rates as low as 10% to 20%. | Other related extensions of NMF include constrained NMF @cite_11 , and semi-supervised NMF @cite_19 . Constrained NMF assumes that some subset of columns of @math have class labels that dictate their location in latent space. In this formulation, one constrains the matrix @math so as to enforce the relation that if the @math th and @math th columns of @math have the same class labels, then the @math th and @math th rows of @math are equal. 
Semi-supervised NMF was developed with the objective of identifying the clusters of each column of @math . With this method, the user can provide pairwise constraints on the columns of @math , specifying whether they must or cannot be clustered together. The minimization problem in ) is reformulated as a non-negative tri-factorization of the similarity matrix that provides the clustering information. As we will see in the next section, each of these methods permits much stronger supervision than TS-NMF in that our proposed method constrains only which of a subset of topics are not allowed to be in some subset of documents. | {
"cite_N": [
"@cite_19",
"@cite_11"
],
"mid": [
"2104298926",
"1568451138"
],
"abstract": [
"We present a methodology for analyzing polyphonic musical passages comprised of notes that exhibit a harmonically fixed spectral profile (such as piano notes). Taking advantage of this unique note structure, we can model the audio content of the musical passage by a linear basis transform and use non-negative matrix decomposition methods to estimate the spectral profile and the temporal information of every note. This approach results in a very simple and compact system that is not knowledge-based, but rather learns notes by observation.",
"Non-negative matrix factorization (NMF), as a useful decomposition method for multivariate data, has been widely used in pattern recognition, information retrieval and computer vision. NMF is an effective algorithm to find the latent structure of the data and leads to a parts-based representation. However, NMF is essentially an unsupervised method and can not make use of label information. In this paper, we propose a novel semi-supervised matrix decomposition method, called Constrained Non-negative Matrix Factorization, which takes the label information as additional constraints. Specifically, we require that the data points sharing the same label have the same coordinate in the new representation space. This way, the learned representations can have more discriminating power. We demonstrate the effectiveness of this novel algorithm through a set of evaluations on real world applications."
]
} |
1706.05084 | 2626499281 | Topic models have been extensively used to organize and interpret the contents of large, unstructured corpora of text documents. Although topic models often perform well on traditional training vs. test set evaluations, it is often the case that the results of a topic model do not align with human interpretation. This interpretability fallacy is largely due to the unsupervised nature of topic models, which prohibits any user guidance on the results of a model. In this paper, we introduce a semi-supervised method called topic supervised non-negative matrix factorization (TS-NMF) that enables the user to provide labeled example documents to promote the discovery of more meaningful semantic structure of a corpus. In this way, the results of TS-NMF better match the intuition and desired labeling of the user. The core of TS-NMF relies on solving a non-convex optimization problem for which we derive an iterative algorithm that is shown to be monotonic and convergent to a local optimum. We demonstrate the practical utility of TS-NMF on the Reuters and PubMed corpora, and find that TS-NMF is especially useful for conceptual or broad topics, where topic key terms are not well understood. Although identifying an optimal latent structure for the data is not a primary objective of the proposed approach, we find that TS-NMF achieves higher weighted Jaccard similarity scores than the contemporary methods, (unsupervised) NMF and latent Dirichlet allocation, at supervision rates as low as 10% to 20%. | Latent Semantic Indexing (LSI) @cite_20 @cite_17 is a popular matrix decomposition method utilized in topic modeling. LSI seeks a latent representation of @math via a singular value decomposition (SVD). Though not directly applied to topic modeling, @cite_22 and @cite_1 introduced supervised versions of the decomposition methods Principal Components Analysis (PCA) and SVD, respectively. 
In these works, supervision consisted of incorporating auxiliary information for matrix decomposition in the form of a linear model. | {
"cite_N": [
"@cite_1",
"@cite_22",
"@cite_20",
"@cite_17"
],
"mid": [
"1992569964",
"2019176983",
"2147152072",
""
],
"abstract": [
"A supervised singular value decomposition (SupSVD) model has been developed for supervised dimension reduction where the low rank structure of the data of interest is potentially driven by additional variables measured on the same set of samples. The SupSVD model can make use of the information in the additional variables to accurately extract underlying structures that are more interpretable. The model is general and includes the principal component analysis model and the reduced rank regression model as two extreme cases. The model is formulated in a hierarchical fashion using latent variables, and a modified expectation-maximization algorithm for parameter estimation is developed, which is computationally efficient. The asymptotic properties for the estimated parameters are derived. We use comprehensive simulations and a real data example to illustrate the advantages of the SupSVD model.",
"In regression problems where the number of predictors greatly exceeds the number of observations, conventional regression techniques may produce unsatisfactory results. We describe a technique called supervised principal components that can be applied to this type of problem. Supervised principal components is similar to conventional principal components analysis except that it uses a subset of the predictors selected based on their association with the outcome. Supervised principal components can be applied to regression and generalized regression problems, such as survival analysis. It compares favorably to other techniques for this type of problem, and can also account for the effects of other covariates and help identify which predictor variables are most important. We also provide asymptotic consistency results to help support our empirical findings. These methods could become important tools for DNA microarray data, where they may be used to more accurately diagnose and treat cancer.",
"A new method for automatic indexing and retrieval is described. The approach is to take advantage of implicit higher-order structure in the association of terms with documents (“semantic structure”) in order to improve the detection of relevant documents on the basis of terms found in queries. The particular technique used is singular-value decomposition, in which a large term by document matrix is decomposed into a set of ca. 100 orthogonal factors from which the original matrix can be approximated by linear combination. Documents are represented by ca. 100 item vectors of factor weights. Queries are represented as pseudo-document vectors formed from weighted combinations of terms, and documents with supra-threshold cosine values are returned. Initial tests find this completely automatic method for retrieval to be promising.",
""
]
} |
1706.05084 | 2626499281 | Topic models have been extensively used to organize and interpret the contents of large, unstructured corpora of text documents. Although topic models often perform well on traditional training vs. test set evaluations, it is often the case that the results of a topic model do not align with human interpretation. This interpretability fallacy is largely due to the unsupervised nature of topic models, which prohibits any user guidance on the results of a model. In this paper, we introduce a semi-supervised method called topic supervised non-negative matrix factorization (TS-NMF) that enables the user to provide labeled example documents to promote the discovery of more meaningful semantic structure of a corpus. In this way, the results of TS-NMF better match the intuition and desired labeling of the user. The core of TS-NMF relies on solving a non-convex optimization problem for which we derive an iterative algorithm that is shown to be monotonic and convergent to a local optimum. We demonstrate the practical utility of TS-NMF on the Reuters and PubMed corpora, and find that TS-NMF is especially useful for conceptual or broad topics, where topic key terms are not well understood. Although identifying an optimal latent structure for the data is not a primary objective of the proposed approach, we find that TS-NMF achieves higher weighted Jaccard similarity scores than the contemporary methods, (unsupervised) NMF and latent Dirichlet allocation, at supervision rates as low as 10% to 20%. | Though not our focus in this paper, probabilistic topic models have been widely applied (see @cite_10 for a review). Probabilistic topic models seek a generative statistical model for the matrix @math . 
The most prominent of these approaches is latent Dirichlet allocation (LDA) @cite_8 @cite_5 , which models the generation of @math as a posterior distribution arising from the probability distributions describing the occurrence of documents over topics as well as the occurrence of topics over terms. LDA takes a Bayesian approach to topic modeling and assumes a Dirichlet prior for the sets of weights describing topics over terms and documents over topics. In this way, the resulting model forms a probability distribution, rather than unconstrained weights on @math and @math . Recent extensions of LDA have considered alternative prior specifications and have begun to explore supervision through apriori topic knowledge @cite_7 . | {
"cite_N": [
"@cite_5",
"@cite_10",
"@cite_7",
"@cite_8"
],
"mid": [
"",
"2138107145",
"2048330531",
"1880262756"
],
"abstract": [
"",
"Probabilistic topic modeling provides a suite of tools for the unsupervised analysis of large collections of documents. Topic modeling algorithms can uncover the underlying themes of a collection and decompose its documents according to those themes. This analysis can be used for corpus exploration, document search, and a variety of prediction problems. In this tutorial, I will review the state-of-the-art in probabilistic topic models. I will describe the three components of topic modeling: (1) Topic modeling assumptions (2) Algorithms for computing with topic models (3) Applications of topic models In (1), I will describe latent Dirichlet allocation (LDA), which is one of the simplest topic models, and then describe a variety of ways that we can build on it. These include dynamic topic models, correlated topic models, supervised topic models, author-topic models, bursty topic models, Bayesian nonparametric topic models, and others. I will also discuss some of the fundamental statistical ideas that are used in building topic models, such as distributions on the simplex, hierarchical Bayesian modeling, and models of mixed-membership. In (2), I will review how we compute with topic models. I will describe approximate posterior inference for directed graphical models using both sampling and variational inference, and I will discuss the practical issues and pitfalls in developing these algorithms for topic models. Finally, I will describe some of our most recent work on building algorithms that can scale to millions of documents and documents arriving in a stream. In (3), I will discuss applications of topic models. These include applications to images, music, social networks, and other data in which we hope to uncover hidden patterns. I will describe some of our recent work on adapting topic modeling algorithms to collaborative filtering, legislative modeling, and bibliometrics without citations. 
Finally, I will discuss some future directions and open research problems in topic models.",
"Latent Dirichlet Allocation is an unsupervised graphical model which can discover latent topics in unlabeled data. We propose a mechanism for adding partial supervision, called topic-in-set knowledge, to latent topic modeling. This type of supervision can be used to encourage the recovery of topics which are more relevant to user modeling goals than the topics which would be recovered otherwise. Preliminary experiments on text datasets are presented to demonstrate the potential effectiveness of this method.",
"We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model."
]
} |
1706.05122 | 2624815752 | We propose a novel embedding model that represents relationships among several elements in bibliographic information with high representation ability and flexibility. Based on this model, we present a novel search system that shows the relationships among the elements in the ACL Anthology Reference Corpus. The evaluation results show that our model can achieve a high prediction ability and produce reasonable search results. | Some previous studies have embedded the elements as vectors. Among them, large-scale information network embedding (LINE) @cite_1 embeds a vector for each node in an information network. LINE handles a single type of information and prepares a separate network for each element. By contrast, our model simultaneously handles all types of information. | {
"cite_N": [
"@cite_1"
],
"mid": [
"1888005072"
],
"abstract": [
"This paper studies the problem of embedding very large information networks into low-dimensional vector spaces, which is useful in many tasks such as visualization, node classification, and link prediction. Most existing graph embedding methods do not scale for real world information networks which usually contain millions of nodes. In this paper, we propose a novel network embedding method called the LINE,'' which is suitable for arbitrary types of information networks: undirected, directed, and or weighted. The method optimizes a carefully designed objective function that preserves both the local and global network structures. An edge-sampling algorithm is proposed that addresses the limitation of the classical stochastic gradient descent and improves both the effectiveness and the efficiency of the inference. Empirical experiments prove the effectiveness of the LINE on a variety of real-world information networks, including language networks, social networks, and citation networks. The algorithm is very efficient, which is able to learn the embedding of a network with millions of vertices and billions of edges in a few hours on a typical single machine. The source code of the LINE is available online https: github.com tangjianpku LINE ."
]
} |
1706.04769 | 2625848253 | This paper proposes a new family of algorithms for training neural networks (NNs). These are based on recent developments in the field of non-convex optimization, going under the general name of successive convex approximation (SCA) techniques. The basic idea is to iteratively replace the original (non-convex, highly dimensional) learning problem with a sequence of (strongly convex) approximations, which are both accurate and simple to optimize. Differently from similar ideas (e.g., quasi-Newton algorithms), the approximations can be constructed using only first-order information of the neural network function, in a stochastic fashion, while exploiting the overall structure of the learning problem for a faster convergence. We discuss several use cases, based on different choices for the loss function (e.g., squared loss and cross-entropy loss), and for the regularization of the NN's weights. We experiment on several medium-sized benchmark problems, and on a large-scale dataset involving simulated physical data. The results show how the algorithm outperforms state-of-the-art techniques, providing faster convergence to a better minimum. Additionally, we show how the algorithm can be easily parallelized over multiple computational units without hindering its performance. In particular, each computational unit can optimize a tailored surrogate function defined on a randomly assigned subset of the input variables, whose dimension can be selected depending entirely on the available computational power. | The idea of successively replacing a non-convex objective with a series of convex approximations is not novel in the optimization literature, and it appears in a wide range of previous approaches, including convex-concave procedures @cite_39 and proximal minimization algorithms @cite_26 . 
However, most previous methods imposed stringent conditions on the approximant, such as requiring it to be a global upper bound of the original cost (e.g., the SUM algorithm in @cite_4 ). The SCA methods that we consider here originated in the context of multi-agent systems @cite_25 , and were later extended to deal with general non-convex optimization problems @cite_20 @cite_40 @cite_1 . Under this framework, the (convex) approximation is only required to preserve the first-order information of the original (non-convex) cost with respect to the current estimate, thus making its definition highly flexible. Additionally, the optimization problems can be easily decomposed into subproblems (see later in Section ), and convergence to a stationary point can be guaranteed under mild conditions. Several extensions were made to the basic framework, most notably SCA techniques for decentralized environments @cite_11 @cite_2 , asynchronous processors @cite_10 , and stochastic updates @cite_46 . | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_1",
"@cite_39",
"@cite_40",
"@cite_2",
"@cite_46",
"@cite_10",
"@cite_25",
"@cite_20",
"@cite_11"
],
"mid": [
"2027982384",
"2050968963",
"2295566694",
"",
"2058361915",
"2463661403",
"2661056902",
"2475736732",
"1974661392",
"2237201002",
"2266148951"
],
"abstract": [
"We introduce a proximal alternating linearized minimization (PALM) algorithm for solving a broad class of nonconvex and nonsmooth minimization problems. Building on the powerful Kurdyka---?ojasiewicz property, we derive a self-contained convergence analysis framework and establish that each bounded sequence generated by PALM globally converges to a critical point. Our approach allows to analyze various classes of nonconvex-nonsmooth problems and related nonconvex proximal forward---backward algorithms with semi-algebraic problem's data, the later property being shared by many functions arising in a wide variety of fundamental applications. A by-product of our framework also shows that our results are new even in the convex setting. As an illustration of the results, we derive a new and simple globally convergent algorithm for solving the sparse nonnegative matrix factorization problem.",
"The block coordinate descent (BCD) method is widely used for minimizing a continuous function @math of several block variables. At each iteration of this method, a single block of variables is optimized, while the remaining variables are held fixed. To ensure the convergence of the BCD method, the subproblem of each block variable needs to be solved to its unique global optimal. Unfortunately, this requirement is often too restrictive for many practical scenarios. In this paper, we study an alternative inexact BCD approach which updates the variable blocks by successively minimizing a sequence of approximations of @math which are either locally tight upper bounds of @math or strictly convex local approximations of @math . The main contributions of this work include the characterizations of the convergence conditions for a fairly wide class of such methods, especially for the cases where the objective functions are either nondifferentiable or nonconvex. Our results unify and extend the existing convergence results ...",
"In Part I of this paper, we proposed and analyzed a novel algorithmic framework for the minimization of a nonconvex (smooth) objective function, subject to nonconvex constraints, based on inner convex approximations. This Part II is devoted to the application of the framework to some resource allocation problems in communication networks. In particular, we consider two non-trivial case-study applications, namely: (generalizations of) i) the rate profile maximization in MIMO interference broadcast networks; and the ii) the max-min fair multicast multigroup beamforming problem in a multi-cell environment. We develop a new class of algorithms enjoying the following distinctive features: i) they are across the base stations (with limited signaling) and lead to subproblems whose solutions are computable in closed form; and ii) differently from current relaxation-based schemes (e.g., semidefinite relaxation), they are proved to always converge to d-stationary solutions of the aforementioned class of nonconvex problems. Numerical results show that the proposed (distributed) schemes achieve larger worst-case rates (resp. signal-to-noise interference ratios) than state-of-the-art centralized ones while having comparable computational complexity.",
"",
"We propose a decomposition framework for the parallel optimization of the sum of a differentiable (possibly nonconvex) function and a (block) separable nonsmooth, convex one. The latter term is usually employed to enforce structure in the solution, typically sparsity. Our framework is very flexible and includes both fully parallel Jacobi schemes and Gauss–Seidel (i.e., sequential) ones, as well as virtually all possibilities “in between” with only a subset of variables updated at each iteration. Our theoretical convergence results improve on existing ones, and numerical results on LASSO, logistic regression, and some nonconvex quadratic problems show that the new method consistently outperforms existing algorithms.",
"We study nonconvex distributed optimization in multiagent networks where the communications between nodes is modeled as a time-varying sequence of arbitrary digraphs. We introduce a novel broadcast-based distributed algorithmic framework for the (constrained) minimization of the sum of a smooth (possibly nonconvex and nonseparable) function, i.e., the agents' sum-utility, plus a convex (possibly nonsmooth and nonseparable) regularizer. The latter is usually employed to enforce some structure in the solution, typically sparsity. The proposed method hinges on Successive Convex Approximation (SCA) techniques coupled with i) a tracking mechanism instrumental to locally estimate the gradients of agents' cost functions; and ii) a novel broadcast protocol to disseminate information and distribute the computation among the agents. Asymptotic convergence to stationary solutions is established. A key feature of the proposed algorithm is that it neither requires the double-stochasticity of the consensus matrices (but only column stochasticity) nor the knowledge of the graph sequence to implement. To the best of our knowledge, the proposed framework is the first broadcast-based distributed algorithm for convex and nonconvex constrained optimization over arbitrary, time-varying digraphs. Numerical results show that our algorithm outperforms current schemes on both convex and nonconvex problems.",
"We consider supervised learning problems over training sets in which both the number of training examples and the dimension of the feature vectors are large. We focus on the case where the loss function defining the quality of the parameter we wish to estimate may be non-convex, but also has a convex regularization. We propose a Doubly Stochastic Successive Convex approximation scheme (DSSC) able to handle non-convex regularized expected risk minimization. The method operates by decomposing the decision variable into blocks and operating on random subsets of blocks at each step. The algorithm belongs to the family of successive convex approximation methods since we replace the original non-convex stochastic objective by a strongly convex sample surrogate function, and solve the resulting convex program, for each randomly selected block in parallel. The method operates on subsets of features (block coordinate methods) and training examples (stochastic approximation) at each step. In contrast to many stochastic convex methods whose almost sure behavior is not guaranteed in non-convex settings, DSSC attains almost sure convergence to a stationary solution of the problem. Numerical experiments on a non-convex variant of a lasso regression problem show that DSSC performs favorably in this setting.",
"We propose a novel asynchronous parallel algorithmic framework for the minimization of the sum of a smooth nonconvex function and a convex nonsmooth regularizer, subject to both convex and nonconvex constraints. The proposed framework hinges on successive convex approximation techniques and a novel probabilistic model that captures key elements of modern computational architectures and asynchronous implementations in a more faithful way than current state of the art models. Key features of the proposed framework are: i) it accommodates inconsistent read, meaning that components of the vector variables may be written by some cores while being simultaneously read by others; ii) it covers in a unified way several different specific solution methods, and iii) it accommodates a variety of possible parallel computing architectures. Almost sure convergence to stationary solutions is proved. Numerical results, reported in the companion paper, on both convex and nonconvex problems show our method can consistently outperform existing parallel asynchronous algorithms.",
"We propose a novel decomposition framework for the distributed optimization of general nonconvex sum-utility functions arising naturally in the system design of wireless multi-user interfering systems. Our main contributions are i) the development of the first class of (inexact) Jacobi best-response algorithms with provable convergence, where all the users simultaneously and iteratively solve a suitably convexified version of the original sum-utility optimization problem; ii) the derivation of a general dynamic pricing mechanism that provides a unified view of existing pricing schemes that are based, instead, on heuristics; and iii) a framework that can be easily particularized to well-known applications, giving rise to very efficient practical (Jacobi or Gauss-Seidel) algorithms that outperform existing ad hoc methods proposed for very specific problems. Interestingly, our framework contains as special cases well-known gradient algorithms for nonconvex sum-utility problems, and many block-coordinate descent schemes for convex functions.",
"In this two-part paper, we propose a general algorithmic framework for the minimization of a nonconvex smooth function subject to nonconvex smooth constraints. The algorithm solves a sequence of (separable) strongly convex problems and mantains feasibility at each iteration. Convergence to a stationary solution of the original nonconvex optimization is established. Our framework is very general and flexible; it unifies several existing Successive Convex Approximation (SCA)-based algorithms such as (proximal) gradient or Newton type methods, block coordinate (parallel) descent schemes, difference of convex functions methods, and improves on their convergence properties. More importantly, and differently from current SCA approaches, it naturally leads to distributed and parallelizable implementations for a large class of nonconvex problems. This Part I is devoted to the description of the framework in its generality. In Part II we customize our general methods to several multi-agent optimization problems, mainly in communications and networking; the result is a new class of (distributed) algorithms that compare favorably to existing ad-hoc (centralized) schemes (when they exist).",
"We study nonconvex distributed optimization in multiagent networks with time-varying (nonsymmetric) connectivity. We introduce the first algorithmic framework for the distributed minimization of the sum of a smooth (possibly nonconvex and nonseparable) function—the agents’ sum-utility—plus a convex (possibly nonsmooth and nonseparable) regularizer. The latter is usually employed to enforce some structure in the solution, typically sparsity. The proposed method hinges on successive convex approximation techniques while leveraging dynamic consensus as a mechanism to distribute the computation among the agents: each agent first solves (possibly inexactly) a local convex approximation of the nonconvex original problem, and then performs local averaging operations. Asymptotic convergence to (stationary) solutions of the nonconvex problem is established. Our algorithmic framework is then customized to a variety of convex and nonconvex problems in several fields, including signal processing, communications, networking, and machine learning. Numerical results show that the new method compares favorably to existing distributed algorithms on both convex and nonconvex problems."
]
} |
1706.04769 | 2625848253 | This paper proposes a new family of algorithms for training neural networks (NNs). These are based on recent developments in the field of non-convex optimization, going under the general name of successive convex approximation (SCA) techniques. The basic idea is to iteratively replace the original (non-convex, highly dimensional) learning problem with a sequence of (strongly convex) approximations, which are both accurate and simple to optimize. Differently from similar ideas (e.g., quasi-Newton algorithms), the approximations can be constructed using only first-order information of the neural network function, in a stochastic fashion, while exploiting the overall structure of the learning problem for a faster convergence. We discuss several use cases, based on different choices for the loss function (e.g., squared loss and cross-entropy loss), and for the regularization of the NN's weights. We experiment on several medium-sized benchmark problems, and on a large-scale dataset involving simulated physical data. The results show how the algorithm outperforms state-of-the-art techniques, providing faster convergence to a better minimum. Additionally, we show how the algorithm can be easily parallelized over multiple computational units without hindering its performance. In particular, each computational unit can optimize a tailored surrogate function defined on a randomly assigned subset of the input variables, whose dimension can be selected depending entirely on the available computational power. | To the best of our knowledge, the only works that applied SCA techniques for training NNs are @cite_30 @cite_7 . However, both are specific to a distributed setting with full batch updates, which is not scalable to the training of NNs with a large-scale dataset or with many parameters. The present paper significantly extends @cite_7 to the case of stochastic updates computed from mini-batches of the training data. 
More tangentially related to our paper are the investigations in @cite_13 and @cite_21 , which applied SCA techniques for training support vector models, always in a batch setting. | {
"cite_N": [
"@cite_30",
"@cite_21",
"@cite_13",
"@cite_7"
],
"mid": [
"2553571116",
"2550762110",
"2343897292",
"2543147921"
],
"abstract": [
"The aim of this paper is to develop a theoretical framework for training neural network (NN) models, when data is distributed over a set of agents that are connected to each other through a sparse network topology. The framework builds on a distributed convexification technique, while leveraging dynamic consensus to propagate the information over the network. It can be customized to work with different loss and regularization functions, typically used when training NN models, while guaranteeing provable convergence to a stationary solution under mild assumptions. Interestingly, it naturally leads to distributed architectures where agents solve local optimization problems exploiting parallel multi-core processors. Numerical results corroborate our theoretical findings, and assess the performance for parallel and distributed training of neural networks.",
"",
"The semi-supervised support vector machine (S 3 VM) is a well-known algorithm for performing semi-supervised inference under the large margin principle. In this paper, we are interested in the problem of training a S 3 VM when the labeled and unlabeled samples are distributed over a network of interconnected agents. In particular, the aim is to design a distributed training protocol over networks, where communication is restricted only to neighboring agents and no coordinating authority is present. Using a standard relaxation of the original S 3 VM, we formulate the training problem as the distributed minimization of a non-convex social cost function. To find a (stationary) solution in a distributed manner, we employ two different strategies: (i) a distributed gradient descent algorithm; (ii) a recently developed framework for In-Network Nonconvex Optimization (NEXT), which is based on successive convexifications of the original problem, interleaved by state diffusion steps. Our experimental results show that the proposed distributed algorithms have comparable performance with respect to a centralized implementation, while highlighting the pros and cons of the proposed solutions. To the date, this is the first work that paves the way toward the broad field of distributed semi-supervised learning over networks.",
"Abstract The aim of this paper is to develop a general framework for training neural networks (NNs) in a distributed environment, where training data is partitioned over a set of agents that communicate with each other through a sparse, possibly time-varying, connectivity pattern. In such distributed scenario, the training problem can be formulated as the (regularized) optimization of a non-convex social cost function, given by the sum of local (non-convex) costs, where each agent contributes with a single error term defined with respect to its local dataset. To devise a flexible and efficient solution, we customize a recently proposed framework for non-convex optimization over networks, which hinges on a (primal) convexification–decomposition technique to handle non-convexity, and a dynamic consensus procedure to diffuse information among the agents. Several typical choices for the training criterion (e.g., squared loss, cross entropy, etc.) and regularization (e.g., l 2 norm, sparsity inducing penalties, etc.) are included in the framework and explored along the paper. Convergence to a stationary solution of the social non-convex problem is guaranteed under mild assumptions. Additionally, we show a principled way allowing each agent to exploit a possible multi-core architecture (e.g., a local cloud) in order to parallelize its local optimization step, resulting in strategies that are both distributed (across the agents) and parallel (inside each agent) in nature. A comprehensive set of experimental results validate the proposed approach."
]
} |
1706.04641 | 2626603517 | We introduce pseudo-deterministic interactive proofs (psdAM): interactive proof systems for search problems where the verifier is guaranteed with high probability to output the same output on different executions. As in the case with classical interactive proofs, the verifier is a probabilistic polynomial time algorithm interacting with an untrusted powerful prover. We view pseudo-deterministic interactive proofs as an extension of the study of pseudo-deterministic randomized polynomial time algorithms: the goal of the latter is to find canonical solutions to search problems whereas the goal of the former is to prove that a solution to a search problem is canonical to a probabilistic polynomial time verifier. Alternatively, one may think of the powerful prover as aiding the probabilistic polynomial time verifier to find canonical solutions to search problems, with high probability over the randomness of the verifier. The challenge is that pseudo-determinism should hold not only with respect to the randomness, but also with respect to the prover: a malicious prover should not be able to cause the verifier to output a solution other than the unique canonical one. | Finally, we mention that another notion of uniqueness has recently been studied in the context of interactive proofs by @cite_9 , called unambiguous interactive proofs, where the prover has a unique successful strategy. This again differs from pseudo-deterministic interactive proofs in that we neither assume nor guarantee a unique strategy for the successful prover; we only require that the prover prove that the solution (or witness) the verifier receives is unique (with high probability). | {
"cite_N": [
"@cite_9"
],
"mid": [
"2416431186"
],
"abstract": [
"The celebrated IP=PSPACE Theorem of Lund et-al. (J.ACM 1992) and Shamir (J.ACM 1992), allows an all-powerful but untrusted prover to convince a polynomial-time verifier of the validity of extremely complicated statements (as long as they can be evaluated using polynomial space). The interactive proof system designed for this purpose requires a polynomial number of communication rounds and an exponential-time (polynomial-space complete) prover. In this paper, we study the power of more efficient interactive proof systems. Our main result is that for every statement that can be evaluated in polynomial time and bounded-polynomial space there exists an interactive proof that satisfies the following strict efficiency requirements: (1) the honest prover runs in polynomial time, (2) the verifier is almost linear time (and under some conditions even sub linear), and (3) the interaction consists of only a constant number of communication rounds. Prior to this work, very little was known about the power of efficient, constant-round interactive proofs (rather than arguments). This result represents significant progress on the round complexity of interactive proofs (even if we ignore the running time of the honest prover), and on the expressive power of interactive proofs with polynomial-time honest prover (even if we ignore the round complexity). This result has several applications, and in particular it can be used for verifiable delegation of computation. Our construction leverages several new notions of interactive proofs, which may be of independent interest. One of these notions is that of unambiguous interactive proofs where the prover has a unique successful strategy. Another notion is that of probabilistically checkable interactive proofs (PCIPs) where the verifier only reads a few bits of the transcript in checking the proof (this could be viewed as an interactive extension of PCPs)."
]
} |
1706.04641 | 2626603517 | We introduce pseudo-deterministic interactive proofs (psdAM): interactive proof systems for search problems where the verifier is guaranteed with high probability to output the same output on different executions. As in the case with classical interactive proofs, the verifier is a probabilistic polynomial time algorithm interacting with an untrusted powerful prover. We view pseudo-deterministic interactive proofs as an extension of the study of pseudo-deterministic randomized polynomial time algorithms: the goal of the latter is to find canonical solutions to search problems whereas the goal of the former is to prove that a solution to a search problem is canonical to a probabilistic polynomial time verifier. Alternatively, one may think of the powerful prover as aiding the probabilistic polynomial time verifier to find canonical solutions to search problems, with high probability over the randomness of the verifier. The challenge is that pseudo-determinism should hold not only with respect to the randomness, but also with respect to the prover: a malicious prover should not be able to cause the verifier to output a solution other than the unique canonical one. | One can similarly define pseudo-deterministic @math , and pseudo-deterministic @math , where @math is a 1-round protocol, and @math is a 2-round protocol. One can show that any constant-round interactive protocol can be reduced to a 2-round interactive protocol @cite_2 . Hence, the definition of pseudo-deterministic @math captures the set of all search problems solvable in a constant number of rounds of interaction. Furthermore, in the definition of pseudo-deterministic @math , we use public coins. One can show that any protocol using private coins can be simulated using public coins @cite_3 . (In the case where the prover is not unbounded, private coins may be more powerful than public coins.) | {
"cite_N": [
"@cite_3",
"@cite_2"
],
"mid": [
"1984976620",
"2096390054"
],
"abstract": [
"An interactive proof system is a method by which one party of unlimited resources, called the prover, can convince a party of limited resources, call the verifier, of the truth of a proposition. The verifier may toss coins, ask repeated questions of the prover, and run efficient tests upon the prover's responses before deciding whether to be convinced. This extends the familiar proof system implicit in the notion of NP in that there the verifier may not toss coins or speak, but only listen and verify. Interactive proof systems may not yield proof in the strict mathematical sense: the \"proofs\" are probabilistic with an exponentially small, though non-zero chance of error. We consider two notions of interactive proof system. One, defined by Goldwasser, Micali, and Rackoff [GMR] permits the verifier a coin that can be tossed in private, i.e., a secret source of randomness. The Permission to copy without ice all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and or specific permission. © 1986 ACM 0-89791-193-8 86 0500 0059 $00.75 second, due to Babai, [B] requires that the outcome of the verifier's coin tosses be public and thus accessible to the prover. Our main result is that these two systems are equivalent in power with respect to language recognition. The notion of interactive proof system may be seen to yield a probabilistic analog to NP much as BPP is the probabilistic analog to P. We define the probabilistic, nondeterministic, polynomial time Turing machine and show that it is also equivalent in power to these systems.",
"In a previous paper [BS] we proved, using the elements of the theory of nilpotent groups , that some of the fundamental computational problems in matriz groups belong to NP . These problems were also shown to belong to coNP , assuming an unproven hypothesis concerning finite simple groups . The aim of this paper is to replace most of the (proven and unproven) group theory of [BS] by elementary combinatorial arguments. The result we prove is that relative to a random oracle B , the mentioned matrix group problems belong to ( NP∩coNP ) B . The problems we consider are membership in and order of a matrix group given by a list of generators. These problems can be viewed as multidimensional versions of a close relative of the discrete logarithm problem. Hence NP∩coNP might be the lowest natural complexity class they may fit in. We remark that the results remain valid for black box groups where group operations are performed by an oracle. The tools we introduce seem interesting in their own right. We define a new hierarchy of complexity classes AM ( k ) “just above NP ”, introducing Arthur vs. Merlin games , the bounded-away version of Papdimitriou's Games against Nature . We prove that in spite of their analogy with the polynomial time hierarchy, the finite levels of this hierarchy collapse to AM=AM (2). Using a combinatorial lemma on finite groups [BE], we construct a game by which the nondeterministic player (Merlin) is able to convince the random player (Arthur) about the relation [ G ]= N provided Arthur trusts conclusions based on statistical evidence (such as a Slowly-Strassen type “proof” of primality). One can prove that AM consists precisely of those languages which belong to NP B for almost every oracle B . Our hierarchy has an interesting, still unclarified relation to another hierarchy, obtained by removing the central ingredient from the User vs. Expert games of Goldwasser, Micali and Rackoff."
]
} |
1706.04641 | 2626603517 | We introduce pseudo-deterministic interactive proofs (psdAM): interactive proof systems for search problems where the verifier is guaranteed with high probability to output the same output on different executions. As in the case with classical interactive proofs, the verifier is a probabilistic polynomial time algorithm interacting with an untrusted powerful prover. We view pseudo-deterministic interactive proofs as an extension of the study of pseudo-deterministic randomized polynomial time algorithms: the goal of the latter is to find canonical solutions to search problems whereas the goal of the former is to prove that a solution to a search problem is canonical to a probabilistic polynomial time verifier. Alternatively, one may think of the powerful prover as aiding the probabilistic polynomial time verifier to find canonical solutions to search problems, with high probability over the randomness of the verifier. The challenge is that pseudo-determinism should hold not only with respect to the randomness, but also with respect to the prover: a malicious prover should not be able to cause the verifier to output a solution other than the unique canonical one. | We note that in the above protocol, the prover only needs to have the power to solve graph isomorphism (and graph non-isomorphism). Also, we note that the above protocol uses private coins. While the protocol can be simulated with a public coin protocol @cite_3, the simulation requires the prover to be very powerful. It remains open to determine whether there is a pseudo-deterministic @math protocol for graph isomorphism which uses public coins, and uses a "weak" prover (one which is a polynomial time machine with access to an oracle solving graph isomorphism). | {
"cite_N": [
"@cite_3"
],
"mid": [
"1984976620"
],
"abstract": [
"An interactive proof system is a method by which one party of unlimited resources, called the prover, can convince a party of limited resources, called the verifier, of the truth of a proposition. The verifier may toss coins, ask repeated questions of the prover, and run efficient tests upon the prover's responses before deciding whether to be convinced. This extends the familiar proof system implicit in the notion of NP in that there the verifier may not toss coins or speak, but only listen and verify. Interactive proof systems may not yield proof in the strict mathematical sense: the \"proofs\" are probabilistic with an exponentially small, though non-zero chance of error. We consider two notions of interactive proof system. One, defined by Goldwasser, Micali, and Rackoff [GMR] permits the verifier a coin that can be tossed in private, i.e., a secret source of randomness. The second, due to Babai, [B] requires that the outcome of the verifier's coin tosses be public and thus accessible to the prover. Our main result is that these two systems are equivalent in power with respect to language recognition. The notion of interactive proof system may be seen to yield a probabilistic analog to NP much as BPP is the probabilistic analog to P. We define the probabilistic, nondeterministic, polynomial time Turing machine and show that it is also equivalent in power to these systems."
]
} |
1706.04652 | 2626073992 | We want to build robots that are useful in unstructured real world applications, such as doing work in the household. Grasping in particular is an important skill in this domain, yet it remains a challenge. One of the key hurdles is handling unexpected changes or motion in the objects being grasped and kinematic noise or other errors in the robot. This paper proposes an approach to learning a closed-loop controller for robotic grasping that dynamically guides the gripper to the object. We use a wrist-mounted sensor to acquire depth images in front of the gripper and train a convolutional neural network to learn a distance function to true grasps for grasp configurations over an image. The training sensor data is generated in simulation, a major advantage over previous work that uses real robot experience, which is costly to obtain. Despite being trained in simulation, our approach works well on real noisy sensor images. We compare our controller in simulated and real robot experiments to a strong baseline for grasp pose detection, and find that our approach significantly outperforms the baseline in the presence of kinematic noise, perceptual errors and disturbances of the object during grasping. | Recent work in grasp perception has utilized deep learning to localize grasp configurations in a way that is analogous to object detection in computer vision @cite_16 @cite_3 @cite_9 @cite_0 @cite_8 . Such methods take potentially noisy sensor data as input and produce viable grasp pose estimates as output. However, these grasp detection methods typically suffer from perceptual errors and inaccurate robot kinematics @cite_7 . 
In addition, extending traditional one-shot grasp perception methods to re-detect grasps in a loop while the sensor mounted on the gripper gets closer to the objects is difficult, because these approaches are trained to find grasps with large distances to the sensor (e.g., to see the entire object) @cite_7 @cite_16 @cite_0 @cite_9 . | {
"cite_N": [
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_3",
"@cite_0",
"@cite_16"
],
"mid": [
"2290564286",
"",
"2950988471",
"1503925285",
"2949098821",
"1999156278"
],
"abstract": [
"This paper considers the problem of grasp pose detection in point clouds. We follow a general algorithmic structure that first generates a large set of 6-DOF grasp candidates and then classifies each of them as a good or a bad grasp. Our focus in this paper is on improving the second step by using depth sensor scans from large online datasets to train a convolutional neural network. We propose two new representations of grasp candidates, and we quantify the effect of using prior knowledge of two forms: instance or category knowledge of the object to be grasped, and pretraining the network on simulated depth data obtained from idealized CAD models. Our analysis shows that a more informative grasp candidate representation as well as pretraining and prior knowledge significantly improve grasp detection. We evaluate our approach on a Baxter Research Robot and demonstrate an average grasp success rate of 93% in dense clutter. This is a 20% improvement compared to our prior work.",
"",
"We present an accurate, real-time approach to robotic grasp detection based on convolutional neural networks. Our network performs single-stage regression to graspable bounding boxes without using standard sliding window or region proposal techniques. The model outperforms state-of-the-art approaches by 14 percentage points and runs at 13 frames per second on a GPU. Our network can simultaneously perform classification so that in a single step it recognizes the object and finds a good grasp rectangle. A modification to this model predicts multiple grasps per object by using a locally constrained prediction mechanism. The locally constrained model performs significantly better, especially on objects that can be grasped in a variety of ways.",
"We propose a new large-scale database containing grasps that are applied to a large set of objects from numerous categories. These grasps are generated in simulation and are annotated with different grasp stability metrics. We use a descriptive and efficient representation of the local object shape at which each grasp is applied. Given this data, we present a two-fold analysis: (i) We use crowdsourcing to analyze the correlation of the metrics with grasp success as predicted by humans. The results show that the metric based on physics simulation is a more consistent predictor for grasp success than the standard ε-metric. The results also support the hypothesis that human labels are not required for good ground truth grasp data. Instead the physics-metric can be used to generate datasets in simulation that may then be used to bootstrap learning in the real world. (ii) We apply a deep learning method and show that it can better leverage the large-scale database for prediction of grasp success compared to logistic regression. Furthermore, the results suggest that labels based on the physics-metric are less noisy than those from the ε-metric and therefore lead to a better classification performance.",
"Current learning-based robot grasping approaches exploit human-labeled datasets for training the models. However, there are two problems with such a methodology: (a) since each object can be grasped in multiple ways, manually labeling grasp locations is not a trivial task; (b) human labeling is biased by semantics. While there have been attempts to train robots using trial-and-error experiments, the amount of data used in such experiments remains substantially low and hence makes the learner prone to over-fitting. In this paper, we take the leap of increasing the available training data to 40 times more than prior work, leading to a dataset size of 50K data points collected over 700 hours of robot grasping attempts. This allows us to train a Convolutional Neural Network (CNN) for the task of predicting grasp locations without severe overfitting. In our formulation, we recast the regression problem to an 18-way binary classification over image patches. We also present a multi-stage learning approach where a CNN trained in one stage is used to collect hard negatives in subsequent stages. Our experiments clearly show the benefit of using large-scale datasets (and multi-stage training) for the task of grasping. We also compare to several baselines and show state-of-the-art performance on generalization to unseen objects for grasping.",
"We consider the problem of detecting robotic grasps in an RGB-D view of a scene containing objects. In this work, we apply a deep learning approach to solve this problem, which avoids time-consuming hand-design of features. This presents two main challenges. First, we need to evaluate a huge number of candidate grasps. In order to make detection fast and robust, we present a two-step cascaded system with two deep networks, where the top detections from the first are re-evaluated by the second. The first network has fewer features, is faster to run, and can effectively prune out unlikely candidate grasps. The second, with more features, is slower but has to run only on the top few detections. Second, we need to handle multimodal inputs effectively, for which we present a method that applies structured regularization on the weights based on multimodal group regularization. We show that our method improves performance on an RGBD robotic grasping dataset, and can be used to successfully execute grasps on two different robotic platforms."
]
} |
1706.04652 | 2626073992 | We want to build robots that are useful in unstructured real world applications, such as doing work in the household. Grasping in particular is an important skill in this domain, yet it remains a challenge. One of the key hurdles is handling unexpected changes or motion in the objects being grasped and kinematic noise or other errors in the robot. This paper proposes an approach to learning a closed-loop controller for robotic grasping that dynamically guides the gripper to the object. We use a wrist-mounted sensor to acquire depth images in front of the gripper and train a convolutional neural network to learn a distance function to true grasps for grasp configurations over an image. The training sensor data is generated in simulation, a major advantage over previous work that uses real robot experience, which is costly to obtain. Despite being trained in simulation, our approach works well on real noisy sensor images. We compare our controller in simulated and real robot experiments to a strong baseline for grasp pose detection, and find that our approach significantly outperforms the baseline in the presence of kinematic noise, perceptual errors and disturbances of the object during grasping. | Visual servoing methods use visual feedback to move a camera to a target pose that depends directly on the object pose. While there are numerous methods in this area @cite_12, only a small amount of previous work addresses using visual feedback directly for grasping @cite_14 @cite_18 @cite_1. In contrast to our work, the existing methods require manual feature design or specification. An active vision approach acquires sensor data from different view points to optimize surface reconstruction for reliable grasping during grasp planning @cite_17. However, the actual grasping does not use sensor feedback. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_1",
"@cite_12",
"@cite_17"
],
"mid": [
"2143320034",
"2171066085",
"2098185782",
"1560270123",
"2561266087"
],
"abstract": [
"This paper develops an estimation framework for sensor-guided manipulation of a rigid object via a robot arm. Using an unscented Kalman Filter (UKF), the method combines dense range information (from stereo cameras and 3D ranging sensors) as well as visual appearance features and silhouettes of the object and manipulator to track both an object-fixed frame location as well as a manipulator tool or palm frame location. If available, tactile data is also incorporated. By using these different imaging sensors and different imaging properties, we can leverage the advantages of each sensor and each feature type to realize more accurate and robust object and reference frame tracking. The method is demonstrated using the DARPA ARM-S system, consisting of a Barrett™WAM manipulator.",
"Using visual feedback to control the movement of the end-effector is a common approach for robust execution of robot movements in real-world scenarios. Over the years several visual servoing algorithms have been developed and implemented for various types of robot hardware. In this paper, we present a hybrid approach which combines visual estimations with kinematically determined orientations to control the movement of a humanoid arm. The approach has been evaluated with the humanoid robot ARMAR III using the stereo system of the active head for perception as well as the torso and arms equipped with five finger hands for actuation. We show how a robust visual perception is used to control complex robots without any hand-eye calibration. Furthermore, the robustness of the system is improved by estimating the hand position in case of failed visual hand tracking due to lightning artifacts or occlusions. The proposed control scheme is based on the fusion of the sensor channels for visual perception, force measurement and motor encoder data. The combination of these different data sources results in a reactive, visually guided control that allows the robot ARMAR-III to execute grasping tasks in a real-world scenario.",
"Despite extensive research conducted in machine vision for harvesting robots, practical success in this field of agrobotics is still limited. This article presents a comprehensive review of classical and state-of-the-art machine vision solutions employed in such systems, with special emphasis on the visual cues and machine vision algorithms used. We discuss the advantages and limitations of each approach and we examine these capacities in light of the challenges ahead. We conclude with suggested directions from the general computer vision literature which could assist our research community meet these challenges and bring us closer to the goal of practical selective fruit harvesting robots.",
"The second edition of this handbook provides a state-of-the-art overview of the various aspects in the rapidly developing field of robotics. Reaching for the human frontier, robotics is vigorously engaged in the growing challenges of new emerging domains. Interacting, exploring, and working with humans, the new generation of robots will increasingly touch people and their lives. The credible prospect of practical robots among humans is the result of the scientific endeavour of half a century of robotic developments that established robotics as a modern scientific discipline. The ongoing vibrant expansion and strong growth of the field during the last decade has fueled this second edition of the Springer Handbook of Robotics. The first edition of the handbook soon became a landmark in robotics publishing and won the American Association of Publishers PROSE Award for Excellence in Physical Sciences & Mathematics as well as the organization's Award for Engineering & Technology. The second edition of the handbook, edited by two internationally renowned scientists with the support of an outstanding team of seven part editors and more than 200 authors, continues to be an authoritative reference for robotics researchers, newcomers to the field, and scholars from related disciplines. The contents have been restructured to achieve four main objectives: the enlargement of foundational topics for robotics, the enlightenment of design of various types of robotic systems, the extension of the treatment on robots moving in the environment, and the enrichment of advanced robotics applications. Further to an extensive update, fifteen new chapters have been introduced on emerging topics, and a new generation of authors have joined the handbook's team. A novel addition to the second edition is a comprehensive collection of multimedia references to more than 700 videos, which bring valuable insight into the contents.
The videos can be viewed directly augmented into the text with a smartphone or tablet using a unique and specially designed app.",
"How should a robot direct active vision so as to ensure reliable grasping? We answer this question for the case of dexterous grasping of unfamiliar objects. By dexterous grasping we simply mean grasping by any hand with more than two fingers, such that the robot has some choice about where to place each finger. Such grasps typically fail in one of two ways, either unmodeled objects in the scene cause collisions or object reconstruction is insufficient to ensure that the grasp points provide a stable force closure. These problems can be solved more easily if active sensing is guided by the anticipated actions. Our approach has three stages. First, we take a single view and generate candidate grasps from the resulting partial object reconstruction. Second, we drive the active vision approach to maximise surface reconstruction quality around the planned contact points. During this phase, the anticipated grasp is continually refined. Third, we direct gaze to improve the safety of the planned reach to grasp trajectory. We show, on a dexterous manipulator with a camera on the wrist, that our approach (80.4 success rate) outperforms a randomised algorithm (64.3 success rate)."
]
} |
1706.04652 | 2626073992 | We want to build robots that are useful in unstructured real world applications, such as doing work in the household. Grasping in particular is an important skill in this domain, yet it remains a challenge. One of the key hurdles is handling unexpected changes or motion in the objects being grasped and kinematic noise or other errors in the robot. This paper proposes an approach to learning a closed-loop controller for robotic grasping that dynamically guides the gripper to the object. We use a wrist-mounted sensor to acquire depth images in front of the gripper and train a convolutional neural network to learn a distance function to true grasps for grasp configurations over an image. The training sensor data is generated in simulation, a major advantage over previous work that uses real robot experience, which is costly to obtain. Despite being trained in simulation, our approach works well on real noisy sensor images. We compare our controller in simulated and real robot experiments to a strong baseline for grasp pose detection, and find that our approach significantly outperforms the baseline in the presence of kinematic noise, perceptual errors and disturbances of the object during grasping. | The authors of @cite_5 were among the first to incorporate deep learning for grasp perception using visual feedback. However, their approach requires months of training on multiple physical robots. Moreover, they require a CNN with 17 layers that must be trained from scratch. In addition, their use of a static camera makes it difficult to adapt to different grasping scenarios, e.g., a different table height or a different grasp approach direction. Because we generate training data in simulation and our CNN has only a few layers, our approach is simpler.
In addition, since we mount the camera to the wrist of the robot arm, our approach is more flexible because it can be applied to any grasping scenario -- not just those with a particular configuration relative to the camera. | {
"cite_N": [
"@cite_5"
],
"mid": [
"2293467699"
],
"abstract": [
"We describe a learning-based approach to hand-eye coordination for robotic grasping from monocular images. To learn hand-eye coordination for grasping, we trained a large convolutional neural network to predict the probability that task-space motion of the gripper will result in successful grasps, using only monocular camera images and independently of camera calibration or the current robot pose. This requires the network to observe the spatial relationship between the gripper and objects in the scene, thus learning hand-eye coordination. We then use this network to servo the gripper in real time to achieve successful grasps. To train our network, we collected over 800,000 grasp attempts over the course of two months, using between 6 and 14 robotic manipulators at any given time, with differences in camera placement and hardware. Our experimental evaluation demonstrates that our method achieves effective real-time control, can successfully grasp novel objects, and corrects mistakes by continuous servoing."
]
} |
1706.04764 | 2842078607 | Representative subset selection (RSS) is an important tool for users to draw insights from massive datasets. Existing literature models RSS as the submodular maximization problem to capture the “diminishing returns” property of the representativeness of selected subsets, but often only has a single constraint (e.g., cardinality), which limits its applications in many real-world problems. To capture the data recency issue and support different types of constraints, we formulate dynamic RSS in data streams as maximizing submodular functions subject to general @math d-knapsack constraints (SMDK) over sliding windows. We propose a KnapWindow framework (KW) for SMDK. KW utilizes the KnapStream algorithm (KS) for SMDK in append-only streams as a subroutine. It maintains a sequence of checkpoints and KS instances over the sliding window. Theoretically, KW is @math 1-ɛ1+d-approximate for SMDK. Furthermore, we propose a KnapWindowPlus framework (KW @math +) to improve upon KW. KW @math + builds an index SubKnapChk to manage the checkpoints and KS instances. SubKnapChk deletes a checkpoint whenever it can be approximated by its successors. By keeping much fewer checkpoints, KW @math + achieves higher efficiency than KW while still guaranteeing a @math 1-ɛ'2+2d-approximate solution for SMDK. Finally, we evaluate the efficiency and solution quality of KW and KW @math + in real-world datasets. The experimental results demonstrate that KW achieves more than two orders of magnitude speedups over the batch baseline and preserves high-quality solutions for SMDK over sliding windows. KW @math + further runs 5-10 times faster than KW while providing solutions with equivalent or even better utilities. | Recently, we have witnessed the growth of studies in the data stream model. Submodular maximization in append-only streams, where new elements arrive continuously but old ones never expire, is studied in @cite_38 @cite_24 @cite_3 @cite_16.
@cite_13 further propose a method for deletion-robust submodular maximization, where a limited number of old elements can be deleted from the stream. However, these techniques neither support general constraints beyond cardinality nor consider the recency of selected subsets. In many scenarios, data streams are highly dynamic and evolve over time. Therefore, recent elements are more important and interesting than earlier ones. The sliding window model @cite_8 is widely adopted in many data-driven applications @cite_18 @cite_10 to capture the recency constraint. Submodular maximization over sliding windows is still largely unexplored and, to the best of our knowledge, there is only one existing method @cite_40 for dynamic submodular maximization over sliding windows. But it is specific to the cardinality constraint. In this paper, we propose more general frameworks for SMDK than any existing ones, which work with various submodular utility functions, support @math -knapsack constraints, and maintain the representatives over sliding windows. | {
"cite_N": [
"@cite_38",
"@cite_18",
"@cite_8",
"@cite_10",
"@cite_3",
"@cite_24",
"@cite_40",
"@cite_16",
"@cite_13"
],
"mid": [
"112063162",
"2301494863",
"2004110412",
"2592120485",
"2158504911",
"1997959284",
"2540677289",
"2613626598",
"2741807132"
],
"abstract": [
"We generalize the graph streaming model to hypergraphs. In this streaming model, hyperedges are arriving online and any computation has to be done on-the-fly using a small amount of space. Each hyperedge can be viewed as a set of elements (nodes), so we refer to our proposed model as the “set-streaming” model of computation. We consider the problem of “maximum coverage”, in which k sets have to be selected that maximize the total weight of the covered elements. In the set-streaming model of computation, we show that our algorithm for maximum coverage achieves an approximation factor of 1/4. When multiple passes are allowed, we also provide a Θ(log n) approximation algorithm for the set-cover. We next consider a multi-topic blog-watch application, an extension of blog-alert like applications for handling simultaneous multiple-topic requests. We show how the problems of maximum coverage and set-cover in the set-streaming model can be utilized to give efficient online solutions to this problem. We verify the effectiveness of our methods both on synthetic and real weblog data.",
"As the prevalence of social media and GPS-enabled devices, a massive amount of geo-textual data has been generated in a stream fashion, leading to a variety of applications such as location-based recommendation and information dissemination. In this paper, we investigate a novel real-time top-k monitoring problem over sliding window of streaming data; that is, we continuously maintain the top-k most relevant geo-textual messages (e.g., geo-tagged tweets) for a large number of spatial-keyword subscriptions (e.g., registered users interested in local events) simultaneously. To provide the most recent information under controllable memory cost, sliding window model is employed on the streaming geo-textual data. To the best of our knowledge, this is the first work to study top-k spatial-keyword publish subscribe over sliding window. A novel system, called Skype (Top-k Spatial-keyword Publish Subscribe), is proposed in this paper. In Skype, to continuously maintain top-k results for massive subscriptions, we devise a novel indexing structure upon subscriptions such that each incoming message can be immediately delivered on its arrival. Moreover, to reduce the expensive top-k re-evaluation cost triggered by message expiration, we develop a novel cost-based k-skyband technique to reduce the number of re-evaluations in a cost-effective way. Extensive experiments verify the great efficiency and effectiveness of our proposed techniques.",
"We consider the problem of maintaining aggregates and statistics over data streams, with respect to the last N data elements seen so far. We refer to this model as the sliding window model. We consider the following basic problem: Given a stream of bits, maintain a count of the number of 1's in the last N elements seen from the stream. We show that, using @math bits of memory, we can estimate the number of 1's to within a factor of @math . We also give a matching lower bound of @math memory bits for any deterministic or randomized algorithms. We extend our scheme to maintain the sum of the last N positive integers and provide matching upper and lower bounds for this more general problem as well. We also show how to efficiently compute the Lp norms ( @math ) of vectors in the sliding window model using our techniques. Using our algorithm, one can adapt many other techniques to work for the sliding window model with a multiplicative overhead of @math in memory and a @math factor loss in accuracy. These include maintaining approximate histograms, hash tables, and statistics or aggregates such as sum and averages.",
"Influence maximization (IM), which selects a set of k users (called seeds) to maximize the influence spread over a social network, is a fundamental problem in a wide range of applications such as viral marketing and network monitoring. Existing IM solutions fail to consider the highly dynamic nature of social influence, which results in either poor seed qualities or long processing time when the network evolves. To address this problem, we define a novel IM query named Stream Influence Maximization (SIM) on social streams. Technically, SIM adopts the sliding window model and maintains a set of k seeds with the largest influence value over the most recent social actions. Next, we propose the Influential Checkpoints (IC) framework to facilitate continuous SIM query processing. The IC framework creates a checkpoint for each window shift and ensures an ε-approximate solution. To improve its efficiency, we further devise a Sparse Influential Checkpoints (SIC) framework which selectively keeps O(log N / β) checkpoints for a sliding window of size N and maintains an ε(1-β)/2-approximate solution. Experimental results on both real-world and synthetic datasets confirm the effectiveness and efficiency of our proposed frameworks against the state-of-the-art IM approaches.",
"We consider the problem of extracting informative exemplars from a data stream. Examples of this problem include exemplar-based clustering and nonparametric inference such as Gaussian process regression on massive data sets. We show that these problems require maximization of a submodular function that captures the informativeness of a set of exemplars, over a data stream. We develop an efficient algorithm, Stream-Greedy, which is guaranteed to obtain a constant fraction of the value achieved by the optimal solution to this NP-hard optimization problem. We extensively evaluate our algorithm on large real-world data sets.",
"How can one summarize a massive data set \"on the fly\", i.e., without even having seen it in its entirety? In this paper, we address the problem of extracting representative elements from a large stream of data. I.e., we would like to select a subset of say k data points from the stream that are most representative according to some objective function. Many natural notions of \"representativeness\" satisfy submodularity, an intuitive notion of diminishing returns. Thus, such problems can be reduced to maximizing a submodular set function subject to a cardinality constraint. Classical approaches to submodular maximization require full access to the data set. We develop the first efficient streaming algorithm with constant factor 1/2 - ε approximation guarantee to the optimum solution, requiring only a single pass through the data, and memory independent of data size. In our experiments, we extensively evaluate the effectiveness of our approach on several applications, including training large-scale kernel methods and exemplar-based clustering, on millions of data points. We observe that our streaming method, while achieving practically the same utility value, runs about 100 times faster than previous work.",
"Maximizing submodular functions under cardinality constraints lies at the core of numerous data mining and machine learning applications, including data diversification, data summarization, and coverage problems. In this work, we study this question in the context of data streams, where elements arrive one at a time, and we want to design low-memory and fast update-time algorithms that maintain a good solution. Specifically, we focus on the sliding window model, where we are asked to maintain a solution that considers only the last W items. In this context, we provide the first non-trivial algorithm that maintains a provable approximation of the optimum using space sublinear in the size of the window. In particular we give a 1/3 − ε approximation algorithm that uses space polylogarithmic in the spread of the values of the elements, δ, and linear in the solution size k for any constant ε > 0. At the same time, processing each element only requires a polylogarithmic number of evaluations of the function itself. When a better approximation is desired, we show a different algorithm that, at the cost of using more memory, provides a 1/2 − ε approximation, and allows a tunable trade-off between average update time and space. This algorithm matches the best known approximation guarantees for submodular optimization in insertion-only streams, a less general formulation of the problem. We demonstrate the efficacy of the algorithms on a number of real world datasets, showing that their practical performance far exceeds the theoretical bounds. The algorithms preserve high quality solutions in streams with millions of items, while storing a negligible fraction of them.",
"Today's social platforms, such as Twitter and Facebook, continuously generate massive volumes of data. The resulting data streams exceed any reasonable limit for permanent storage, especially since data is often redundant, overlapping, sparse, and generally of low value. This calls for means to retain solely a small fraction of the data in an online manner. In this paper, we propose techniques to effectively decide which data to retain, such that the induced loss of information, the regret of neglecting certain data, is minimized. These techniques enable not only efficient processing of massive streaming data, but are also adaptive and address the dynamic nature of social media. Experiments on large-scale real-world datasets illustrate the feasibility of our approach in terms of both, runtime and information quality.",
"How can we summarize a dynamic data stream when elements selected for the summary can be deleted at any time? This is an important challenge in online services, where the users generating the data may decide to exercise their right to restrict the service provider from using (part of) their data due to privacy concerns. Motivated by this challenge, we introduce the dynamic deletion-robust submodular maximization problem. We develop the first resilient streaming algorithm, called ROBUST-STREAMING, with a constant factor approximation guarantee to the optimum solution. We evaluate the effectiveness of our approach on several real-world applications, including summarizing (1) streams of geo-coordinates (2); streams of images; and (3) click-stream log data, consisting of 45 million feature vectors from a news recommendation task."
]
} |
1706.04764 | 2842078607 | Representative subset selection (RSS) is an important tool for users to draw insights from massive datasets. Existing literature models RSS as the submodular maximization problem to capture the “diminishing returns” property of the representativeness of selected subsets, but often only has a single constraint (e.g., cardinality), which limits its applications in many real-world problems. To capture the data recency issue and support different types of constraints, we formulate dynamic RSS in data streams as maximizing submodular functions subject to general @math d-knapsack constraints (SMDK) over sliding windows. We propose a KnapWindow framework (KW) for SMDK. KW utilizes the KnapStream algorithm (KS) for SMDK in append-only streams as a subroutine. It maintains a sequence of checkpoints and KS instances over the sliding window. Theoretically, KW is @math (1-ɛ)/(1+d)-approximate for SMDK. Furthermore, we propose a KnapWindowPlus framework (KW @math +) to improve upon KW. KW @math + builds an index SubKnapChk to manage the checkpoints and KS instances. SubKnapChk deletes a checkpoint whenever it can be approximated by its successors. By keeping much fewer checkpoints, KW @math + achieves higher efficiency than KW while still guaranteeing a @math (1-ɛ')/(2+2d)-approximate solution for SMDK. Finally, we evaluate the efficiency and solution quality of KW and KW @math + in real-world datasets. The experimental results demonstrate that KW achieves more than two orders of magnitude speedups over the batch baseline and preserves high-quality solutions for SMDK over sliding windows. KW @math + further runs 5-10 times faster than KW while providing solutions with equivalent or even better utilities. | Submodular maximization has been extensively studied in recent years. Due to its theoretical consequences, it is seen as a “silver bullet” for many different applications @cite_27 @cite_22 @cite_15 @cite_28 @cite_5 . 
Here, we focus on reviewing existing literature on submodular maximization (SM) that is closely related to our paper: SM subject to knapsack constraints and SM in data streams. Sviridenko @cite_19 and @cite_11 first propose approximation algorithms for SM subject to @math 1-knapsack and @math d-knapsack constraints respectively. Both algorithms have high-order polynomial time complexity and are not scalable to massive datasets. More efficient algorithms for SM subject to @math 1-knapsack constraints are proposed in @cite_22 @cite_21 and @cite_7 respectively. These algorithms cannot be applied to SMDK directly. @cite_24 and @cite_29 propose algorithms for SM with cardinality constraints in append-only streams with sublinear time complexity. Then, @cite_35 propose an algorithm for SM in append-only streams with @math d-knapsack constraints. @cite_27 propose an algorithm for SM in append-only streams. More recently, there are a few attempts at SM over sliding windows. @cite_40 propose an algorithm for SM over sliding windows with cardinality constraints. To the best of our knowledge, there is no existing literature on SMDK over sliding windows yet. | {
"cite_N": [
"@cite_35",
"@cite_22",
"@cite_7",
"@cite_28",
"@cite_29",
"@cite_21",
"@cite_24",
"@cite_19",
"@cite_27",
"@cite_40",
"@cite_5",
"@cite_15",
"@cite_11"
],
"mid": [
"",
"2141403143",
"2252172643",
"2614040945",
"2101246692",
"2144933361",
"1997959284",
"2033885045",
"2611872949",
"2540677289",
"",
"",
"123178497"
],
"abstract": [
"",
"Given a water distribution network, where should we place sensors to quickly detect contaminants? Or, which blogs should we read to avoid missing important stories? These seemingly different problems share common structure: Outbreak detection can be modeled as selecting nodes (sensor locations, blogs) in a network, in order to detect the spreading of a virus or information as quickly as possible. We present a general methodology for near optimal sensor placement in these and related problems. We demonstrate that many realistic outbreak detection objectives (e.g., detection likelihood, population affected) exhibit the property of \"submodularity\". We exploit submodularity to develop an efficient algorithm that scales to large problems, achieving near optimal placements, while being 700 times faster than a simple greedy algorithm. We also derive online bounds on the quality of the placements obtained by any algorithm. Our algorithms and bounds also handle cases where nodes (sensor locations, blogs) have different costs. We evaluate our approach on several large real-world problems, including a model of a water distribution network from the EPA, and real blog data. The obtained sensor placements are provably near optimal, providing a constant fraction of the optimal solution. We show that the approach scales, achieving speedups and savings in storage of several orders of magnitude. We also show how the approach leads to deeper insights in both applications, answering multicriteria trade-off, cost-sensitivity and generalization questions.",
"There has been much progress recently on improved approximations for problems involving submodular objective functions, and many interesting techniques have been developed. However, the resulting algorithms are often slow and impractical. In this paper we develop algorithms that match the best known approximation guarantees, but with significantly improved running times, for maximizing a monotone submodular function f: 2^[n] → R+ subject to various constraints. As in previous work, we measure the number of oracle calls to the objective function which is the dominating term in the running time. Our first result is a simple algorithm that gives a (1 − 1/e − ε)-approximation for a cardinality constraint using O(n/ε log(n/ε)) queries, and a 1/(p + 2ℓ + 1 + ε)-approximation for the intersection of a p-system and ℓ knapsack (linear) constraints using O(n/ε^2 log^2(n/ε)) queries. This is the first approximation for a p-system combined with linear constraints. (We also show that the factor of p cannot be improved for maximizing over a p-system.) The main idea behind these algorithms serves as a building block in our more sophisticated algorithms. Our main result is a new variant of the continuous greedy algorithm, which interpolates between the classical greedy algorithm and a truly continuous algorithm. We show how this algorithm can be implemented for matroid and knapsack constraints using O(n^2) oracle calls to the objective function. (Previous variants and alternative techniques were known to use at least O(n^4) oracle calls.) This leads to an O(n^2/ε^4 log^2(n/ε))-time (1 − 1/e − ε)-approximation for a matroid constraint. For a knapsack constraint, we develop a more involved (1 − 1/e − ε)-approximation algorithm that runs in time O(n^2 (1/ε log n)^poly(1/ε)).",
"Social influence has attracted significant attention owing to the prevalence of social networks (SNs). In this paper, we study a new social influence problem, called personalized social tags exploration (PITEX), to help any user in the SN explore how she influences the network. Given a target user, it finds a size-k tag set that maximizes this user's social influence. We prove the problem is NP-hard to be approximated within any constant ratio. To solve it, we introduce a sampling-based framework, which has an approximation ratio of (1 − ε)/(1 + ε) with high probabilistic guarantee. To speedup the computation, we devise more efficient sampling techniques and propose best-effort exploration to quickly prune tag sets with small influence. To further enable instant exploration, we devise a novel index structure and develop effective pruning and materialization techniques. Experimental results on real large-scale datasets validate our theoretical findings and show high performances of our proposed methods.",
"Greedy algorithms are practitioners' best friends - they are intuitive, simple to implement, and often lead to very good solutions. However, implementing greedy algorithms in a distributed setting is challenging since the greedy choice is inherently sequential, and it is not clear how to take advantage of the extra processing power. Our main result is a powerful sampling technique that aids in parallelization of sequential algorithms. We then show how to use this primitive to adapt a broad class of greedy algorithms to the MapReduce paradigm; this class includes maximum cover and submodular maximization subject to p-system constraints. Our method yields efficient algorithms that run in a logarithmic number of rounds, while obtaining solutions that are arbitrarily close to those produced by the standard sequential greedy algorithm. We begin with algorithms for modular maximization subject to a matroid constraint, and then extend this approach to obtain approximation algorithms for submodular maximization subject to knapsack or p-system constraints. Finally, we empirically validate our algorithms, and show that they achieve the same quality of the solution as standard greedy algorithms but run in a substantially fewer number of rounds.",
"We design a class of submodular functions meant for document summarization tasks. These functions each combine two terms, one which encourages the summary to be representative of the corpus, and the other which positively rewards diversity. Critically, our functions are monotone nondecreasing and submodular, which means that an efficient scalable greedy optimization scheme has a constant factor guarantee of optimality. When evaluated on DUC 2004-2007 corpora, we obtain better than existing state-of-art results in both generic and query-focused document summarization. Lastly, we show that several well-established methods for document summarization correspond, in fact, to submodular function optimization, adding further evidence that submodular functions are a natural fit for document summarization.",
"How can one summarize a massive data set \"on the fly\", i.e., without even having seen it in its entirety? In this paper, we address the problem of extracting representative elements from a large stream of data. I.e., we would like to select a subset of say k data points from the stream that are most representative according to some objective function. Many natural notions of \"representativeness\" satisfy submodularity, an intuitive notion of diminishing returns. Thus, such problems can be reduced to maximizing a submodular set function subject to a cardinality constraint. Classical approaches to submodular maximization require full access to the data set. We develop the first efficient streaming algorithm with constant factor 1/2 − ε approximation guarantee to the optimum solution, requiring only a single pass through the data, and memory independent of data size. In our experiments, we extensively evaluate the effectiveness of our approach on several applications, including training large-scale kernel methods and exemplar-based clustering, on millions of data points. We observe that our streaming method, while achieving practically the same utility value, runs about 100 times faster than previous work.",
"In this paper, we obtain a (1 − e^{-1})-approximation algorithm for maximizing a nondecreasing submodular set function subject to a knapsack constraint. This algorithm requires O(n^5) function value computations.",
"Submodular maximization problems belong to the family of combinatorial optimization problems and enjoy wide applications. In this paper, we focus on the problem of maximizing a monotone submodular function subject to a d-knapsack constraint, for which we propose a streaming algorithm that achieves a (1/(1+2D) − ε)-approximation of the optimal value, while it only needs one single pass through the dataset without storing all the data in the memory. In our experiments, we extensively evaluate the effectiveness of our proposed algorithm via an application in scientific literature recommendation. It is observed that the proposed streaming algorithm achieves both execution speedup and memory saving by several orders of magnitude, compared with existing approaches.",
"Maximizing submodular functions under cardinality constraints lies at the core of numerous data mining and machine learning applications, including data diversification, data summarization, and coverage problems. In this work, we study this question in the context of data streams, where elements arrive one at a time, and we want to design low-memory and fast update-time algorithms that maintain a good solution. Specifically, we focus on the sliding window model, where we are asked to maintain a solution that considers only the last W items. In this context, we provide the first non-trivial algorithm that maintains a provable approximation of the optimum using space sublinear in the size of the window. In particular we give a 1/3 − ε approximation algorithm that uses space polylogarithmic in the spread of the values of the elements, δ, and linear in the solution size k for any constant ε > 0. At the same time, processing each element only requires a polylogarithmic number of evaluations of the function itself. When a better approximation is desired, we show a different algorithm that, at the cost of using more memory, provides a 1/2 − ε approximation, and allows a tunable trade-off between average update time and space. This algorithm matches the best known approximation guarantees for submodular optimization in insertion-only streams, a less general formulation of the problem. We demonstrate the efficacy of the algorithms on a number of real world datasets, showing that their practical performance far exceeds the theoretical bounds. The algorithms preserve high quality solutions in streams with millions of items, while storing a negligible fraction of them.",
"",
"",
"The concept of submodularity plays a vital role in combinatorial optimization. In particular, many important optimization problems can be cast as submodular maximization problems, including maximum coverage, maximum facility location and max cut in directed/undirected graphs. In this paper we present the first known approximation algorithms for the problem of maximizing a nondecreasing submodular set function subject to multiple linear constraints. Given a d-dimensional budget vector [EQUATION], for some d ≥ 1, and an oracle for a non-decreasing submodular set function f over a universe U, where each element e ∈ U is associated with a d-dimensional cost vector, we seek a subset of elements S ⊆ U whose total cost is at most [EQUATION], such that f(S) is maximized. We develop a framework for maximizing submodular functions subject to d linear constraints that yields a (1 − ε)(1 − e^{-1})-approximation to the optimum for any ε > 0, where d > 1 is some constant. Our study is motivated by a variant of the classical maximum coverage problem that we call maximum coverage with multiple packing constraints. We use our framework to obtain the same approximation ratio for this problem. To the best of our knowledge, this is the first time the theoretical bound of 1 − e^{-1} is (almost) matched for both of these problems."
]
} |
1706.04971 | 2625506839 | This paper explores the information-theoretic measure entropy to detect metaphoric change, transferring ideas from hypernym detection to research on language change. We also build the first diachronic test set for German as a standard for metaphoric change annotation. Our model shows high performance, is unsupervised, language-independent and generalizable to other processes of semantic change. | Finally, research on synchronic metaphor identification has applied a wide range of approaches, including binary classification relying on standard distributional similarity @cite_3 , text cohesion measures @cite_4 , classification relying on abstractness cues @cite_1 @cite_2 or cross-lingual information @cite_5 , and soft clustering @cite_0 , among others. To the best of our knowledge, no previous work has explicitly exploited the idea of generalization (via hypernymy models) in metaphor detection yet. | {
"cite_N": [
"@cite_4",
"@cite_1",
"@cite_3",
"@cite_0",
"@cite_2",
"@cite_5"
],
"mid": [
"2100258351",
"2080404274",
"283041944",
"",
"2467405467",
"2126530744"
],
"abstract": [
"We present the context-theoretic framework, which provides a set of rules for the nature of composition of meaning based on the philosophy of meaning as context. Principally, in the framework the composition of the meaning of words can be represented as multiplication of their representative vectors, where multiplication is distributive with respect to the vector space. We discuss the applicability of the framework to a range of techniques in natural language processing, including subsequence matching, the lexical entailment model of (2005), vector-based representations of taxonomies, statistical parsing and the representation of uncertainty in logical semantics.",
"This paper presents a new statistical method for detecting and tracking changes in word meaning, based on Latent Semantic Analysis. By comparing the density of semantic vector clusters this method allows researchers to make statistical inferences on questions such as whether the meaning of a word changed across time or if a phonetic cluster is associated with a specific meaning. Possible applications of this method are then illustrated in tracing the semantic change of 'dog', 'do', and 'deer' in early English and examining and comparing phonaesthemes.",
"1. Introduction 2. An Outline of Grammatical Evolution 3. Some Cognitive Abilities of Animals 4. On Pidgins and Other Restricted Linguistic Systems 5. Clause Subordination 6. On The Rise of Recursion 7. Early Language References Subject Index",
"",
"",
"We show that it is possible to reliably discriminate whether a syntactic construction is meant literally or metaphorically using lexical semantic features of the words that participate in the construction. Our model is constructed using English resources, and we obtain state-of-the-art performance relative to previous work in this language. Using a model transfer approach by pivoting through a bilingual dictionary, we show our model can identify metaphoric expressions in other languages. We provide results on three new test sets in Spanish, Farsi, and Russian. The results support the hypothesis that metaphors are conceptual, rather than lexical, in nature."
]
} |
1706.04979 | 2625502246 | In this paper we describe a system for visualizing and analyzing worldwide research topics, rtopmap . We gather data from google scholar academic research profiles, putting together a weighted topics graph, consisting of over 35,000 nodes and 646,000 edges. The nodes correspond to self-reported research topics, and edges correspond to co-occurring topics in google scholar profiles. The rtopmap system supports zooming, panning, searching, and other google-maps-based interactive features. With the help of map overlays, we also visualize the strengths and weaknesses of different academic institutions in terms of human resources (e.g., number of researchers in different areas), as well as scholarly output (e.g., citation counts in different areas). Finally, we also visualize what parts of the map are associated with different academic departments, or with specific documents (such as research papers, or calls for proposals). The system itself is available at this http URL . | Microsoft's Academic Graph database has 50,000 fields of study (FOS) @cite_20 . Three levels of relationships are present among the fields, although field importance is not measured or quantified. A FOS score based on researcher and citation counts has been proposed for computer science @cite_29 . Hug analyze FOS and report that they tend to be dynamic and too specific, while field hierarchies are incoherent @cite_4 . Liu @cite_60 use a hierarchical latent tree model (HLTM) to extract a set of hierarchical topics to summarize a corpus at different levels of abstraction. In HLTM, a topic is determined by identifying words that appear with high frequency in the topic and with low frequency in other topics. Yang @cite_8 use a HLTM in their visual analytics system, VISTopic. Mane and Börner @cite_25 visualize 50 frequent and bursty words in their analysis of publications in the Proceedings of the National Academy of Sciences. | {
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_60",
"@cite_29",
"@cite_25",
"@cite_20"
],
"mid": [
"2522876520",
"",
"2179316363",
"2612431485",
"2134866961",
"1932742904"
],
"abstract": [
"We explore if and how Microsoft Academic (MA) could be used for bibliometric analyses. First, we examine the Academic Knowledge API (AK API), an interface to access MA data, and compare it to Google Scholar (GS). Second, we perform a comparative citation analysis of researchers by normalizing data from MA and Scopus. We find that MA offers structured and rich metadata, which facilitates data retrieval, handling and processing. In addition, the AK API allows retrieving frequency distributions of citations. We consider these features to be a major advantage of MA over GS. However, we identify four main limitations regarding the available metadata. First, MA does not provide the document type of a publication. Second, the \"fields of study\" are dynamic, too specific and field hierarchies are incoherent. Third, some publications are assigned to incorrect years. Fourth, the metadata of some publications did not include all authors. Nevertheless, we show that an average-based indicator (i.e. the journal normalized citation score; JNCS) as well as a distribution-based indicator (i.e. percentile rank classes; PR classes) can be calculated with relative ease using MA. Hence, normalization of citation counts is feasible with MA. The citation analyses in MA and Scopus yield uniform results. The JNCS and the PR classes are similar in both databases, and, as a consequence, the evaluation of the researchers' publication impact is congruent in MA and Scopus. Given the fast development in the last year, we postulate that MA has the potential to be used for full-fledged bibliometric analyses.",
"",
"In the LDA approach to topic detection, a topic is determined by identifying the words that are used with high frequency when writing about the topic. However, high frequency words in one topic may be also used with high frequency in other topics. Thus they may not be the best words to characterize the topic. In this paper, we propose a new method for topic detection, where a topic is determined by identifying words that appear with high frequency in the topic and low frequency in other topics. We model patterns of word co- occurrence and co-occurrences of those patterns using a hierarchy of discrete latent variables. The states of the latent variables represent clusters of documents and they are interpreted as topics. The words that best distinguish a cluster from other clusters are selected to characterize the topic. Empirical results show that the new method yields topics with clearer thematic characterizations than the alternative approaches.",
"Research in Computer Science (CS) evolves rapidly in a dynamic fashion. New research area may emerge and attract researchers, while older areas may have lesser interest from researchers. Studying how trends evolve in CS can be interesting from several dimensions. Furthermore, it can be used to craft research agendas. In this paper, we present trend analysis on research area in CS. We also look at citation trend analysis. Our analysis is performed using the Microsoft Academic Graph dataset. We propose the FoS score to measure the level of interest in any particular research area or topic. We apply the FoS score to investigate general publication trends, citation trends, evolution of research areas, and relation between research areas in CS.",
"Scientific research is highly dynamic. New areas of science continually evolve; others gain or lose importance, merge, or split. Due to the steady increase in the number of scientific publications, it is hard to keep an overview of the structure and dynamic development of one's own field of science, much less all scientific domains. However, knowledge of \"hot\" topics, emergent research frontiers, or change of focus in certain areas is a critical component of resource allocation decisions in research laboratories, governmental institutions, and corporations. This paper demonstrates the utilization of Kleinberg's burst detection algorithm, co-word occurrence analysis, and graph layout techniques to generate maps that support the identification of major research topics and trends. The approach was applied to analyze and map the complete set of papers published in PNAS in the years 1982-2001. Six domain experts examined and commented on the resulting maps in an attempt to reconstruct the evolution of major research areas covered by PNAS.",
"In this paper we describe a new release of a Web scale entity graph that serves as the backbone of Microsoft Academic Service (MAS), a major production effort with a broadened scope to the namesake vertical search engine that has been publicly available since 2008 as a research prototype. At the core of MAS is a heterogeneous entity graph comprised of six types of entities that model the scholarly activities: field of study, author, institution, paper, venue, and event. In addition to obtaining these entities from the publisher feeds as in the previous effort, we in this version include data mining results from the Web index and an in-house knowledge base from Bing, a major commercial search engine. As a result of the Bing integration, the new MAS graph sees significant increase in size, with fresh information streaming in automatically following their discoveries by the search engine. In addition, the rich entity relations included in the knowledge base provide additional signals to disambiguate and enrich the entities within and beyond the academic domain. The number of papers indexed by MAS, for instance, has grown from low tens of millions to 83 million while maintaining an above 95 accuracy based on test data sets derived from academic activities at Microsoft Research. Based on the data set, we demonstrate two scenarios in this work: a knowledge driven, highly interactive dialog that seamlessly combines reactive search and proactive suggestion experience, and a proactive heterogeneous entity recommendation."
]
} |
1706.04979 | 2625502246 | In this paper we describe a system for visualizing and analyzing worldwide research topics, rtopmap . We gather data from google scholar academic research profiles, putting together a weighted topics graph, consisting of over 35,000 nodes and 646,000 edges. The nodes correspond to self-reported research topics, and edges correspond to co-occurring topics in google scholar profiles. The rtopmap system supports zooming, panning, searching, and other google-maps-based interactive features. With the help of map overlays, we also visualize the strengths and weaknesses of different academic institutions in terms of human resources (e.g., number of researchers in different areas), as well as scholarly output (e.g., citation counts in different areas). Finally, we also visualize what parts of the map are associated with different academic departments, or with specific documents (such as research papers, or calls for proposals). The system itself is available at this http URL . | Words from paper titles have also been used as indicators for the content of a research topic, and visualizations based on this approach have been studied @cite_21 @cite_55 @cite_7 . Many earlier approaches focus on analyzing specific journals, conferences, or research areas, e.g., analyzing computer science conferences and journals @cite_55 , trends in computer science research @cite_29 , the International Conference on Data Mining (ICDM) @cite_13 , publications in data visualization @cite_15 . Domenico @cite_22 quantify attractive topics (i.e., topics that attract researchers from different areas). Sun @cite_51 build a network, with computer science conferences as nodes and edges between two conferences with common authors. Map-based visualization has been used for document visualization @cite_1 @cite_18 . | {
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_7",
"@cite_29",
"@cite_55",
"@cite_21",
"@cite_1",
"@cite_51",
"@cite_15",
"@cite_13"
],
"mid": [
"2040947111",
"2963781562",
"",
"2612431485",
"",
"2028663150",
"2141018272",
"",
"1928454594",
"1980270372"
],
"abstract": [
"A cartographic approach to mapping nongeographic information helps to manage graphic complexity in visualizations. It aids domain comprehension by forcing us to use the same cognitive skills we use when viewing geographic maps. The author presents a distinctly cartographic approach to mapping nongeographic information. Focusing on the text content of a set of conference abstracts, we can derive 2D visualizations of information spaces that address complexity and automation.",
"Academic research is driven by several factors causing different disciplines to act as “sources” or “sinks” of knowledge. However, how the flow of authors’ research interests – a proxy of human knowledge – evolved across time is still poorly understood. Here, we build a comprehensive map of such flows across one century, revealing fundamental periods in the raise of interest in areas of human knowledge. We identify and quantify the most attractive topics over time, when a relatively significant number of researchers moved from their original area to another one, causing what we call a “diaspora of the knowledge” towards sinks of scientific interest, and we relate these points to crucial historical and political events. Noticeably, only a few areas – like Medicine, Physics or Chemistry – mainly act as sources of the diaspora, whereas areas like Material Science, Chemical Engineering, Neuroscience, Immunology and Microbiology or Environmental Science behave like sinks.",
"",
"Research in Computer Science (CS) evolves rapidly in a dynamic fashion. New research area may emerge and attract researchers, while older areas may have lesser interest from researchers. Studying how trends evolve in CS can be interesting from several dimensions. Furthermore, it can be used to craft research agendas. In this paper, we present trend analysis on research area in CS. We also look at citation trend analysis. Our analysis is performed using the Microsoft Academic Graph dataset. We propose the FoS score to measure the level of interest in any particular research area or topic. We apply the FoS score to investigate general publication trends, citation trends, evolution of research areas, and relation between research areas in CS.",
"",
"Mapping of science and technology can be done at different levels of aggregation, using a variety of methods. In this paper, we propose a method in which title words are used as indicators for the content of a research topic, and cited references are used as the context in which words get their meaning. Research topics are represented by sets of papers that are similar in terms of these word-reference combinations. In this way we use words without neglecting differences and changes in their meanings. The method has several advantages, such as high coverage of publications. As an illustration we apply the method to produce knowledge maps of information science.",
"The paper describes an approach to IV that involves spatializing text content for enhanced visual browsing and analysis. The application arena is large text document corpora such as digital libraries, regulations and procedures, archived reports, etc. The basic idea is that text content from these sources may be transformed to a spatial representation that preserves informational characteristics from the documents. The spatial representation may then be visually browsed and analyzed in ways that avoid language processing and that reduce the analysts mental workload. The result is an interaction with text that more nearly resembles perception and action with the natural world than with the abstractions of written language.",
"",
"The exploration and analysis of scientific literature collections is an important task for effective knowledge management. Past interest in such document sets has spurred the development of numerous visualization approaches for their interactive analysis. They either focus on the textual content of publications, or on document metadata including authors and citations. Previously presented approaches for citation analysis aim primarily at the visualization of the structure of citation networks and their exploration. We extend the state-of-the-art by presenting an approach for the interactive visual analysis of the contents of scientific documents, and combine it with a new and flexible technique to analyze their citations. This technique facilitates user-steered aggregation of citations which are linked to the content of the citing publications using a highly interactive visualization approach. Through enriching the approach with additional interactive views of other important aspects of the data, we support the exploration of the dataset over time and enable users to analyze citation patterns, spot trends, and track long-term developments. We demonstrate the strengths of our approach through a use case and discuss it based on expert user feedback.",
"The trends in a research field, especially changes in the features over the years, are subjects of interest for many researchers. This paper reports an exploratory analysis of the changes of research topics in an academic field. The target data of the analysis are the author-keywords included in papers presented at a series of academic conferences, IEEE International Conference on Data Mining (ICDM). The analysis process consists of three phases: (1) frequency of keywords, (2) appearance of keywords in papers, and (3) relationships among keywords. In phase 1, bar charts were used to observe the ranking of frequencies. In phases 2 and 3, anchored maps were adopted. The anchored maps are based on the spring-embedder model, but they provide viewpoints by using fixed \"anchors.\" The analysis process revealed the major topics in the field of data mining and some changes in the relationships among topics."
]
} |
1706.04979 | 2625502246 | In this paper we describe a system for visualizing and analyzing worldwide research topics, rtopmap . We gather data from google scholar academic research profiles, putting together a weighted topics graph, consisting of over 35,000 nodes and 646,000 edges. The nodes correspond to self-reported research topics, and edges correspond to co-occurring topics in google scholar profiles. The rtopmap system supports zooming, panning, searching, and other google-maps-based interactive features. With the help of map overlays, we also visualize the strengths and weaknesses of different academic institutions in terms of human resources (e.g., number of researchers in different areas), as well as scholarly output (e.g., citation counts in different areas). Finally, we also visualize what parts of the map are associated with different academic departments, or with specific documents (such as research papers, or calls for proposals). The system itself is available at this http URL . | Also related to our work are many of the graph visualization techniques and tools. Graph layout algorithms are also provided in several libraries, such as GraphViz @cite_34 , OGDF @cite_17 , MSAGL @cite_27 , and VTK @cite_30 , which, however, do not support interaction, navigation, or data manipulation. Visualization toolkits such as Prefuse @cite_14 , Tulip @cite_42 , Gephi @cite_54 , and yEd @cite_6 support visual graph manipulation, and while their layout algorithms can handle large graphs, their rendering does not scale: even for graphs with a few thousand vertices, the amount of information rendered statically on the screen makes the visualization difficult to use. | {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_54",
"@cite_42",
"@cite_6",
"@cite_27",
"@cite_34",
"@cite_17"
],
"mid": [
"2099973502",
"2165741325",
"2125910575",
"36570781",
"1553171563",
"",
"",
"2408539216"
],
"abstract": [
"We introduce basic concepts behind the Visualization Toolkit (VTK). An overview of the system, plus some detailed examples, will assist in learning this system. The tutorial targets researchers of any discipline who have 2D or 3D data and want more control over the visualization process than a turn-key system can provide. It also assists developers who would like to incorporate VTK into an application as a visualization or data processing engine.",
"Although information visualization (infovis) technologies have proven indispensable tools for making sense of complex data, wide-spread deployment has yet to take hold, as successful infovis applications are often difficult to author and require domain-specific customization. To address these issues, we have created prefuse, a software framework for creating dynamic visualizations of both structured and unstructured data. prefuse provides theoretically-motivated abstractions for the design of a wide range of visualization applications, enabling programmers to string together desired components quickly to create and customize working visualizations. To evaluate prefuse we have built both existing and novel visualizations testing the toolkit's flexibility and performance, and have run usability studies and usage surveys finding that programmers find the toolkit usable and effective.",
"Gephi is an open source software for graph and network analysis. It uses a 3D render engine to display large networks in real-time and to speed up the exploration. A flexible and multi-task architecture brings new possibilities to work with complex data sets and produce valuable visual results. We present several key features of Gephi in the context of interactive exploration and interpretation of networks. It provides easy and broad access to network data and allows for spatializing, filtering, navigating, manipulating and clustering. Finally, by presenting dynamic features of Gephi, we highlight key aspects of dynamic network visualization.",
"Tulip is an information visualization framework dedicated to the analysis and visualization of relational data. Based on a decade of research and development of this framework, we present the architecture, consisting of a suite of tools and techniques, that can be used to address a large variety of domain-specific problems. With Tulip, we aim to provide the developer with a complete library, supporting the design of interactive information visualization applications for relational data that can be tailored to the problems he or she is addressing. The current framework enables the development of algorithms, visual encodings, interaction techniques, data models, and domain-specific visualizations. The software model facilitates the reuse of components and allows the developers to focus on programming their application. This development pipeline makes the framework efficient for research prototyping as well as the development of end-user applications.",
"yFiles is a Java-based library for the visualization and automatic layout of graph structures. Included features are data structures, graph algorithms, diverse layout and labeling algorithms and a graph viewer component.",
"",
"",
"Cluster Layout Planarity testing Booth Lueker and Boyer Myrvold Cluster () Upward () Customizable planarization method Edge insertion (fixed & variable embedding) Crossing Minimization optimal, minor-monotone, simultaneous Orthogonal layout Compaction (constructive + improvement) Customizable Sugiyama layout Energy-based layout (FM, ...) Tree-, Circular-, Balloon-layout, ..."
]
} |
1706.04979 | 2625502246 | In this paper we describe a system for visualizing and analyzing worldwide research topics, rtopmap . We gather data from google scholar academic research profiles, putting together a weighted topics graph, consisting of over 35,000 nodes and 646,000 edges. The nodes correspond to self-reported research topics, and edges correspond to co-occurring topics in google scholar profiles. The rtopmap system supports zooming, panning, searching, and other google-maps-based interactive features. With the help of map overlays, we also visualize the strengths and weaknesses of different academic institutions in terms of human resources (e.g., number of researchers in different areas), as well as scholarly output (e.g., citation counts in different areas). Finally, we also visualize what parts of the map are associated with different academic departments, or with specific documents (such as research papers, or calls for proposals). The system itself is available at this http URL . | There are research papers that describe interactive multi-level interfaces for exploring large graphs, such as ASK-GraphView @cite_16 , topological fisheye views @cite_46 , and Grokker @cite_52 . Software applications such as Pajek @cite_0 for social networks and Cytoscape @cite_49 for biological data provide limited support for multi-level network visualization. These approaches rely on meta-graphs, made out of meta-vertices and meta-edges, which make interactions such as semantic zooming, searching, and navigation counter-intuitive. Few of the tools and systems above provide browser-level navigation and interaction for large graphs. | {
"cite_N": [
"@cite_52",
"@cite_0",
"@cite_49",
"@cite_46",
"@cite_16"
],
"mid": [
"",
"1947595544",
"2159675211",
"2104706725",
"2131548207"
],
"abstract": [
"",
"This is an extensively revised and expanded second edition of the successful textbook on social network analysis integrating theory, applications, and network analysis using Pajek. The main structural concepts and their applications in social research are introduced with exercises. Pajek software and data sets are available so readers can learn network analysis through application and case studies. Readers will have the knowledge, skill, and tools to apply social network analysis across the social sciences, from anthropology and sociology to business administration and history. This second edition has a new chapter on random network models, for example, scale-free and small-world networks and Monte Carlo simulation; discussion of multiple relations, islands, and matrix multiplication; new structural indices such as eigenvector centrality, degree distribution, and clustering coefficients; new visualization options that include circular layout for partitions and drawing a network geographically as a 3D surface; and using Unicode labels. This new edition also includes instructions on exporting data from Pajek to R software. It offers updated descriptions and screen shots for working with Pajek (version 2.03).",
"Cytoscape is an open source software project for integrating biomolecular interaction networks with high-throughput expression data and other molecular states into a unified conceptual framework. Although applicable to any system of molecular components and interactions, Cytoscape is most powerful when used in conjunction with large databases of protein-protein, protein-DNA, and genetic interactions that are increasingly available for humans and model organisms. Cytoscape's software Core provides basic functionality to layout and query the network; to visually integrate the network with expression profiles, phenotypes, and other molecular states; and to link the network to databases of functional annotations. The Core is extensible through a straightforward plug-in architecture, allowing rapid development of additional computational analyses and features. Several case studies of Cytoscape plug-ins are surveyed, including a search for interaction pathways correlating with changes in gene expression, a study of protein complexes involved in cellular recovery to DNA damage, inference of a combined physical functional interaction network for Halobacterium, and an interface to detailed stochastic kinetic gene regulatory models.",
"Graph drawing is a basic visualization tool that works well for graphs having up to hundreds of nodes and edges. At greater scale, data density and occlusion problems often negate its effectiveness. Conventional pan-and-zoom, multiscale, and geometric fisheye views are not fully satisfactory solutions to this problem. As an alternative, we propose a topological zooming method. It precomputes a hierarchy of coarsened graphs that are combined on-the-fly into renderings, with the level of detail dependent on distance from one or more foci. A related geometric distortion method yields constant information density displays from these renderings.",
"We describe ASK-GraphView, a node-link-based graph visualization system that allows clustering and interactive navigation of large graphs, ranging in size up to 16 million edges. The system uses a scalable architecture and a series of increasingly sophisticated clustering algorithms to construct a hierarchy on an arbitrary, weighted undirected input graph. By lowering the interactivity requirements we can scale to substantially bigger graphs. The user is allowed to navigate this hierarchy in a top down manner by interactively expanding individual clusters. ASK-GraphView also provides facilities for filtering and coloring, annotation and cluster labeling"
]
} |
1706.04508 | 2626250971 | Videos are inherently multimodal. This paper studies the problem of how to fully exploit the abundant multimodal clues for improved video categorization. We introduce a hybrid deep learning framework that integrates useful clues from multiple modalities, including static spatial appearance information, motion patterns within a short time window, audio information as well as long-range temporal dynamics. More specifically, we utilize three Convolutional Neural Networks (CNNs) operating on appearance, motion and audio signals to extract their corresponding features. We then employ a feature fusion network to derive a unified representation with an aim to capture the relationships among features. Furthermore, to exploit the long-range temporal dynamics in videos, we apply two Long Short Term Memory networks with extracted appearance and motion features as inputs. Finally, we also propose to refine the prediction scores by leveraging contextual relationships among video semantics. The hybrid deep learning framework is able to exploit a comprehensive set of multimodal features for video classification. Through an extensive set of experiments, we demonstrate that (1) LSTM networks which model sequences in an explicitly recurrent manner are highly complementary with CNN models; (2) the feature fusion network which produces a fused representation through modeling feature relationships outperforms alternative fusion strategies; (3) the semantic context of video classes can help further refine the predictions for improved performance. Experimental results on two challenging benchmarks, the UCF-101 and the Columbia Consumer Videos (CCV), provide strong quantitative evidence that our framework achieves promising results: @math on the UCF-101 and @math on the CCV, outperforming competing methods with clear margins. 
| Different from hand-crafted features, recent advances of CNNs in the image @cite_1 @cite_10 and speech domains @cite_14 have encouraged works to learn features directly from raw video data. The most straightforward way to utilize CNNs on video data is to stack frames as inputs with the aim of learning spatial-temporal features using 3D convolutions @cite_52 @cite_54 @cite_55 . However, these works demonstrate worse performance than state-of-the-art trajectory features @cite_53 , which might result from the difficulty of learning 3D features with insufficient training data. To effectively model 3D signals, Simonyan et al. proposed to utilize two independent CNNs to capture spatial and motion information, operating on RGB frames and stacked optical flow images separately. Based on this approach, Wang et al. proposed to learn the transformation between two states triggered by actions @cite_15 . Feichtenhofer et al. experimented with different fusion approaches to combine spatial and temporal features @cite_2 . During the training of CNNs, the temporal order of frames and stacked optical flow images is discarded, and thus the temporal structure of videos is ignored. | {
"cite_N": [
"@cite_14",
"@cite_54",
"@cite_55",
"@cite_1",
"@cite_52",
"@cite_53",
"@cite_2",
"@cite_15",
"@cite_10"
],
"mid": [
"2950689855",
"2308045930",
"2122476475",
"",
"1983364832",
"2105101328",
"2342662179",
"",
"2102605133"
],
"abstract": [
"Recurrent neural networks (RNNs) are a powerful model for sequential data. End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labelling problems where the input-output alignment is unknown. The combination of these methods with the Long Short-term Memory RNN architecture has proved particularly fruitful, delivering state-of-the-art results in cursive handwriting recognition. However RNN performance in speech recognition has so far been disappointing, with better results returned by deep feedforward networks. This paper investigates , which combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long range context that empowers RNNs. When trained end-to-end with suitable regularisation, we find that deep Long Short-term Memory RNNs achieve a test set error of 17.7 on the TIMIT phoneme recognition benchmark, which to our knowledge is the best recorded score.",
"",
"",
"",
"We consider the automated recognition of human actions in surveillance videos. Most current methods build classifiers based on complex handcrafted features computed from the raw inputs. Convolutional neural networks (CNNs) are a type of deep model that can act directly on the raw inputs. However, such models are currently limited to handling 2D inputs. In this paper, we develop a novel 3D CNN model for action recognition. This model extracts features from both the spatial and the temporal dimensions by performing 3D convolutions, thereby capturing the motion information encoded in multiple adjacent frames. The developed model generates multiple channels of information from the input frames, and the final feature representation combines information from all channels. To further boost the performance, we propose regularizing the outputs with high-level features and combining the predictions of a variety of different models. We apply the developed models to recognize human actions in the real-world environment of airport surveillance videos, and they achieve superior performance in comparison to baseline methods.",
"Recently dense trajectories were shown to be an efficient video representation for action recognition and achieved state-of-the-art results on a variety of datasets. This paper improves their performance by taking into account camera motion to correct them. To estimate camera motion, we match feature points between frames using SURF descriptors and dense optical flow, which are shown to be complementary. These matches are, then, used to robustly estimate a homography with RANSAC. Human motion is in general different from camera motion and generates inconsistent matches. To improve the estimation, a human detector is employed to remove these matches. Given the estimated camera motion, we remove trajectories consistent with it. We also use this estimation to cancel out camera motion from the optical flow. This significantly improves motion-based descriptors, such as HOF and MBH. Experimental results on four challenging action datasets (i.e., Hollywood2, HMDB51, Olympic Sports and UCF50) significantly outperform the current state of the art.",
"Recent applications of Convolutional Neural Networks (ConvNets) for human action recognition in videos have proposed different solutions for incorporating the appearance and motion information. We study a number of ways of fusing ConvNet towers both spatially and temporally in order to best take advantage of this spatio-temporal information. We make the following findings: (i) that rather than fusing at the softmax layer, a spatial and temporal network can be fused at a convolution layer without loss of performance, but with a substantial saving in parameters, (ii) that it is better to fuse such networks spatially at the last convolutional layer than earlier, and that additionally fusing at the class prediction layer can boost accuracy, finally (iii) that pooling of abstract convolutional features over spatiotemporal neighbourhoods further boosts performance. Based on these studies we propose a new ConvNet architecture for spatiotemporal fusion of video snippets, and evaluate its performance on standard benchmarks where this architecture achieves state-of-the-art results.",
"",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn."
]
} |
1706.04508 | 2626250971 | Videos are inherently multimodal. This paper studies the problem of how to fully exploit the abundant multimodal clues for improved video categorization. We introduce a hybrid deep learning framework that integrates useful clues from multiple modalities, including static spatial appearance information, motion patterns within a short time window, audio information as well as long-range temporal dynamics. More specifically, we utilize three Convolutional Neural Networks (CNNs) operating on appearance, motion and audio signals to extract their corresponding features. We then employ a feature fusion network to derive a unified representation with an aim to capture the relationships among features. Furthermore, to exploit the long-range temporal dynamics in videos, we apply two Long Short Term Memory networks with extracted appearance and motion features as inputs. Finally, we also propose to refine the prediction scores by leveraging contextual relationships among video semantics. The hybrid deep learning framework is able to exploit a comprehensive set of multimodal features for video classification. Through an extensive set of experiments, we demonstrate that (1) LSTM networks which model sequences in an explicitly recurrent manner are highly complementary with CNN models; (2) the feature fusion network which produces a fused representation through modeling feature relationships outperforms alternative fusion strategies; (3) the semantic context of video classes can help further refine the predictions for improved performance. Experimental results on two challenging benchmarks, the UCF-101 and the Columbia Consumer Videos (CCV), provide strong quantitative evidence that our framework achieves promising results: @math on the UCF-101 and @math on the CCV, outperforming competing methods with clear margins. 
| Many works resort to LSTMs to capture temporal dynamics in videos due to their great success in sequential modeling tasks like speech recognition @cite_14 and video captioning @cite_31 . Srivastava et al. proposed to learn video features using an LSTM-based auto-encoder framework @cite_5 . Donahue et al. utilized two LSTM models with spatial and motion features extracted from CNN models @cite_48 . Ng et al. further deepened the LSTM to five layers and experimented with several pooling strategies @cite_39 . Our work leverages LSTMs for temporal modeling to explicitly complement the limitations of frame-based CNN models. | {
"cite_N": [
"@cite_14",
"@cite_48",
"@cite_39",
"@cite_5",
"@cite_31"
],
"mid": [
"2950689855",
"2951183276",
"",
"2952453038",
"2949888546"
],
"abstract": [
"Recurrent neural networks (RNNs) are a powerful model for sequential data. End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labelling problems where the input-output alignment is unknown. The combination of these methods with the Long Short-term Memory RNN architecture has proved particularly fruitful, delivering state-of-the-art results in cursive handwriting recognition. However RNN performance in speech recognition has so far been disappointing, with better results returned by deep feedforward networks. This paper investigates , which combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long range context that empowers RNNs. When trained end-to-end with suitable regularisation, we find that deep Long Short-term Memory RNNs achieve a test set error of 17.7 on the TIMIT phoneme recognition benchmark, which to our knowledge is the best recorded score.",
"Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\"' in that they can be compositional in spatial and temporal \"layers\". Such models may have advantages when target concepts are complex and or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and or optimized.",
"",
"We use multilayer Long Short Term Memory (LSTM) networks to learn representations of video sequences. Our model uses an encoder LSTM to map an input sequence into a fixed length representation. This representation is decoded using single or multiple decoder LSTMs to perform different tasks, such as reconstructing the input sequence, or predicting the future sequence. We experiment with two kinds of input sequences - patches of image pixels and high-level representations (\"percepts\") of video frames extracted using a pretrained convolutional net. We explore different design choices such as whether the decoder LSTMs should condition on the generated output. We analyze the outputs of the model qualitatively to see how well the model can extrapolate the learned video representation into the future and into the past. We try to visualize and interpret the learned features. We stress test the model by running it on longer time scales and on out-of-domain data. We further evaluate the representations by finetuning them for a supervised learning problem - human action recognition on the UCF-101 and HMDB-51 datasets. We show that the representations help improve classification accuracy, especially when there are only a few training examples. Even models pretrained on unrelated datasets (300 hours of YouTube videos) can help action recognition performance.",
"Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier."
]
} |
1706.04508 | 2626250971 | Videos are inherently multimodal. This paper studies the problem of how to fully exploit the abundant multimodal clues for improved video categorization. We introduce a hybrid deep learning framework that integrates useful clues from multiple modalities, including static spatial appearance information, motion patterns within a short time window, audio information as well as long-range temporal dynamics. More specifically, we utilize three Convolutional Neural Networks (CNNs) operating on appearance, motion and audio signals to extract their corresponding features. We then employ a feature fusion network to derive a unified representation with an aim to capture the relationships among features. Furthermore, to exploit the long-range temporal dynamics in videos, we apply two Long Short Term Memory networks with extracted appearance and motion features as inputs. Finally, we also propose to refine the prediction scores by leveraging contextual relationships among video semantics. The hybrid deep learning framework is able to exploit a comprehensive set of multimodal features for video classification. Through an extensive set of experiments, we demonstrate that (1) LSTM networks which model sequences in an explicitly recurrent manner are highly complementary with CNN models; (2) the feature fusion network which produces a fused representation through modeling feature relationships outperforms alternative fusion strategies; (3) the semantic context of video classes can help further refine the predictions for improved performance. Experimental results on two challenging benchmarks, the UCF-101 and the Columbia Consumer Videos (CCV), provide strong quantitative evidence that our framework achieves promising results: @math on the UCF-101 and @math on the CCV, outperforming competing methods with clear margins. | As aforementioned, the co-occurrence of video semantics, serving as context, can provide useful information. 
For example, Rabinovich et al. proposed to incorporate semantic context information with a CRF model @cite_43 . Jiang et al. modeled the class relationships with a semantic diffusion algorithm @cite_26 . Deng et al. leveraged a graphical model to encode label hierarchies for improved image classification performance @cite_19 . Wu et al. proposed to capture the relationships of video semantics by regularizing the classification process @cite_8 . Chen et al. utilized a confusion matrix to predict the context of a category when training CNNs @cite_50 . In our paper, we propose to utilize the confusion matrix as contextual relationships derived from trained models, to refine the prediction scores as a post-processing step. Therefore, the recognition of a class of interest can benefit from its related classes. | {
"cite_N": [
"@cite_26",
"@cite_8",
"@cite_19",
"@cite_43",
"@cite_50"
],
"mid": [
"2533506101",
"2069682406",
"2108598243",
"2081293863",
"2950931866"
],
"abstract": [
"Learning to cope with domain change has been known as a challenging problem in many real-world applications. This paper proposes a novel and efficient approach, named domain adaptive semantic diffusion (DASD), to exploit semantic context while considering the domain-shift-of-context for large scale video concept annotation. Starting with a large set of concept detectors, the proposed DASD refines the initial annotation results using graph diffusion technique, which preserves the consistency and smoothness of the annotation over a semantic graph. Different from the existing graph learning methods which capture relations among data samples, the semantic graph treats concepts as nodes and the concept affinities as the weights of edges. Particularly, the DASD approach is capable of simultaneously improving the annotation results and adapting the concept affinities to new test data. The adaptation provides a means to handle domain change between training and test data, which occurs very often in video annotation task. We conduct extensive experiments to improve annotation results of 374 concepts over 340 hours of videos from TRECVID 2005-2007 data sets. Results show consistent and significant performance gain over various baselines. In addition, the proposed approach is very efficient, completing DASD over 374 concepts within just 2 milliseconds for each video shot on a regular PC.",
"Videos contain very rich semantics and are intrinsically multimodal. In this paper, we study the challenging task of classifying videos according to their high-level semantics such as human actions or complex events. Although extensive efforts have been paid to study this problem, most existing works combined multiple features using simple fusion strategies and neglected the exploration of inter-class semantic relationships. In this paper, we propose a novel unified framework that jointly learns feature relationships and exploits the class relationships for improved video classification performance. Specifically, these two types of relationships are learned and utilized by rigorously imposing regularizations in a deep neural network (DNN). Such a regularized DNN can be efficiently launched using a GPU implementation with an affordable training cost. Through arming the DNN with better capability of exploring both the inter-feature and the inter-class relationships, the proposed regularized DNN is more suitable for identifying video semantics. With extensive experimental evaluations, we demonstrate that the proposed framework exhibits superior performance over several state-of-the-art approaches. On the well-known Hollywood2 and Columbia Consumer Video benchmarks, we obtain to-date the best reported results: 65.7 and 70.6 respectively in terms of mean average precision.",
"The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.",
"In the task of visual object categorization, semantic context can play the very important role of reducing ambiguity in objects' visual appearance. In this work we propose to incorporate semantic object context as a post-processing step into any off-the-shelf object categorization model. Using a conditional random field (CRF) framework, our approach maximizes object label agreement according to contextual relevance. We compare two sources of context: one learned from training data and another queried from Google Sets. The overall performance of the proposed framework is evaluated on the PASCAL and MSRC datasets. Our findings conclude that incorporating context into object categorization greatly improves categorization accuracy.",
"We present an approach to utilize large amounts of web data for learning CNNs. Specifically inspired by curriculum learning, we present a two-step approach for CNN training. First, we use easy images to train an initial visual representation. We then use this initial CNN and adapt it to harder, more realistic images by leveraging the structure of data and categories. We demonstrate that our two-stage CNN outperforms a fine-tuned CNN trained on ImageNet on Pascal VOC 2012. We also demonstrate the strength of webly supervised learning by localizing objects in web images and training a R-CNN style detector. It achieves the best performance on VOC 2007 where no VOC training data is used. Finally, we show our approach is quite robust to noise and performs comparably even when we use image search results from March 2013 (pre-CNN image search era)."
]
} |
1706.04404 | 2626566727 | Abstract The usage of process choreographies and decentralized Business Process Management Systems has been named as an alternative to centralized business process orchestration. In choreographies, control over a process instance is shared between independent parties, and no party has full control or knowledge during process runtime. Nevertheless, it is necessary to monitor and verify process instances during runtime for purposes of documentation, accounting, or compensation. To achieve business process runtime verification, this work explores the suitability of the Bitcoin blockchain to create a novel solution for choreographies. The resulting approach is realized in a fully-functional software prototype. This software solution is evaluated in a qualitative comparison. Findings show that our blockchain-based approach enables a seamless execution monitoring and verification of choreographies, while at the same time preserving anonymity and independence of the process participants. Furthermore, the prototype is evaluated in a performance analysis. | The monitoring of choreographies has been investigated from different angles. The authors of @cite_14 argue that, as a foundation for choreography monitoring, it is first necessary to negotiate a contract between the parties involved. For this, a markup language is provided which supports the exchange of choreography contracts. This language allows defining what monitoring information should be provided by which choreography participant and how this information is shared and accessed. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2120883746"
],
"abstract": [
"Meaningfully automating sociotechnical business collaboration promises efficiency-, effectiveness-, and quality increases for realizing next-generation decentralized autonomous organizations. For automating business-process aware cross-organizational operations, the development of existing choreography languages is technology driven and focuses less on sociotechnical suitability and expressiveness concepts and properties that recognize the interaction between people in organizations and technology in workplaces. This gap our suitability- and expressiveness exploration fills by means of a cross-organizational collaboration ontology that we map as a proof-of-concept evaluation to the eSourcing Markup Language (eSML). The latter we test in a feasibility case study to meaningfully support the automation of business collaboration. The developed eSourcing ontology and eSML is replicable for exploring strengths and weaknesses of other choreography languages."
]
} |
1706.04404 | 2626566727 | Abstract The usage of process choreographies and decentralized Business Process Management Systems has been named as an alternative to centralized business process orchestration. In choreographies, control over a process instance is shared between independent parties, and no party has full control or knowledge during process runtime. Nevertheless, it is necessary to monitor and verify process instances during runtime for purposes of documentation, accounting, or compensation. To achieve business process runtime verification, this work explores the suitability of the Bitcoin blockchain to create a novel solution for choreographies. The resulting approach is realized in a fully-functional software prototype. This software solution is evaluated in a qualitative comparison. Findings show that our blockchain-based approach enables a seamless execution monitoring and verification of choreographies, while at the same time preserving anonymity and independence of the process participants. Furthermore, the prototype is evaluated in a performance analysis. | In addition to contractual definitions, monitoring in decentralized process execution needs to be defined during choreography modeling. The authors of @cite_11 provide an extension to the standard BPMN 2.0 monitoring injection points to support choreographies. The authors of @cite_33 also propose a technology-driven approach to enable choreography monitoring by extending BPEL4Chor with an event-oriented monitoring agreement. To address privacy concerns of process participants, events can only be defined based on a publicly available process model. How each participant maps public choreography activities to internal processes remains hidden. None of the approaches discussed so far makes use of a decentralized ledger to store and distribute monitoring data in a trustworthy way. Notably, if this is supported at all, it is necessary to explicitly define which data is shared with whom. | {
"cite_N": [
"@cite_33",
"@cite_11"
],
"mid": [
"2073201241",
"2295148775"
],
"abstract": [
"Business process monitoring in the area of service oriented computing is typically performed using business activity monitoring technology in an intra-organizational setting. Due to outsourcing and the increasing need for companies to work together to meet their joint customer demands, there is a need for monitoring of business processes across organizational boundaries. Thereby, partners in a choreography have to exchange monitoring data, in order to enable process tracking and evaluation of process metrics. In this paper, we describe an event-based monitoring approach based on BPEL4Chor service choreography descriptions. We show how to define monitoring agreements specifying events each partner in the choreography has to provide. We distinguish between resource events and complex events for calculation of process metrics using complex event processing technology. We present our implementation and evaluate the concepts based on a scenario.",
"Software-as-a-service (SaaS) providers are further expanding their offering by growing into the space of business process outsourcing (BPO). Therefore, the SaaS provider wants to administer and manage the business process steps according to a service level agreement. Outsourcing of business processes results in decentralized business workflows. However, current business process modeling languages, e.g. Business Process Execution Language (BPEL), Business Process Model and Notation (BPMN), are based highly on a centralized execution model and current BPMN engines offer limited constructs for federation and decentralized execution. To guarantee execution of business processes according to a service level agreement, different parties involved in a federated workflow must be able to inspect the state of external workflows. This requires advanced inspection interfaces and monitoring facilities. Current business process modeling languages must thus be extended to support monitoring in the s pecification, support modeling and support deployment of decentralized workflows. In this paper, correlation and monitoring extensions for BPMN are described. These extensions to BPMN are described such that the existing specification can still be used as is in a backwards compatible way."
]
} |
1706.04404 | 2626566727 | Abstract The usage of process choreographies and decentralized Business Process Management Systems has been named as an alternative to centralized business process orchestration. In choreographies, control over a process instance is shared between independent parties, and no party has full control or knowledge during process runtime. Nevertheless, it is necessary to monitor and verify process instances during runtime for purposes of documentation, accounting, or compensation. To achieve business process runtime verification, this work explores the suitability of the Bitcoin blockchain to create a novel solution for choreographies. The resulting approach is realized in a fully-functional software prototype. This software solution is evaluated in a qualitative comparison. Findings show that our blockchain-based approach enables a seamless execution monitoring and verification of choreographies, while at the same time preserving anonymity and independence of the process participants. Furthermore, the prototype is evaluated in a performance analysis. | With regard to runtime verification of choreography executions, two general approaches have been proposed: von Riegen and Ritter @cite_18 propose the usage of multiple Enterprise Service Buses (ESBs) to handle all communication between the cooperating parties. The authors suggest the usage of proxies which intercept all communication and log all necessary information in a central component. A similar approach is described by @cite_23 . In their scenario, the cooperating participants of a choreography are already chosen at deployment time by the process owner. To guarantee the enforcement of given policies, all participants must run the same communication gateways which intercept all traffic. If any deviations are observed, events are emitted to notify the process owner. | {
"cite_N": [
"@cite_18",
"@cite_23"
],
"mid": [
"2131386763",
"2152519978"
],
"abstract": [
"Cross-organizational business processes gain more and more attention in the scientific community. For modeling these processes, choreographies are an adequate solution to describe the observable behavior for each business participant and the corresponding protocols between them. Nevertheless, current approaches mainly focus on modeling and conformance checking before runtime. In contrast, this contribution concentrates on the runtime validation of choreographies and will thus propose a monitoring infrastructure that supports this validation. We will sketch the requirements for a monitoring infrastructure as well as the solutions in order to meet these requirements. Without such a reliable monitoring framework, a runtime validation framework simply would not be able to correctly validate the interactions.",
"In today's economy, collaborative computing grows in importance. Inter-organizational service-based processes are increasingly adopted by different companies when they cannot achieve goals on their own. As a result, conformance problems arise and it must be ensured that the integrity of process execution remains guaranteed. In this paper, we propose new components, to be deployed along the boundaries of each participating organization, offering external flow control, and notification in case of violation detection, while providing process execution traceability. To achieve our goals, we propose an event-based approach in which inter-organizational exchanges are perceived as events. We define event patterns for filtering the desirable incoming and outgoing messages."
]
} |
1706.04404 | 2626566727 | Abstract The usage of process choreographies and decentralized Business Process Management Systems has been named as an alternative to centralized business process orchestration. In choreographies, control over a process instance is shared between independent parties, and no party has full control or knowledge during process runtime. Nevertheless, it is necessary to monitor and verify process instances during runtime for purposes of documentation, accounting, or compensation. To achieve business process runtime verification, this work explores the suitability of the Bitcoin blockchain to create a novel solution for choreographies. The resulting approach is realized in a fully-functional software prototype. This software solution is evaluated in a qualitative comparison. Findings show that our blockchain-based approach enables a seamless execution monitoring and verification of choreographies, while at the same time preserving anonymity and independence of the process participants. Furthermore, the prototype is evaluated in a performance analysis. | To avoid token loss, choreography runtime implementations for BPMS like the ones presented by @cite_22 and @cite_8 save the tokens in a shared storage. This shared storage then becomes the controlling entity for the system. The shared storage must be operated by a trusted third party, which does not fit the basic assumption of no centralized source of trust in choreography-based BPM. | {
"cite_N": [
"@cite_22",
"@cite_8"
],
"mid": [
"2134606796",
"1991338843"
],
"abstract": [
"Web service orchestrations-expressed in the Web service business process execution language (WS-BPEL or BPEL for short)-are a manifestation of the two-level-programming paradigm where services, i.e. the business functions used by the composite application, are composed through BPEL's control flow constructs. BPEL processes Web service orchestrations, business functions therefore can be transparently accessed remotely, allowing to build composite applications that integrate business functions provided by different partners on different locations. As of today, execution of BPEL processes, i.e. the evaluation of the processes' control flow, is performed by a central workflow engine. In certain scenarios, such as complex collaborative cross-partner interactions, this approach of centralized workflow enactment leads to \"un-natural\" process models; process models that are not driven by the processes' original business goal but by infrastructural or organizational reasons. In this paper, we propose an alternative approach to enacting BPEL process control flow in a distributed, decentralized manner. We present the overall process lifecycle and give a detailed description of the underlying process model.",
"Establishing scalable and cross-enterprise workflow management systems (WfMSs) in the cloud requires the adaptation and extension of existing concepts for process management. This paper proposes a scalable and cross-enterprise WfMS with a multitenancy architecture. Especially, it can activate enactment of workflow processes by cloud collaboration. We do not employ the traditional engine-based WfMSs. The key idea is to have the workflow process instance to be self-protected and does not need a workflow engine to secure the data therein. Thus, the process instance discovery and activity execution can be fully independently and distributed. As a result, we can employ the data storage system, Big Table, to store the process instances, which may form a big data. The applying of element-wise encryption and chained digital signature makes it satisfy major security requirements of authentication, confidentiality, data integrity, and nonrepudiation."
]
} |
1706.04404 | 2626566727 | Abstract The usage of process choreographies and decentralized Business Process Management Systems has been named as an alternative to centralized business process orchestration. In choreographies, control over a process instance is shared between independent parties, and no party has full control or knowledge during process runtime. Nevertheless, it is necessary to monitor and verify process instances during runtime for purposes of documentation, accounting, or compensation. To achieve business process runtime verification, this work explores the suitability of the Bitcoin blockchain to create a novel solution for choreographies. The resulting approach is realized in a fully-functional software prototype. This software solution is evaluated in a qualitative comparison. Findings show that our blockchain-based approach enables a seamless execution monitoring and verification of choreographies, while at the same time preserving anonymity and independence of the process participants. Furthermore, the prototype is evaluated in a performance analysis. | As described in the previous sections, we solved the problems of these approaches by using the Bitcoin blockchain as the trusted entity for the choreography. In general, the usage of blockchain in BPM is still at its very beginning @cite_13 . Nevertheless, through its design, the blockchain can provide a shared trust basis which is not under the control of a single organization. Messages can be exchanged directly within blockchain transactions and token information can be stored in the blockchain by embedding them in transactions. | {
"cite_N": [
"@cite_13"
],
"mid": [
"2165711164"
],
"abstract": [
"With the advent of cloud computing, organizations are nowadays able to react rapidly to changing demands for computational resources. Not only individual applications can be hosted on virtual cloud infrastructures, but also complete business processes. This allows the realization of so-called elastic processes, i.e., processes which are carried out using elastic cloud resources. Despite the manifold benefits of elastic processes, there is still a lack of solutions supporting them. In this paper, we identify the state of the art of elastic Business Process Management with a focus on infrastructural challenges. We conceptualize an architecture for an elastic Business Process Management System and discuss existing work on scheduling, resource allocation, monitoring, decentralized coordination, and state management for elastic processes. Furthermore, we present two representative elastic Business Process Management Systems which are intended to counter these challenges. Based on our findings, we identify open issues and outline possible research directions for the realization of elastic processes and elastic Business Process Management. Survey of state of the art in infrastructural challenges for elastic BPM. Scheduling, resource allocation, process monitoring, decentralized coordination and state management for elastic processes are discussed in detail. Identification of future research directions."
]
} |
1706.04404 | 2626566727 | Abstract The usage of process choreographies and decentralized Business Process Management Systems has been named as an alternative to centralized business process orchestration. In choreographies, control over a process instance is shared between independent parties, and no party has full control or knowledge during process runtime. Nevertheless, it is necessary to monitor and verify process instances during runtime for purposes of documentation, accounting, or compensation. To achieve business process runtime verification, this work explores the suitability of the Bitcoin blockchain to create a novel solution for choreographies. The resulting approach is realized in a fully-functional software prototype. This software solution is evaluated in a qualitative comparison. Findings show that our blockchain-based approach enables a seamless execution monitoring and verification of choreographies, while at the same time preserving anonymity and independence of the process participants. Furthermore, the prototype is evaluated in a performance analysis. | One particular issue of Bitcoin is, however, its limited scalability. The Bitcoin project struggles to provide the transaction throughput required for the current demand. To increase the blockchain throughput, either the block size or the block creation frequency must be increased. Both factors influence the network's capability to synchronize in time @cite_40 . This would result in an increase in conflicting blocks @cite_7 and a reduced security level. While different solutions for this issue have been researched, e.g., @cite_15 , none has yet been integrated into Bitcoin. | {
"cite_N": [
"@cite_40",
"@cite_7",
"@cite_15"
],
"mid": [
"2057248704",
"87727730",
"2295940006"
],
"abstract": [
"Bitcoin is a digital currency that unlike traditional currencies does not rely on a centralized authority. Instead Bitcoin relies on a network of volunteers that collectively implement a replicated ledger and verify transactions. In this paper we analyze how Bitcoin uses a multi-hop broadcast to propagate transactions and blocks through the network to update the ledger replicas. We then use the gathered information to verify the conjecture that the propagation delay in the network is the primary cause for blockchain forks. Blockchain forks should be avoided as they are symptomatic for inconsistencies among the replicas in the network. We then show what can be achieved by pushing the current protocol to its limit with unilateral changes to the client's behavior.",
"The Bitcoin virtual currency is built on the top of a decentralized peer-to-peer (P2P) network used to propagate system information such as transactions or blockchain updates. In this paper, we have performed a data collection process identifying more than 872000 different Bitcoin nodes. This data allows us to present information on the size of the Bitcoin P2P network, the node geographic distribution, the network stability in terms of interrupted availability of nodes, as well as some data regarding the propagation time of the transmitted information. Furthermore, although not every Bitcoin user can be identified as a P2P network node, measurements of the P2P network can be considered as a lower bound for Bitcoin usage, and they provide interesting results on the adoption of such virtual currency.",
"Bitcoin is a disruptive new crypto-currency based on a decentralized open-source protocol which has been gradually gaining momentum. Perhaps the most important question that will affect Bitcoin’s success, is whether or not it will be able to scale to support the high volume of transactions required from a global currency system. We investigate the implications of having a higher transaction throughput on Bitcoin’s security against double-spend attacks. We show that at high throughput, substantially weaker attackers are able to reverse payments they have made, even well after they were considered accepted by recipients. We address this security concern through the GHOST rule, a modification to the way Bitcoin nodes construct and re-organize the block chain, Bitcoin’s core distributed data-structure. GHOST has been adopted and a variant of it has been implemented as part of the Ethereum project, a second generation distributed applications platform."
]
} |
1706.04266 | 2626925008 | Similarity join, which can find similar objects (e.g., products, names, addresses) across different sources, is powerful in dealing with variety in big data, especially web data. Threshold-driven similarity join, which has been extensively studied in the past, assumes that a user is able to specify a similarity threshold, and then focuses on how to efficiently return the object pairs whose similarities pass the threshold. We argue that the assumption about a well set similarity threshold may not be valid for two reasons. The optimal thresholds for different similarity join tasks may vary a lot. Moreover, the end-to-end time spent on similarity join is likely to be dominated by a back-and-forth threshold-tuning process. In response, we propose preference-driven similarity join. The key idea is to provide several result set preferences, rather than a range of thresholds, for a user to choose from. Intuitively, a result set preference can be considered as an objective function to capture a user's preference on a similarity join result. Once a preference is chosen, we automatically compute the similarity join result optimizing the preference objective. As the proof of concept, we devise two useful preferences and propose a novel preference-driven similarity join framework coupled with effective optimization techniques. Our approaches are evaluated on four real-world web datasets from a diverse range of application scenarios. The experiments show that preference-driven similarity join can achieve high-quality results without a tedious threshold-tuning process. | Due to the crucial role of similarity join in data integration and data cleaning, a large number of similarity join algorithms have been proposed @cite_2 @cite_0 @cite_7 @cite_1 @cite_4 @cite_30 @cite_12 @cite_8 @cite_32 @cite_5 . There are also scalable implementations of the algorithms using the MapReduce framework @cite_27 @cite_17 @cite_6 . 
Top-k similarity join is also explored @cite_30 @cite_21 @cite_31 . | {
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_21",
"@cite_1",
"@cite_32",
"@cite_6",
"@cite_0",
"@cite_27",
"@cite_2",
"@cite_5",
"@cite_31",
"@cite_12",
"@cite_17"
],
"mid": [
"2105436061",
"2150916025",
"2097776316",
"1973001156",
"646176396",
"2097184821",
"2121269638",
"2001700730",
"2121516976",
"2151930506",
"2096598900",
"2166400748",
"1979666709",
"2115214414",
"2104599107"
],
"abstract": [
"Similarity join is a useful primitive operation underlying many applications, such as near duplicate Web page detection, data integration, and pattern recognition. Traditional similarity joins require a user to specify a similarity threshold. In this paper, we study a variant of the similarity join, termed top-k set similarity join. It returns the top-k pairs of records ranked by their similarities, thus eliminating the guesswork users have to perform when the similarity threshold is unknown beforehand. An algorithm, topk-join, is proposed to answer top-k similarity join efficiently. It is based on the prefix filtering principle and employs tight upper bounding of similarity values of unseen pairs. Experimental results demonstrate the efficiency of the proposed algorithm on large-scale real datasets.",
"There has been considerable interest in similarity join in the research community recently. Similarity join is a fundamental operation in many application areas, such as data integration and cleaning, bioinformatics, and pattern recognition. We focus on efficient algorithms for similarity join with edit distance constraints. Existing approaches are mainly based on converting the edit distance constraint to a weaker constraint on the number of matching q-grams between pair of strings. In this paper, we propose the novel perspective of investigating mismatching q-grams. Technically, we derive two new edit distance lower bounds by analyzing the locations and contents of mismatching q-grams. A new algorithm, Ed-Join, is proposed that exploits the new mismatch-based filtering methods; it achieves substantial reduction of the candidate sizes and hence saves computation time. We demonstrate experimentally that the new algorithm outperforms alternative methods on large-scale real datasets under a wide range of parameter settings.",
"Given a large collection of sparse vector data in a high dimensional space, we investigate the problem of finding all pairs of vectors whose similarity score (as determined by a function such as cosine distance) is above a given threshold. We propose a simple algorithm based on novel indexing and optimization strategies that solves this problem without relying on approximation methods or extensive parameter tuning. We show the approach efficiently handles a variety of datasets across a wide setting of similarity thresholds, with large speedups over previous state-of-the-art approaches.",
"As an essential operation in data cleaning, the similarity join has attracted considerable attention from the database community. In this paper, we study string similarity joins with edit-distance constraints, which find similar string pairs from two large sets of strings whose edit distance is within a given threshold. Existing algorithms are efficient either for short strings or for long strings, and there is no algorithm that can efficiently and adaptively support both short strings and long strings. To address this problem, we propose a partition-based method called Pass-Join. Pass-Join partitions a string into a set of segments and creates inverted indices for the segments. Then for each string, Pass-Join selects some of its substrings and uses the selected substrings to find candidate pairs using the inverted indices. We devise efficient techniques to select the substrings and prove that our method can minimize the number of selected substrings. We develop novel pruning techniques to efficiently verify the candidate pairs. Experimental results show that our algorithms are efficient for both short strings and long strings, and outperform state-of-the-art methods on real datasets.",
"In this paper, we study a novel problem of continuous similarity search for evolving queries. Given a set of objects, each being a set or multiset of items, and a data stream, we want to continuously maintain the top-k most similar objects using the last n items in the stream as an evolving query. We show that the problem has several important applications. At the same time, the problem is challenging. We develop a filtering-based method and a hashing-based method. Our experimental results on both real data sets and synthetic data sets show that our methods are effective and efficient.",
"With the increasing amount of data and the need to integrate data from multiple data sources, a challenging issue is to find near duplicate records efficiently. In this paper, we focus on efficient algorithms to find pairs of records such that their similarities are above a given threshold. Several existing algorithms rely on the prefix filtering principle to avoid computing similarity values for all possible pairs of records. We propose new filtering techniques by exploiting the ordering information; they are integrated into the existing methods and drastically reduce the candidate sizes and hence improve the efficiency. Experimental results show that our proposed algorithms can achieve up to 2.6x - 5x speed-up over previous algorithms on several real datasets and provide alternative solutions to the near duplicate Web page detection problem.",
"As two important operations in data cleaning, similarity join and similarity search have attracted much attention recently. Existing methods to support similarity join usually adopt a prefix-filtering-based framework. They select a prefix of each object and prune object pairs whose prefixes have no overlap. We have an observation that prefix lengths have significant effect on the performance. Different prefix lengths lead to significantly different performance, and prefix filtering does not always achieve high performance. To address this problem, in this paper we propose an adaptive framework to support similarity join. We propose a cost model to judiciously select an appropriate prefix for each object. To efficiently select prefixes, we devise effective indexes. We extend our method to support similarity search. Experimental results show that our framework beats the prefix-filtering-based framework and achieves high efficiency.",
"String similarity join is an essential operation in data integration. The era of big data calls for scalable algorithms to support large-scale string similarity joins. In this paper, we study scalable string similarity joins using MapReduce. We propose a MapReduce-based framework, called MASSJOIN, which supports both set-based similarity functions and character-based similarity functions. We extend the existing partition-based signature scheme to support set-based similarity functions. We utilize the signatures to generate key-value pairs. To reduce the transmission cost, we merge key-value pairs to significantly reduce the number of key-value pairs, from cubic to linear complexity, while not sacrificing the pruning power. To improve the performance, we incorporate “light-weight” filter units into the key-value pairs which can be utilized to prune large number of dissimilar pairs without significantly increasing the transmission cost. Experimental results on real-world datasets show that our method significantly outperformed state-of-the-art approaches.",
"Data cleaning based on similarities involves identification of \"close\" tuples, where closeness is evaluated using a variety of similarity functions chosen to suit the domain and application. Current approaches for efficiently implementing such similarity joins are tightly tied to the chosen similarity function. In this paper, we propose a new primitive operator which can be used as a foundation to implement similarity joins according to a variety of popular string similarity functions, and notions of similarity which go beyond textual similarity. We then propose efficient implementations for this operator. In an experimental evaluation using real datasets, we show that the implementation of similarity joins using our operator is comparable to, and often substantially better than, previous customized implementations for particular similarity functions.",
"In this paper we study how to efficiently perform set-similarity joins in parallel using the popular MapReduce framework. We propose a 3-stage approach for end-to-end set-similarity joins. We take as input a set of records and output a set of joined records based on a set-similarity condition. We efficiently partition the data across nodes in order to balance the workload and minimize the need for replication. We study both self-join and R-S join cases, and show how to carefully control the amount of data kept in main memory on each node. We also propose solutions for the case where, even if we use the most fine-grained partitioning, the data still does not fit in the main memory of a node. We report results from extensive experiments on real datasets, synthetically increased in size, to evaluate the speedup and scaleup properties of the proposed algorithms using Hadoop.",
"Given two input collections of sets, a set-similarity join (SSJoin) identifies all pairs of sets, one from each collection, that have high similarity. Recent work has identified SSJoin as a useful primitive operator in data cleaning. In this paper, we propose new algorithms for SSJoin. Our algorithms have two important features: They are exact, i.e., they always produce the correct answer, and they carry precise performance guarantees. We believe our algorithms are the first to have both features; previous algorithms with performance guarantees are only probabilistically approximate. We demonstrate the effectiveness of our algorithms using a thorough experimental evaluation over real-life and synthetic data sets.",
"String similarity join is an important operation in data integration and cleansing that finds similar string pairs from two collections of strings. More than ten algorithms have been proposed to address this problem in the recent two decades. However, existing algorithms have not been thoroughly compared under the same experimental framework. For example, some algorithms are tested only on specific datasets. This makes it rather difficult for practitioners to decide which algorithms should be used for various scenarios. To address this problem, in this paper we provide a comprehensive survey on a wide spectrum of existing string similarity join algorithms, classify them into different categories based on their main techniques, and compare them through extensive experiments on a variety of real-world datasets with different characteristics. We also report comprehensive findings obtained from the experiments and provide new insights about the strengths and weaknesses of existing similarity join algorithms which can guide practitioners to select appropriate algorithms for various scenarios.",
"There is a wide range of applications that require finding the top-k most similar pairs of records in a given database. However, computing such top-k similarity joins is a challenging problem today, as there is an increasing trend of applications that expect to deal with vast amounts of data. For such data-intensive applications, parallel executions of programs on a large cluster of commodity machines using the MapReduce paradigm have recently received a lot of attention. In this paper, we investigate how the top-k similarity join algorithms can get benefits from the popular MapReduce framework. We first develop the divide-and-conquer and branch-and-bound algorithms. We next propose the all pair partitioning and essential pair partitioning methods to minimize the amount of data transfers between map and reduce functions. We finally perform the experiments with not only synthetic but also real-life data sets. Our performance study confirms the effectiveness and scalability of our MapReduce algorithms.",
"A string similarity join finds similar pairs between two collections of strings. It is an essential operation in many applications, such as data integration and cleaning, and has attracted significant attention recently. In this paper, we study string similarity joins with edit-distance constraints. Existing methods usually employ a filter-and-refine framework and have the following disadvantages: (1) They are inefficient for the data sets with short strings (the average string length is no larger than 30); (2) They involve large indexes; (3) They are expensive to support dynamic update of data sets. To address these problems, we propose a novel framework called trie-join, which can generate results efficiently with small indexes. We use a trie structure to index the strings and utilize the trie structure to efficiently find the similar string pairs based on subtrie pruning. We devise efficient trie-join algorithms and pruning techniques to achieve high performance. Our method can be easily extended to support dynamic update of data sets efficiently. Experimental results show that our algorithms outperform state-of-the-art methods by an order of magnitude on three real data sets with short strings.",
"This work proposes V-SMART-Join, a scalable MapReduce-based framework for discovering all pairs of similar entities. The V-SMART-Join framework is applicable to sets, multisets, and vectors. V-SMART-Join is motivated by the observed skew in the underlying distributions of Internet traffic, and is a family of 2-stage algorithms, where the first stage computes and joins the partial results, and the second stage computes the similarity exactly for all candidate pairs. The V-SMART-Join algorithms are very efficient and scalable in the number of entities, as well as their cardinalities. They were up to 30 times faster than the state of the art algorithm, VCL, when compared on a real dataset of a small size. We also established the scalability of the proposed algorithms by running them on a dataset of a realistic size, on which VCL never succeeded to finish. Experiments were run using real datasets of IPs and cookies, where each IP is represented as a multiset of cookies, and the goal is to discover similar IPs to identify Internet proxies."
]
} |
1706.04266 | 2626925008 | Similarity join, which can find similar objects (e.g., products, names, addresses) across different sources, is powerful in dealing with variety in big data, especially web data. Threshold-driven similarity join, which has been extensively studied in the past, assumes that a user is able to specify a similarity threshold, and then focuses on how to efficiently return the object pairs whose similarities pass the threshold. We argue that the assumption about a well set similarity threshold may not be valid for two reasons. The optimal thresholds for different similarity join tasks may vary a lot. Moreover, the end-to-end time spent on similarity join is likely to be dominated by a back-and-forth threshold-tuning process. In response, we propose preference-driven similarity join. The key idea is to provide several result set preferences, rather than a range of thresholds, for a user to choose from. Intuitively, a result set preference can be considered as an objective function to capture a user's preference on a similarity join result. Once a preference is chosen, we automatically compute the similarity join result optimizing the preference objective. As the proof of concept, we devise two useful preferences and propose a novel preference-driven similarity join framework coupled with effective optimization techniques. Our approaches are evaluated on four real-world web datasets from a diverse range of application scenarios. The experiments show that preference-driven similarity join can achieve high-quality results without a tedious threshold-tuning process. | While the majority of the existing work on similarity join needs to specify a similarity threshold or a limit of the number of results returned, there do exist some studies that seek to find a suitable threshold for similarity join in a supervised way @cite_19 @cite_18 @cite_3 @cite_9 . Both @cite_3 and @cite_9 adopted active learning to tweak the threshold. 
Chaudhuri et al. @cite_18 learned an operator tree, where each node contains a similarity threshold and a similarity function on each of the splitting attributes. Wang et al. @cite_19 modeled this problem as an optimization problem and applied hill climbing to optimize the threshold-selection process. Nevertheless, all these methods need humans to label a number of pairs, which are selected based on either random sampling or active learning. In comparison, our method does not need any labeled data. | {
"cite_N": [
"@cite_19",
"@cite_18",
"@cite_9",
"@cite_3"
],
"mid": [
"2114764731",
"2143124645",
"2028431844",
"2119320829"
],
"abstract": [
"Entity matching that finds records referring to the same entity is an important operation in data cleaning and integration. Existing studies usually use a given similarity function to quantify the similarity of records, and focus on devising index structures and algorithms for efficient entity matching. However it is a big challenge to define \"how similar is similar\" for real applications, since it is rather hard to automatically select appropriate similarity functions. In this paper we attempt to address this problem. As there are a large number of similarity functions, and even worse thresholds may have infinite values, it is rather expensive to find appropriate similarity functions and thresholds. Fortunately, we have an observation that different similarity functions and thresholds have redundancy, and we have an opportunity to prune inappropriate similarity functions. To this end, we propose effective optimization techniques to eliminate such redundancy, and devise efficient algorithms to find the best similarity functions. The experimental results on both real and synthetic datasets show that our method achieves high accuracy and outperforms the baseline algorithms.",
"Record matching is the task of identifying records that match the same real world entity. This is a problem of great significance for a variety of business intelligence applications. Implementations of record matching rely on exact as well as approximate string matching (e.g., edit distances) and use of external reference data sources. Record matching can be viewed as a query composed of a small set of primitive operators. However, formulating such record matching queries is difficult and depends on the specific application scenario. Specifically, the number of options both in terms of string matching operations as well as the choice of external sources can be daunting. In this paper, we exploit the availability of positive and negative examples to search through this space and suggest an initial record matching query. Such queries can be subsequently modified by the programmer as needed. We ensure that the record matching queries our approach produces are (1) efficient: these queries can be run on large datasets by leveraging operations that are well-supported by RDBMSs, and (2) explainable: the queries are easy to understand so that they may be modified by the programmer with relative ease. We demonstrate the effectiveness of our approach on several real-world datasets.",
"Identifying approximately identical strings is key for many data cleaning and data integration processes, including similarity join and record matching. The accuracy of such tasks crucially depends on appropriate choices of string similarity measures and thresholds for the particular dataset. Manual selection of similarity measures and thresholds is infeasible. Other approaches rely on the existence of adequate historic ground-truth or massive manual effort. To address this problem, we propose an Active Learning algorithm which selects a best performing similarity measure in a given set while optimizing a decision threshold. Active Learning minimizes the number of user queries needed to arrive at an appropriate classifier. Queries require only the label match / no match, which end users can easily provide in their domain. Evaluation on well-known string matching benchmark data sets shows that our approach achieves highly accurate results with a small amount of manual labeling required.",
"We consider the problem of learning a record matching package (classifier) in an active learning setting. In active learning, the learning algorithm picks the set of examples to be labeled, unlike more traditional passive learning setting where a user selects the labeled examples. Active learning is important for record matching since manually identifying a suitable set of labeled examples is difficult. Previous algorithms that use active learning for record matching have serious limitations: The packages that they learn lack quality guarantees and the algorithms do not scale to large input sizes. We present new algorithms for this problem that overcome these limitations. Our algorithms are fundamentally different from traditional active learning approaches, and are designed ground up to exploit problem characteristics specific to record matching. We include a detailed experimental evaluation on realworld data demonstrating the effectiveness of our algorithms."
]
} |
1706.04266 | 2626925008 | Similarity join, which can find similar objects (e.g., products, names, addresses) across different sources, is powerful in dealing with variety in big data, especially web data. Threshold-driven similarity join, which has been extensively studied in the past, assumes that a user is able to specify a similarity threshold, and then focuses on how to efficiently return the object pairs whose similarities pass the threshold. We argue that the assumption about a well set similarity threshold may not be valid for two reasons. The optimal thresholds for different similarity join tasks may vary a lot. Moreover, the end-to-end time spent on similarity join is likely to be dominated by a back-and-forth threshold-tuning process. In response, we propose preference-driven similarity join. The key idea is to provide several result set preferences, rather than a range of thresholds, for a user to choose from. Intuitively, a result set preference can be considered as an objective function to capture a user's preference on a similarity join result. Once a preference is chosen, we automatically compute the similarity join result optimizing the preference objective. As the proof of concept, we devise two useful preferences and propose a novel preference-driven similarity join framework coupled with effective optimization techniques. Our approaches are evaluated on four real-world web datasets from a diverse range of application scenarios. The experiments show that preference-driven similarity join can achieve high-quality results without a tedious threshold-tuning process. | There are studies about preferences in the database research (See @cite_10 for a thorough survey.). However, they mainly focus on how the joined tuples are ranked and selected, instead of how the tables (and in our case sets of objects) are joined to generate the joined tuples. 
Alexe et al. @cite_25 discussed user-defined preference rules for integrating temporal data, which is orthogonal to our work. To the best of our knowledge, @cite_29 is the only work discussing result set preference for joining relational tables. | {
"cite_N": [
"@cite_29",
"@cite_10",
"@cite_25"
],
"mid": [
"2770217102",
"2168976073",
"2296063924"
],
"abstract": [
"In many applications, such as data integration and big data analytics, one has to integrate data from multiple sources without detailed and accurate schema information. The state of the art focuses on matching attributes among sources based on the information derived from the data in those sources. However, a best join result according to a method's own pre-determined criteria may not fit a user's best interest. In this paper, we tackle the challenge from a novel angle and investigate how to join schemaless tables to meet a user preference the best. We identify a set of essential preferences that are useful in various scenarios, such as minimizing the number of tuples in outer join results and maximizing the entropy of the joining key's distribution. We also develop a systematic method to compute the best join predicate optimizing an objective function representing a user preference. We conduct extensive experiments on 4 large datasets and compare with 4 baselines from the state of the art of schema matching and attribute clustering. The experimental results clearly show that our algorithm outperforms the baselines significantly in accuracy in all the cases, and consumes comparable running time.",
"Preferences have been traditionally studied in philosophy, psychology, and economics and applied to decision making problems. Recently, they have attracted the attention of researchers in other fields, such as databases where they capture soft criteria for queries. Databases bring a whole fresh perspective to the study of preferences, both computational and representational. From a representational perspective, the central question is how we can effectively represent preferences and incorporate them in database querying. From a computational perspective, we can look at how we can efficiently process preferences in the context of database queries. Several approaches have been proposed but a systematic study of these works is missing. The purpose of this survey is to provide a framework for placing existing works in perspective and highlight critical open challenges to serve as a springboard for researchers in database systems. We organize our study around three axes: preference representation, preference composition, and preference query processing.",
"A complete description of an entity is rarely contained in a single data source, but rather, it is often distributed across different data sources. Applications based on personal electronic health records, sentiment analysis, and financial records all illustrate that significant value can be derived from integrated, consistent, and queryable profiles of entities from different sources. Even more so, such integrated profiles are considerably enhanced if temporal information from different sources is carefully accounted for. We develop a simple and yet versatile operator, called prawn, that is typically called as a final step of an entity integration workflow. Prawn is capable of consistently integrating and resolving temporal conflicts in data that may contain multiple dimensions of time based on a set of preference rules specified by a user (hence the name prawn for preference-aware union). In the event that not all conflicts can be resolved through preferences, one can enumerate each possible consistent interpretation of the result returned by prawn at a given time point through a polynomial-delay algorithm. In addition to providing algorithms for implementing prawn, we study and establish several desirable properties of prawn. First, prawn produces the same temporally integrated outcome, modulo representation of time, regardless of the order in which data sources are integrated. Second, prawn can be customized to integrate temporal data for different applications by specifying application-specific preference rules. Third, we show experimentally that our implementation of prawn is feasible on both \"small\" and \"big\" data platforms in that it is efficient in both storage and execution time. Finally, we demonstrate a fundamental advantage of prawn: we illustrate that standard query languages can be immediately used to pose useful temporal queries over the integrated and resolved entity repository."
]
} |
1706.04370 | 2624787968 | The inference of network topologies from relational data is an important problem in data analysis. Exemplary applications include the reconstruction of social ties from data on human interactions, the inference of gene co-expression networks from DNA microarray data, or the learning of semantic relationships based on co-occurrences of words in documents. Solving these problems requires techniques to infer significant links in noisy relational data. In this short paper, we propose a new statistical modeling framework to address this challenge. The framework builds on generalized hypergeometric ensembles, a class of generative stochastic models that give rise to analytically tractable probability spaces of directed, multi-edge graphs. We show how this framework can be used to assess the significance of links in noisy relational data. We illustrate our method in two data sets capturing spatio-temporal proximity relations between actors in a social system. The results show that our analytical framework provides a new approach to infer significant links from relational data, with interesting perspectives for the mining of data on social systems. | Applying predictive analytics techniques, a first set of works studied the problem from the perspective of @cite_20 . In @cite_6 , a supervised learning technique is used to predict of social ties based on unlabeled interactions. The authors of @cite_13 show that tensor factorization techniques allow to infer international relations from data that capture how often two countries co-occur in news reports. In @cite_28 , a link-based latent variable model is used to predict friendship relations using data on social interactions. | {
"cite_N": [
"@cite_28",
"@cite_13",
"@cite_6",
"@cite_20"
],
"mid": [
"2166293769",
"1995368504",
"2022867359",
"2148847267"
],
"abstract": [
"Previous work analyzing social networks has mainly focused on binary friendship relations. However, in online social networks the low cost of link formation can lead to networks with heterogeneous relationship strengths (e.g., acquaintances and best friends mixed together). In this case, the binary friendship indicator provides only a coarse representation of relationship information. In this work, we develop an unsupervised model to estimate relationship strength from interaction activity (e.g., communication, tagging) and user similarity. More specifically, we formulate a link-based latent variable model, along with a coordinate ascent optimization procedure for the inference. We evaluate our approach on real-world data from Facebook and LinkedIn, showing that the estimated link weights result in higher autocorrelation and lead to improved classification accuracy.",
"We present a Bayesian tensor factorization model for inferring latent group structures from dynamic pairwise interaction patterns. For decades, political scientists have collected and analyzed records of the form \"country i took action a toward country j at time t\" - known as dyadic events - in order to form and test theories of international relations. We represent these event data as a tensor of counts and develop Bayesian Poisson tensor factorization to infer a low-dimensional, interpretable representation of their salient patterns. We demonstrate that our model's predictive performance is better than that of standard non-negative tensor factorization methods. We also provide a comparison of our variational updates to their maximum likelihood counterparts. In doing so, we identify a better way to form point estimates of the latent factors than that typically used in Bayesian Poisson matrix factorization. Finally, we showcase our model as an exploratory analysis tool for political scientists. We show that the inferred latent factor matrices capture interpretable multilateral relations that both conform to and inform our knowledge of international affairs.",
"It is well known that different types of social ties have essentially different influence on people. However, users in online social networks rarely categorize their contacts into \"family\", \"colleagues\", or \"classmates\". While a bulk of research has focused on inferring particular types of relationships in a specific social network, few publications systematically study the generalization of the problem of inferring social ties over multiple heterogeneous networks. In this work, we develop a framework for classifying the type of social relationships by learning across heterogeneous networks. The framework incorporates social theories into a factor graph model, which effectively improves the accuracy of inferring the type of social relationships in a target network by borrowing knowledge from a different source network. Our empirical study on five different genres of networks validates the effectiveness of the proposed framework. For example, by leveraging information from a coauthor network with labeled advisor-advisee relationships, the proposed framework is able to obtain an F1-score of 90% (8-28% improvements over alternative methods) for inferring manager-subordinate relationships in an enterprise email network.",
"Given a snapshot of a social network, can we infer which new interactions among its members are likely to occur in the near future? We formalize this question as the link-prediction problem, and we develop approaches to link prediction based on measures for analyzing the “proximity” of nodes in a network. Experiments on large coauthorship networks suggest that information about future interactions can be extracted from network topology alone, and that fairly subtle measures for detecting node proximity can outperform more direct measures. © 2007 Wiley Periodicals, Inc."
]
} |
1706.04370 | 2624787968 | The inference of network topologies from relational data is an important problem in data analysis. Exemplary applications include the reconstruction of social ties from data on human interactions, the inference of gene co-expression networks from DNA microarray data, or the learning of semantic relationships based on co-occurrences of words in documents. Solving these problems requires techniques to infer significant links in noisy relational data. In this short paper, we propose a new statistical modeling framework to address this challenge. The framework builds on generalized hypergeometric ensembles, a class of generative stochastic models that give rise to analytically tractable probability spaces of directed, multi-edge graphs. We show how this framework can be used to assess the significance of links in noisy relational data. We illustrate our method in two data sets capturing spatio-temporal proximity relations between actors in a social system. The results show that our analytical framework provides a new approach to infer significant links from relational data, with interesting perspectives for the mining of data on social systems. | Using the special characteristics of time-stamped social interactions or geographical co-occurrences, a second line of works has additionally accounted for . Studying data on time-stamped proximities of students at MIT campus, the authors of @cite_17 show that the temporal and spatial distribution of proximity events allows to infer social ties with high accuracy. In @cite_32 , a model that captures location diversity, regularity, intensity and duration is used to predict social ties based on co-location events. An entropy-based approach taking into account the diversity of interactions' locations has been used in @cite_29 . | {
"cite_N": [
"@cite_29",
"@cite_32",
"@cite_17"
],
"mid": [
"2013315566",
"2126895033",
"2166692930"
],
"abstract": [
"The ubiquity of mobile devices and the popularity of location-based-services have generated, for the first time, rich datasets of people's location information at a very high fidelity. These location datasets can be used to study people's behavior - for example, social studies have shown that people, who are seen together frequently at the same place and at the same time, are most probably socially related. In this paper, we are interested in inferring these social connections by analyzing people's location information, which is useful in a variety of application domains from sales and marketing to intelligence analysis. In particular, we propose an entropy-based model (EBM) that not only infers social connections but also estimates the strength of social connections by analyzing people's co-occurrences in space and time. We examine two independent ways: diversity and weighted frequency, through which co-occurrences contribute to social strength. In addition, we take the characteristics of each location into consideration in order to compensate for cases where only limited location information is available. We conducted extensive sets of experiments with real-world datasets including both people's location data and their social connections, where we used the latter as the ground-truth to verify the results of applying our approach to the former. We show that our approach outperforms the competitors.",
"This paper examines the location traces of 489 users of a location sharing social network for relationships between the users' mobility patterns and structural properties of their underlying social network. We introduce a novel set of location-based features for analyzing the social context of a geographic region, including location entropy, which measures the diversity of unique visitors of a location. Using these features, we provide a model for predicting friendship between two users by analyzing their location trails. Our model achieves significant gains over simpler models based only on direct properties of the co-location histories, such as the number of co-locations. We also show a positive relationship between the entropy of the locations the user visits and the number of social ties that user has in the network. We discuss how the offline mobility of users can have implications for both researchers and designers of online social networks.",
"Data collected from mobile phones have the potential to provide insight into the relational dynamics of individuals. This paper compares observational data from mobile phones with standard self-report survey data. We find that the information from these two data sources is overlapping but distinct. For example, self-reports of physical proximity deviate from mobile phone records depending on the recency and salience of the interactions. We also demonstrate that it is possible to accurately infer 95% of friendships based on the observational data alone, where friend dyads demonstrate distinctive temporal and spatial patterns in their physical proximity and calling patterns. These behavioral patterns, in turn, allow the prediction of individual-level outcomes such as job satisfaction."
]
} |
1706.04370 | 2624787968 | The inference of network topologies from relational data is an important problem in data analysis. Exemplary applications include the reconstruction of social ties from data on human interactions, the inference of gene co-expression networks from DNA microarray data, or the learning of semantic relationships based on co-occurrences of words in documents. Solving these problems requires techniques to infer significant links in noisy relational data. In this short paper, we propose a new statistical modeling framework to address this challenge. The framework builds on generalized hypergeometric ensembles, a class of generative stochastic models that give rise to analytically tractable probability spaces of directed, multi-edge graphs. We show how this framework can be used to assess the significance of links in noisy relational data. We illustrate our method in two data sets capturing spatio-temporal proximity relations between actors in a social system. The results show that our analytical framework provides a new approach to infer significant links from relational data, with interesting perspectives for the mining of data on social systems. | Addressing scenarios where neither training data nor spatio-temporal information is available, a third line of works is based on statistical models of random graphs. Such models can be used as null models for observed dyadic interactions, which help us to assess whether the relations between a given pair of elements occur significantly more often than expected. Existing works in this area typically rely on standard modeling frameworks, such as @cite_33 @cite_22 , or the configuration model for graphs with given degree sequence or distribution @cite_16 . On the one hand, these approaches provide statistically principled network inference and learning methods for general relational data @cite_9 @cite_10 @cite_24 @cite_12 . 
On the other hand, the underlying generative models are often not analytically tractable, thus requiring expensive numerical simulations @cite_33 @cite_10 . Proposing a framework of analytically tractable generative models for directed and undirected multi-edge graphs, in this work we close this research gap. | {
"cite_N": [
"@cite_22",
"@cite_33",
"@cite_9",
"@cite_24",
"@cite_16",
"@cite_10",
"@cite_12"
],
"mid": [
"1892403611",
"2160268549",
"2125209314",
"2045078975",
"2044881936",
"1954262271",
"2620724943"
],
"abstract": [
"We address a fundamental problem that is systematically encountered when modeling real-world complex systems of societal relevance: the limitedness of the information available. In the case of economic and financial networks, privacy issues severely limit the information that can be accessed and, as a consequence, the possibility of correctly estimating the resilience of these systems to events such as financial shocks, crises and cascade failures. Here we present an innovative method to reconstruct the structure of such partially-accessible systems, based on the knowledge of intrinsic node-specific properties and of the number of connections of only a limited subset of nodes. This information is used to calibrate an inference procedure based on fundamental concepts derived from statistical physics, which allows to generate ensembles of directed weighted networks intended to represent the real system—so that the real network properties can be estimated as their average values within the ensemble. We test the method both on synthetic and empirical networks, focusing on the properties that are commonly used to measure systemic risk. Indeed, the method shows a remarkable robustness with respect to the limitedness of the information available, thus representing a valuable tool for gaining insights on privacy-protected economic and financial systems.",
"This article provides an introductory summary to the formulation and application of exponential random graph models for social networks. The possible ties among nodes of a network are regarded as random variables, and assumptions about dependencies among these random tie variables determine the general form of the exponential random graph model for the network. Examples of different dependence assumptions and their associated models are given, including Bernoulli, dyad-independent and Markov random graph models. The incorporation of actor attributes in social selection models is also reviewed. Newer, more complex dependence assumptions are briefly outlined. Estimation procedures are discussed, including new methods for Monte Carlo maximum likelihood estimation. We foreshadow the discussion taken up in other papers in this special edition: that the homogeneous Markov random graph models of Frank and Strauss [Frank, O., Strauss, D., 1986. Markov graphs. Journal of the American Statistical Association 81, 832–842] are not appropriate for many observed networks, whereas the new model specifications of [Snijders, T.A.B., Pattison, P., Robins, G.L., Handock, M. New specifications for exponential random graph",
"The quantification of the complexity of networks is, today, a fundamental problem in the physics of complex systems. A possible roadmap to solve the problem is via extending key concepts of information theory to networks. In this Rapid Communication we propose how to define the Shannon entropy of a network ensemble and how it relates to the Gibbs and von Neumann entropies of network ensembles. The quantities we introduce here will play a crucial role for the formulation of null models of networks through maximum-entropy arguments and will contribute to inference problems emerging in the field of complex networks.",
"A common and important problem arising in the study of networks is how to divide the vertices of a given network into one or more groups, called communities, in such a way that vertices of the same community are more interconnected than vertices belonging to different ones. We propose and investigate a testing based community detection procedure called Extraction of Statistically Significant Communities (ESSC). The ESSC procedure is based on p-values for the strength of connection between a single vertex and a set of vertices under a reference distribution derived from a conditional configuration network model. The procedure automatically selects both the number of communities in the network and their size. Moreover, ESSC can handle overlapping communities and, unlike the majority of existing methods, identifies “background” vertices that do not belong to a well-defined community. The method has only one parameter, which controls the stringency of the hypothesis tests. We investigate the performance and potential use of ESSC and compare it with a number of existing methods, through a validation study using four real network data sets. In addition, we carry out a simulation study to assess the effectiveness of ESSC in networks with various types of community structure, including networks with overlapping communities and those with background vertices. These results suggest that ESSC is an effective exploratory tool for the discovery of relevant community structure in complex network systems. Data and software are available at http: www.unc.edu jameswd research.html.",
"Given a sequence of nonnegative real numbers λ0, λ1… which sum to 1, we consider random graphs having approximately λi n vertices of degree i. Essentially, we show that if Σ i(i - 2)λi > 0, then such graphs almost surely have a giant component, while if Σ i(i - 2)λi < 0, then almost surely all components in such graphs are small. We can apply these results to Gn,p, Gn,M, and other well-known models of random graphs. There are also applications related to the chromatic number of sparse random graphs. © 1995 Wiley Periodicals, Inc.",
"A substantial volume of research has been devoted to studies of community structure in networks, but communities are not the only possible form of large-scale network structure. Here we describe a broad extension of community structure that encompasses traditional communities but includes a wide range of generalized structural patterns as well. We describe a principled method for detecting this generalized structure in empirical network data and demonstrate with real-world examples how it can be used to learn new things about the shape and meaning of networks.",
"Networks provide an informative, yet non-redundant description of complex systems only if links represent truly dyadic relationships that cannot be directly traced back to node-specific properties such as size, importance, or coordinates in some embedding space. In any real-world network, some links may be reducible, and others irreducible, to such local properties. This dichotomy persists despite the steady increase in data availability and resolution, which actually determines an even stronger need for filtering techniques aimed at discerning essential links from non-essential ones. Here we introduce a rigorous method that, for any desired level of statistical significance, outputs the network backbone that is irreducible to the local properties of nodes, i.e. their degrees and strengths. Unlike previous approaches, our method employs an exact maximum-entropy formulation guaranteeing that the filtered network encodes only the links that cannot be inferred from local information. Extensive empirical analysis confirms that this approach uncovers essential backbones that are otherwise hidden amidst many redundant relationships and inaccessible to other methods. For instance, we retrieve the hub-and-spoke skeleton of the US airport network and many specialised patterns of international trade. Being irreducible to local transportation and economic constraints of supply and demand, these backbones single out genuinely higher-order wiring principles."
]
} |
1706.04388 | 2626816457 | Recent research in image and video recognition indicates that many visual processes can be thought of as being generated by a time-varying generative model. A nearby descriptive model for visual processes is thus a statistical distribution that varies over time. Specifically, modeling visual processes as streams of histograms generated by a kernelized linear dynamic system turns out to be efficient. We refer to such a model as a system of bags. In this paper, we investigate systems of bags with special emphasis on dynamic scenes and dynamic textures. Parameters of linear dynamic systems suffer from ambiguities. In order to cope with these ambiguities in the kernelized setting, we develop a kernelized version of the alignment distance. For its computation, we use a Jacobi-type method and prove its convergence to a set of critical points. We employ it as a dissimilarity measure on Systems of Bags. As such, it outperforms other known dissimilarity measures for kernelized linear dynamic systems, in particular the Martin distance and the Maximum singular value distance, in every tested classification setting. A considerable margin can be observed in settings, where classification is performed with respect to an abstract mean of video sets. For this scenario, the presented approach can outperform the state-of-the-art techniques, such as dynamic fractal spectrum or orthogonal tensor dictionary learning. | The temporal evolution of histograms cannot be well described by linear dynamic systems. Instead, we employ the concept of kernelized linear dynamic systems (KLDS), which model the observations in a kernel feature space. The parameters describing the KLDS are the representations of the visual processes employed in this work. Using these descriptors in the context of supervised learning requires the definition of a dissimilarity measure. 
The available literature on KLDSs offers only a manageable number of dissimilarity measures, and these perform insufficiently on the SoB descriptors used in this work, as will be shown in the experimental Section . This may be because they put too much emphasis on the dynamic part of the KLDS parameters, whereas in the setting discussed in this work, static information, i.e. the information not related to the temporal context, has considerable significance. The KLDS itself was introduced in @cite_11 and motivated by the recognition of dynamic textures. As a dissimilarity measure, a kernelized version of the widely adopted Martin distance @cite_20 was applied. | {
"cite_N": [
"@cite_20",
"@cite_11"
],
"mid": [
"1973532874",
"2160144769"
],
"abstract": [
"We define a notion of subspace angles between two linear, autoregressive moving average, single-input–single-output models by considering the principal angles between subspaces that are derived from these models. We show how a recently defined metric for these models, which is based on their cepstra, relates to the subspace angles between the models.",
"The dynamic texture is a stochastic video model that treats the video as a sample from a linear dynamical system. The simple model has been shown to be surprisingly useful in domains such as video synthesis, video segmentation, and video classification. However, one major disadvantage of the dynamic texture is that it can only model video where the motion is smooth, i.e. video textures where the pixel values change smoothly. In this work, we propose an extension of the dynamic texture to address this issue. Instead of learning a linear observation function with PCA, we learn a non-linear observation function using kernel-PCA. The resulting kernel dynamic texture is capable of modeling a wider range of video motion, such as chaotic motion (e.g. turbulent water) or camera motion (e.g. panning). We derive the necessary steps to compute the Martin distance between kernel dynamic textures, and then validate the new model through classification experiments on video containing camera motion."
]
} |
1706.04388 | 2626816457 | Recent research in image and video recognition indicates that many visual processes can be thought of as being generated by a time-varying generative model. A nearby descriptive model for visual processes is thus a statistical distribution that varies over time. Specifically, modeling visual processes as streams of histograms generated by a kernelized linear dynamic system turns out to be efficient. We refer to such a model as a system of bags. In this paper, we investigate systems of bags with special emphasis on dynamic scenes and dynamic textures. Parameters of linear dynamic systems suffer from ambiguities. In order to cope with these ambiguities in the kernelized setting, we develop a kernelized version of the alignment distance. For its computation, we use a Jacobi-type method and prove its convergence to a set of critical points. We employ it as a dissimilarity measure on Systems of Bags. As such, it outperforms other known dissimilarity measures for kernelized linear dynamic systems, in particular the Martin distance and the Maximum singular value distance, in every tested classification setting. A considerable margin can be observed in settings, where classification is performed with respect to an abstract mean of video sets. For this scenario, the presented approach can outperform the state-of-the-art techniques, such as dynamic fractal spectrum or orthogonal tensor dictionary learning. | Modeling visual processes as SoBs i.e. KLDSs of histograms was employed in @cite_13 for the classification of human actions. The authors modeled videos of human actions as streams of histograms of optical flow (HOOF). As a similarity measure, a Binet-Cauchy Maximum Singular Value kernel was applied. The work was further enhanced in @cite_26 , where human interactions were targeted. 
More generally, bags and histograms as representations of samples of multidimensional time signals were used both for human action @cite_32 and dynamic scene @cite_18 recognition, but the temporal order was neglected in both cases. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_13",
"@cite_32"
],
"mid": [
"2006656585",
"2275363520",
"2128730107",
"1979378996"
],
"abstract": [
"This paper presents a unified bag of visual word (BoW) framework for dynamic scene recognition. The approach builds on primitive features that uniformly capture spatial and temporal orientation structure of the imagery (e.g., video), as extracted via application of a bank of spatiotemporally oriented filters. Various feature encoding techniques are investigated to abstract the primitives to an intermediate representation that is best suited to dynamic scene representation. Further, a novel approach to adaptive pooling of the encoded features is presented that captures spatial layout of the scene even while being robust to situations where camera motion and scene dynamics are confounded. The resulting overall approach has been evaluated on two standard, publicly available dynamic scene datasets. The results show that in comparison to a representative set of alternatives, the proposed approach outperforms the previous state-of-the-art in classification accuracy by 10%.",
"In this paper we model binary people interactions by forming temporal interaction trajectories, under the form of a time series, coupling together the body motion of each individual as well as their proximity relationships. Such trajectories are modeled with a non-linear dynamical system (NLDS). We develop a framework that entails the use of so-called pairwise kernels, able to compare interaction trajectories in the space of NLDS. To do so we address the problem of modeling the Riemannian structure of the trajectory space, and we also prove that kernels have to satisfy certain symmetry properties, which are peculiar of this interaction modeling framework. Experiment results show that this approach is quite promising, as it is able to match and improve state-of-the-art classification and retrieval accuracies on two human interaction datasets.",
"System theoretic approaches to action recognition model the dynamics of a scene with linear dynamical systems (LDSs) and perform classification using metrics on the space of LDSs, e.g. Binet-Cauchy kernels. However, such approaches are only applicable to time series data living in a Euclidean space, e.g. joint trajectories extracted from motion capture data or feature point trajectories extracted from video. Much of the success of recent object recognition techniques relies on the use of more complex feature descriptors, such as SIFT descriptors or HOG descriptors, which are essentially histograms. Since histograms live in a non-Euclidean space, we can no longer model their temporal evolution with LDSs, nor can we classify them using a metric for LDSs. In this paper, we propose to represent each frame of a video using a histogram of oriented optical flow (HOOF) and to recognize human actions by classifying HOOF time-series. For this purpose, we propose a generalization of the Binet-Cauchy kernels to nonlinear dynamical systems (NLDS) whose output lives in a non-Euclidean space, e.g. the space of histograms. This can be achieved by using kernels defined on the original non-Euclidean space, leading to a well-defined metric for NLDSs. We use these kernels for the classification of actions in video sequences using (HOOF) as the output of the NLDS. We evaluate our approach to recognition of human actions in several scenarios and achieve encouraging results.",
"Highlights: We propose a new representation of human actions that is extremely interpretable. We represent a given action as a sequence of the most informative joints (SMIJ). SMIJ is successful at capturing the invariances in different human actions. SMIJ outperforms standard methods for the action recognition task from skeletal data. SMIJ is resilient to dataset bias and generalizes well across different datasets. Much of the existing work on action recognition combines simple features with complex classifiers or models to represent an action. Parameters of such models usually do not have any physical meaning nor do they provide any qualitative insight relating the action to the actual motion of the body or its parts. In this paper, we propose a new representation of human actions called sequence of the most informative joints (SMIJ), which is extremely easy to interpret. At each time instant, we automatically select a few skeletal joints that are deemed to be the most informative for performing the current action based on highly interpretable measures such as the mean or variance of joint angle trajectories. We then represent the action as a sequence of these most informative joints. Experiments on multiple databases show that the SMIJ representation is discriminative for human action recognition and performs better than several state-of-the-art algorithms."
]
} |
1706.04388 | 2626816457 | Recent research in image and video recognition indicates that many visual processes can be thought of as being generated by a time-varying generative model. A nearby descriptive model for visual processes is thus a statistical distribution that varies over time. Specifically, modeling visual processes as streams of histograms generated by a kernelized linear dynamic system turns out to be efficient. We refer to such a model as a system of bags. In this paper, we investigate systems of bags with special emphasis on dynamic scenes and dynamic textures. Parameters of linear dynamic systems suffer from ambiguities. In order to cope with these ambiguities in the kernelized setting, we develop a kernelized version of the alignment distance. For its computation, we use a Jacobi-type method and prove its convergence to a set of critical points. We employ it as a dissimilarity measure on Systems of Bags. As such, it outperforms other known dissimilarity measures for kernelized linear dynamic systems, in particular the Martin distance and the Maximum singular value distance, in every tested classification setting. A considerable margin can be observed in settings, where classification is performed with respect to an abstract mean of video sets. For this scenario, the presented approach can outperform the state-of-the-art techniques, such as dynamic fractal spectrum or orthogonal tensor dictionary learning. | The novelty of this work is to explore the framework of SoB in the context of dynamic scene and large-scale dynamic texture recognition and to develop an appropriate dissimilarity measure. To this end, we adopt the framework of the alignment distance from @cite_3 and develop a kernelized version suitable for the comparison of SoBs. Unlike the mentioned dissimilarity measures for SoBs, the impact of the static and dynamic components can be chosen depending on the employed generative image model. 
Besides, its property of being the square of a metric and its simple definition based on the Frobenius distance allow for the definition of abstract means @cite_14 . This is crucial for the classification via the nearest class center (NCC) @cite_35 . A part of this work is dedicated to the computation of abstract means of sets of KLDSs for avoiding the memory burden of one-nearest-neighbor (1-NN) classification. | {
"cite_N": [
"@cite_35",
"@cite_14",
"@cite_3"
],
"mid": [
"2001937638",
"2217701558",
"1972390340"
],
"abstract": [
"The research context of this article is the recognition and description of dynamic textures. In image processing, the wavelet transform has been successfully used for characterizing static textures. To our best knowledge, only two works are using spatio-temporal multiscale decomposition based on the tensor product for dynamic texture recognition. One contribution of this article is to analyze and compare the ability of the 2D+T curvelet transform, a geometric multiscale decomposition, for characterizing dynamic textures in image sequences. Two approaches using the 2D+T curvelet transform are presented and compared using three new large databases. A second contribution is the construction of these three publicly available benchmarks of increasing complexity. Existing benchmarks are either too small, not available, or not always constructed using a reference database. Feature vectors used for recognition are described as well as their relevance, and performances of the different methods are discussed. Finally, future prospects are exposed.",
"",
"We introduce a framework for defining a distance on the (non-Euclidean) space of Linear Dynamical Systems (LDSs). The proposed distance is induced by the action of the group of orthogonal matrices on the space of state-space realizations of LDSs. This distance can be efficiently computed for large-scale problems, hence it is suitable for applications in the analysis of dynamic visual scenes and other high dimensional time series. Based on this distance we devise a simple LDS averaging algorithm, which can be used for classification and clustering of time-series data. We test the validity as well as the performance of our group-action based distance on synthetic as well as real data and provide comparison with state-of-the-art methods."
]
} |
1706.04388 | 2626816457 | Recent research in image and video recognition indicates that many visual processes can be thought of as being generated by a time-varying generative model. A nearby descriptive model for visual processes is thus a statistical distribution that varies over time. Specifically, modeling visual processes as streams of histograms generated by a kernelized linear dynamic system turns out to be efficient. We refer to such a model as a system of bags. In this paper, we investigate systems of bags with special emphasis on dynamic scenes and dynamic textures. Parameters of linear dynamic systems suffer from ambiguities. In order to cope with these ambiguities in the kernelized setting, we develop a kernelized version of the alignment distance. For its computation, we use a Jacobi-type method and prove its convergence to a set of critical points. We employ it as a dissimilarity measure on Systems of Bags. As such, it outperforms other known dissimilarity measures for kernelized linear dynamic systems, in particular the Martin distance and the Maximum singular value distance, in every tested classification setting. A considerable margin can be observed in settings, where classification is performed with respect to an abstract mean of video sets. For this scenario, the presented approach can outperform the state-of-the-art techniques, such as dynamic fractal spectrum or orthogonal tensor dictionary learning. | The computation of alignment distances involves modeling the appearance of a visual process as an equivalence class of points on a Stiefel manifold. This is closely related to Grassmann based models. The authors of @cite_38 model visual processes as points on kernelized Grassmann manifolds, while our approach employs kernelized Stiefel manifolds in a similar manner. In particular, the authors propose to model the spaces of video frames, or of temporally localized features extracted from them, as sparse codes via points on Grassmann manifolds. 
Furthermore, a kernel dictionary learning approach for dynamic texture recognition was employed in @cite_17 . | {
"cite_N": [
"@cite_38",
"@cite_17"
],
"mid": [
"2142040002",
"2461069268"
],
"abstract": [
"Recent advances in computer vision and machine learning suggest that a wide range of problems can be addressed more appropriately by considering non-Euclidean geometry. In this paper we explore sparse dictionary learning over the space of linear subspaces, which form Riemannian structures known as Grassmann manifolds. To this end, we propose to embed Grassmann manifolds into the space of symmetric matrices by an isometric mapping, which enables us to devise a closed-form solution for updating a Grassmann dictionary, atom by atom. Furthermore, to handle non-linearity in data, we propose a kernelised version of the dictionary learning algorithm. Experiments on several classification tasks (face recognition, action recognition, dynamic texture classification) show that the proposed approach achieves considerable improvements in discrimination accuracy, in comparison to state-of-the-art methods such as kernelised Affine Hull Method and graph-embedding Grassmann discriminant analysis.",
"Most existing dictionary learning algorithms consider a linear sparse model, which often cannot effectively characterize the nonlinear properties present in many types of visual data, e.g. dynamic texture (DT). Such nonlinear properties can be exploited by the so-called kernel sparse coding. This paper proposed an equiangular kernel dictionary learning method with optimal mutual coherence to exploit the nonlinear sparsity of high-dimensional visual data. Two main issues are addressed in the proposed method: (1) coding stability for redundant dictionary of infinite-dimensional space, and (2) computational efficiency for computing kernel matrix of training samples of high-dimensional data. The proposed kernel sparse coding method is applied to dynamic texture analysis with both local DT pattern extraction and global DT pattern characterization. The experimental results showed its performance gain over existing methods."
]
} |
1706.04097 | 2626994141 | Non-negative matrix factorization is a basic tool for decomposing data into the feature and weight matrices under non-negativity constraints, and in practice is often solved in the alternating minimization framework. However, it is unclear whether such algorithms can recover the ground-truth feature matrix when the weights for different features are highly correlated, which is common in applications. This paper proposes a simple and natural alternating gradient descent based algorithm, and shows that with a mild initialization it provably recovers the ground-truth in the presence of strong correlations. In most interesting cases, the correlation can be in the same order as the highest possible. Our analysis also reveals its several favorable features including robustness to noise. We complement our theoretical results with empirical studies on semi-synthetic datasets, demonstrating its advantage over several popular methods in recovering the ground-truth. | Non-negative matrix factorization has a rich empirical history, starting with the practical algorithms of @cite_16 @cite_5 @cite_10 . It has been widely used in applications and there exist various methods for NMF, e.g., @cite_1 @cite_10 @cite_18 @cite_3 @cite_2 . However, they do not have provable recovery guarantees. | {
"cite_N": [
"@cite_18",
"@cite_1",
"@cite_3",
"@cite_2",
"@cite_5",
"@cite_16",
"@cite_10"
],
"mid": [
"1594523130",
"",
"2951030277",
"168764236",
"1902027874",
"2148365208",
"2135029798"
],
"abstract": [
"In the paper we present new Alternating Least Squares (ALS) algorithms for Nonnegative Matrix Factorization (NMF) and their extensions to 3D Nonnegative Tensor Factorization (NTF) that are robust in the presence of noise and have many potential applications, including multi-way Blind Source Separation (BSS), multi-sensory or multi-dimensional data analysis, and nonnegative neural sparse coding. We propose to use local cost functions whose simultaneous or sequential (one by one) minimization leads to a very simple ALS algorithm which works under some sparsity constraints both for an under-determined (a system which has less sensors than sources) and overdetermined model. The extensive experimental results confirm the validity and high performance of the developed algorithms, especially with usage of the multi-layer hierarchical NMF. Extension of the proposed algorithm to multidimensional Sparse Component Analysis and Smooth Component Analysis is also proposed.",
"",
"We present algorithms for topic modeling based on the geometry of cross-document word-frequency patterns. This perspective gains significance under the so-called separability condition. This is a condition on the existence of novel-words that are unique to each topic. We present a suite of highly efficient algorithms based on data-dependent and random projections of word-frequency patterns to identify novel words and associated topics. We will also discuss the statistical guarantees of the data-dependent projections method based on two mild assumptions on the prior density of topic document matrix. Our key insight here is that the maximum and minimum values of cross-document frequency patterns projected along any direction are associated with novel words. While our sample complexity bounds for topic recovery are similar to the state-of-the-art, the computational complexity of our random projection scheme scales linearly with the number of documents and the number of words per document. We present several experiments on synthetic and real-world datasets to demonstrate qualitative and quantitative merits of our scheme.",
"Topic modeling for large-scale distributed web-collections requires distributed techniques that account for both computational and communication costs. We consider topic modeling under the separability assumption and develop novel computationally efficient methods that provably achieve the statistical performance of the state-of-the-art centralized approaches while requiring insignificant communication between the distributed document collections. We achieve tradeoffs between communication and computation without actually transmitting the documents. Our scheme is based on exploiting the geometry of normalized word-word cooccurrence matrix and viewing each row of this matrix as a vector in a high-dimensional space. We relate the solid angle subtended by extreme points of the convex hull of these vectors to topic identities and construct distributed schemes to identify topics.",
"Is perception of the whole based on perception of its parts? There is psychological [1] and physiological [2,3] evidence for parts-based representations in the brain, and certain computational theories of object recognition rely on such representations [4,5]. But little is known about how brains or computers might learn the parts of objects. Here we demonstrate an algorithm for non-negative matrix factorization that is able to learn parts of faces and semantic features of text. This is in contrast to other methods, such as principal components analysis and vector quantization, that learn holistic, not parts-based, representations. Non-negative matrix factorization is distinguished from the other methods by its use of non-negativity constraints. These constraints lead to a parts-based representation because they allow only additive, not subtractive, combinations. When non-negative matrix factorization is implemented as a neural network, parts-based representations emerge by virtue of two properties: the firing rates of neurons are never negative and synaptic strengths do not change sign.",
"Unsupervised learning algorithms based on convex and conic encoders are proposed. The encoders find the closest convex or conic combination of basis vectors to the input. The learning algorithms produce basis vectors that minimize the reconstruction error of the encoders. The convex algorithm develops locally linear models of the input, while the conic algorithm discovers features. Both algorithms are used to model handwritten digits and compared with vector quantization and principal component analysis. The neural network implementations involve feedback connections that project a reconstruction back to the input layer.",
"Non-negative matrix factorization (NMF) has previously been shown to be a useful decomposition for multivariate data. Two different multiplicative algorithms for NMF are analyzed. They differ only slightly in the multiplicative factor used in the update rules. One algorithm can be shown to minimize the conventional least squares error while the other minimizes the generalized Kullback-Leibler divergence. The monotonic convergence of both algorithms can be proven using an auxiliary function analogous to that used for proving convergence of the Expectation-Maximization algorithm. The algorithms can also be interpreted as diagonally rescaled gradient descent, where the rescaling factor is optimally chosen to ensure convergence."
]
} |
1706.04097 | 2626994141 | Non-negative matrix factorization is a basic tool for decomposing data into the feature and weight matrices under non-negativity constraints, and in practice is often solved in the alternating minimization framework. However, it is unclear whether such algorithms can recover the ground-truth feature matrix when the weights for different features are highly correlated, which is common in applications. This paper proposes a simple and natural alternating gradient descent based algorithm, and shows that with a mild initialization it provably recovers the ground-truth in the presence of strong correlations. In most interesting cases, the correlation can be in the same order as the highest possible. Our analysis also reveals its several favorable features including robustness to noise. We complement our theoretical results with empirical studies on semi-synthetic datasets, demonstrating its advantage over several popular methods in recovering the ground-truth. | For theoretical analysis, @cite_8 provided a fixed-parameter tractable algorithm for NMF using algebraic equations. They also provided matching hardness results: namely they show there is no algorithm running in time @math unless there is a sub-exponential running time algorithm for 3-SAT. @cite_8 also studied NMF under separability assumptions about the features, and @cite_19 studied NMF under related assumptions. The most related work is @cite_22 , which analyzed an alternating minimization type algorithm. However, the result only holds with strong assumptions about the distribution of the weight @math , in particular, with the assumption that the coordinates of @math are independent. | {
"cite_N": [
"@cite_19",
"@cite_22",
"@cite_8"
],
"mid": [
"2464779257",
"",
"2111604514"
],
"abstract": [
"The Noisy Non-negative Matrix factorization (NMF) is: given a data matrix A (d × n), find non-negative matrices B, C (d × k, k × n respy.) so that A = BC + N, where N is a noise matrix. Existing polynomial time algorithms with proven error guarantees require each column N_{.,j} to have l_1 norm much smaller than ||(BC)_{.,j}||_1, which could be very restrictive. In important applications of NMF such as Topic Modeling as well as theoretical noise models (eg. Gaussian with high σ), almost every column N_{.,j} violates this condition. We introduce the heavy noise model which only requires the average noise over large subsets of columns to be small. We initiate a study of Noisy NMF under the heavy noise model. We show that our noise model subsumes noise models of theoretical and practical interest (for eg. Gaussian noise of maximum possible σ). We then devise an algorithm TSVDNMF which under certain assumptions on B,C, solves the problem under heavy noise. Our error guarantees match those of previous algorithms. Our running time of O((n + d)^2 k) is substantially better than the O(n^3 d) for the previous best. Our assumption on B is weaker than the \"Separability\" assumption made by all previous results. We provide empirical justification for our assumptions on C. We also provide the first proof of identifiability (uniqueness of B) for noisy NMF which is not based on separability and does not use hard to check geometric conditions. Our algorithm outperforms earlier polynomial time algorithms both in time and error, particularly in the presence of high noise.",
"",
"The Nonnegative Matrix Factorization (NMF) problem has a rich history spanning quantum mechanics, probability theory, data analysis, polyhedral combinatorics, communication complexity, demography, chemometrics, etc. In the past decade NMF has become enormously popular in machine learning, where the factorization is computed using a variety of local search heuristics. Vavasis recently proved that this problem is NP-complete. We initiate a study of when this problem is solvable in polynomial time. Consider a nonnegative m x n matrix @math and a target inner-dimension r. Our results are the following: - We give a polynomial-time algorithm for exact and approximate NMF for every constant r. Indeed NMF is most interesting in applications precisely when r is small. We complement this with a hardness result, that if exact NMF can be solved in time (nm)^{o(r)}, 3-SAT has a sub-exponential time algorithm. Hence, substantial improvements to the above algorithm are unlikely. - We give an algorithm that runs in time polynomial in n, m and r under the separability condition identified by Donoho and Stodden in 2003. The algorithm may be practical since it is simple and noise tolerant (under benign assumptions). Separability is believed to hold in many practical settings. To the best of our knowledge, this last result is the first polynomial-time algorithm that provably works under a non-trivial condition on the input matrix and we believe that this will be an interesting and important direction for future work."
]
} |
1706.04097 | 2626994141 | Non-negative matrix factorization is a basic tool for decomposing data into the feature and weight matrices under non-negativity constraints, and in practice is often solved in the alternating minimization framework. However, it is unclear whether such algorithms can recover the ground-truth feature matrix when the weights for different features are highly correlated, which is common in applications. This paper proposes a simple and natural alternating gradient descent based algorithm, and shows that with a mild initialization it provably recovers the ground-truth in the presence of strong correlations. In most interesting cases, the correlation can be in the same order as the highest possible. Our analysis also reveals its several favorable features including robustness to noise. We complement our theoretical results with empirical studies on semi-synthetic datasets, demonstrating its advantage over several popular methods in recovering the ground-truth. | Topic modeling is a popular generative model for text data @cite_4 @cite_12 . Usually, the model results in NMF type optimization problems with @math , and a popular heuristic is , which can be regarded as alternating minimization in KL-divergence. Recently, there is a line of theoretical work analyzing tensor decomposition @cite_14 @cite_0 @cite_20 or combinatorial methods @cite_13 . These either need strong structural assumptions on the word-topic matrix @math , or need to know the distribution of the weight @math , which is usually infeasible in applications. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_0",
"@cite_13",
"@cite_12",
"@cite_20"
],
"mid": [
"2952389066",
"1880262756",
"2105617746",
"",
"2138107145",
""
],
"abstract": [
"Topic Modeling is an approach used for automatic comprehension and classification of data in a variety of settings, and perhaps the canonical application is in uncovering thematic structure in a corpus of documents. A number of foundational works both in machine learning and in theory have suggested a probabilistic model for documents, whereby documents arise as a convex combination of (i.e. distribution on) a small number of topic vectors, each topic vector being a distribution on words (i.e. a vector of word-frequencies). Similar models have since been used in a variety of application areas; the Latent Dirichlet Allocation or LDA model is especially popular. Theoretical studies of topic modeling focus on learning the model's parameters assuming the data is actually generated from it. Existing approaches for the most part rely on Singular Value Decomposition (SVD), and consequently have one of two limitations: these works need to either assume that each document contains only one topic, or else can only recover the span of the topic vectors instead of the topic vectors themselves. This paper formally justifies Nonnegative Matrix Factorization (NMF) as a main tool in this context, which is an analog of SVD where all vectors are nonnegative. Using this tool we give the first polynomial-time algorithm for learning topic models without the above two limitations. The algorithm uses a fairly mild assumption about the underlying topic matrix called separability, which is usually found to hold in real-life data. A compelling feature of our algorithm is that it generalizes to models that incorporate topic-topic correlations, such as the Correlated Topic Model and the Pachinko Allocation Model. We hope that this paper will motivate further theoretical results that use NMF as a replacement for SVD - just as NMF has come to replace SVD in many applications.",
"We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.",
"Topic models provide a useful method for dimensionality reduction and exploratory data analysis in large text corpora. Most approaches to topic model inference have been based on a maximum likelihood objective. Efficient algorithms exist that approximate this objective, but they have no provable guarantees. Recently, algorithms have been introduced that provide provable bounds, but these algorithms are not practical because they are inefficient and not robust to violations of model assumptions. In this paper we present an algorithm for topic model inference that is both provable and practical. The algorithm produces results comparable to the best MCMC implementations while running orders of magnitude faster.",
"",
"Probabilistic topic modeling provides a suite of tools for the unsupervised analysis of large collections of documents. Topic modeling algorithms can uncover the underlying themes of a collection and decompose its documents according to those themes. This analysis can be used for corpus exploration, document search, and a variety of prediction problems. In this tutorial, I will review the state-of-the-art in probabilistic topic models. I will describe the three components of topic modeling: (1) Topic modeling assumptions (2) Algorithms for computing with topic models (3) Applications of topic models In (1), I will describe latent Dirichlet allocation (LDA), which is one of the simplest topic models, and then describe a variety of ways that we can build on it. These include dynamic topic models, correlated topic models, supervised topic models, author-topic models, bursty topic models, Bayesian nonparametric topic models, and others. I will also discuss some of the fundamental statistical ideas that are used in building topic models, such as distributions on the simplex, hierarchical Bayesian modeling, and models of mixed-membership. In (2), I will review how we compute with topic models. I will describe approximate posterior inference for directed graphical models using both sampling and variational inference, and I will discuss the practical issues and pitfalls in developing these algorithms for topic models. Finally, I will describe some of our most recent work on building algorithms that can scale to millions of documents and documents arriving in a stream. In (3), I will discuss applications of topic models. These include applications to images, music, social networks, and other data in which we hope to uncover hidden patterns. I will describe some of our recent work on adapting topic modeling algorithms to collaborative filtering, legislative modeling, and bibliometrics without citations. 
Finally, I will discuss some future directions and open research problems in topic models.",
""
]
} |
1706.04052 | 2605226555 | Monte Carlo tree search (MCTS) is extremely popular in computer Go, which determines each action by enormous simulations in a broad and deep search tree. However, human experts select most actions by pattern analysis and careful evaluation rather than brute search of millions of future interactions. In this paper, we propose a computer Go system that follows experts' way of thinking and playing. Our system consists of two parts. The first part is a novel deep alternative neural network (DANN) used to generate candidates of next move. Compared with existing deep convolutional neural network (DCNN), DANN inserts a recurrent layer after each convolutional layer and stacks them in an alternative manner. We show such a setting can preserve more contexts of local features and their evolutions, which are beneficial for move prediction. The second part is a long-term evaluation (LTE) module used to provide a reliable evaluation of candidates rather than a single probability from a move predictor. This is consistent with human experts' nature of playing since they can foresee tens of steps to give an accurate estimation of candidates. In our system, for each candidate, LTE calculates a cumulative reward after several future interactions when local variations are settled. Combining criteria from the two parts, our system determines the optimal choice of next move. For more comprehensive experiments, we introduce a new professional Go dataset (PGD), consisting of 253233 professional records. Experiments on GoGoD and PGD datasets show the DANN can substantially improve performance of move prediction over pure DCNN. When combining LTE, our system outperforms most relevant approaches and open engines based on MCTS. | It is a best-first search method based on randomized explorations of search space, which does not require a positional evaluation function @cite_30 . 
Using the results of previous explorations, the algorithm gradually grows a game tree, and successively becomes better at accurately estimating the values of the optimal moves @cite_4 @cite_10 . Such programs have led to strong amateur level performance, but a considerable gap still remains between top professionals and the strongest computer programs. The majority of recent progress is due to increased quantity and quality of prior knowledge, which is used to bias the search towards more promising states, and it is widely believed that this knowledge is the major bottleneck towards further progress. The first successful current Go program @cite_22 was based on MCTS. Their basic algorithm was augmented in MoGo @cite_28 to leverage prior knowledge to bootstrap value estimates in the search tree. Training with professionals' moves was enhanced in Fuego @cite_18 and Pachi @cite_15 and achieved strong amateur level. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_4",
"@cite_22",
"@cite_28",
"@cite_15",
"@cite_10"
],
"mid": [
"2126316555",
"2101101673",
"1551466210",
"1625390266",
"",
"202421935",
"1714211023"
],
"abstract": [
"Monte Carlo tree search (MCTS) is a recently proposed search method that combines the precision of tree search with the generality of random sampling. It has received considerable interest due to its spectacular success in the difficult problem of computer Go, but has also proved beneficial in a range of other domains. This paper is a survey of the literature to date, intended to provide a snapshot of the state of the art after the first five years of MCTS research. We outline the core algorithm's derivation, impart some structure on the many variations and enhancements that have been proposed, and summarize the results from the key game and nongame domains to which MCTS methods have been applied. A number of open research questions indicate that the field is ripe for future work.",
"FUEGO is both an open-source software framework and a state-of-the-art program that plays the game of Go. The framework supports developing game engines for full-information two-player board games, and is used successfully in a substantial number of projects. The FUEGO Go program became the first program to win a game against a top professional player in 9 × 9 Go. It has won a number of strong tournaments against other programs, and is competitive for 19 × 19 as well. This paper gives an overview of the development and current state of the FUEGO project. It describes the reusable components of the software framework and specific algorithms used in the Go engine.",
"We describe two Go programs, Olga and Oleg, developed by a Monte-Carlo approach that is simpler than Bruegmann’s (1993) approach. Our method is based on Abramson (1990). We performed experiments, to assess ideas on (1) progressive pruning, (2) all moves as first heuristic, (3) temperature, (4) simulated annealing, and (5) depth-two tree search within the Monte-Carlo framework. Progressive pruning and the all moves as first heuristic are good speed-up enhancements that do not deteriorate the level of the program too much. Then, using a constant temperature is an adequate and simple heuristic that is about as good as simulated annealing. The depth-two heuristic gives deceptive results at the moment. The results of our Monte-Carlo programs against knowledge-based programs on 9x9 boards are promising. Finally, the ever-increasing power of computers lead us to think that Monte-Carlo approaches are worth considering for computer Go in the future.",
"For large state-space Markovian Decision Problems Monte-Carlo planning is one of the few viable approaches to find near-optimal solutions. In this paper we introduce a new algorithm, UCT, that applies bandit ideas to guide Monte-Carlo planning. In finite-horizon or discounted MDPs the algorithm is shown to be consistent and finite sample bounds are derived on the estimation error due to sampling. Experimental results show that in several domains, UCT is significantly more efficient than its alternatives.",
"",
"We present a state of the art implementation of the Monte Carlo Tree Search algorithm for the game of Go. Our Pachi software is currently one of the strongest open source Go programs, competing at the top level with other programs and playing evenly against advanced human players. We describe our implementation and choice of published algorithms as well as three notable original improvements: (1) an adaptive time control algorithm, (2) dynamic komi, and (3) the usage of the criticality statistic. We also present new methods to achieve efficient scaling both in terms of multiple threads and multiple machines in a cluster.",
"A Monte-Carlo evaluation consists in estimating a position by averaging the outcome of several random continuations. The method can serve as an evaluation function at the leaves of a min-max tree. This paper presents a new framework to combine tree search with Monte-Carlo evaluation, that does not separate between a min-max phase and a Monte-Carlo phase. Instead of backing-up the min-max value close to the root, and the average value at some depth, a more general backup operator is defined that progressively changes from averaging to minmax as the number of simulations grows. This approach provides a finegrained control of the tree growth, at the level of individual simulations, and allows efficient selectivity. The resulting algorithm was implemented in a 9 × 9 Go-playing program, Crazy Stone, that won the 10th KGS computer-Go tournament."
]
} |
1706.04208 | 2950892788 | One of the main challenges in reinforcement learning (RL) is generalisation. In typical deep RL methods this is achieved by approximating the optimal value function with a low-dimensional representation using a deep network. While this approach works well in many domains, in domains where the optimal value function cannot easily be reduced to a low-dimensional representation, learning can be very slow and unstable. This paper contributes towards tackling such challenging domains, by proposing a new method, called Hybrid Reward Architecture (HRA). HRA takes as input a decomposed reward function and learns a separate value function for each component reward function. Because each component typically only depends on a subset of all features, the corresponding value function can be approximated more easily by a low-dimensional representation, enabling more effective learning. We demonstrate HRA on a toy-problem and the Atari game Ms. Pac-Man, where HRA achieves above-human performance. | Reward function decomposition has been studied among others by @cite_9 and @cite_3 . This earlier work focusses on strategies that achieve optimal behavior. Our work is aimed at improving learning-efficiency by using simpler value functions and relaxing optimality requirements. | {
"cite_N": [
"@cite_9",
"@cite_3"
],
"mid": [
"2109910161",
"72400652"
],
"abstract": [
"Learning, planning, and representing knowledge at multiple levels of temporal abstraction are key, longstanding challenges for AI. In this paper we consider how these challenges can be addressed within the mathematical framework of reinforcement learning and Markov decision processes (MDPs). We extend the usual notion of action in this framework to include options—closed-loop policies for taking action over a period of time. Examples of options include picking up an object, going to lunch, and traveling to a distant city, as well as primitive actions such as muscle twitches and joint torques. Overall, we show that options enable temporally abstract knowledge and action to be included in the reinforcement learning framework in a natural and general way. In particular, we show that options may be used interchangeably with primitive actions in planning methods such as dynamic programming and in learning methods such as Q-learning. Formally, a set of options defined over an MDP constitutes a semi-Markov decision process (SMDP), and the theory of SMDPs provides the foundation for the theory of options. However, the most interesting issues concern the interplay between the underlying MDP and the SMDP and are thus beyond SMDP theory. We present results for three such cases: 1) we show that the results of planning with options can be used during execution to interrupt options and thereby perform even better than planned, 2) we introduce new intra-option methods that are able to learn about an option from fragments of its execution, and 3) we propose a notion of subgoal that can be used to improve the options themselves. All of these results have precursors in the existing literature; the contribution of this paper is to establish them in a simpler and more general setting with fewer changes to the existing reinforcement learning framework. 
In particular, we show that these results can be obtained without committing to (or ruling out) any particular approach to state abstraction, hierarchy, function approximation, or the macro-utility problem.",
"We present a new algorithm, GM-Sarsa(O), for finding approximate solutions to multiple-goal reinforcement learning problems that are modeled as composite Markov decision processes. According to our formulation different sub-goals are modeled as MDPs that are coupled by the requirement that they share actions. Existing reinforcement learning algorithms address similar problem formulations by first finding optimal policies for the component MDPs, and then merging these into a policy for the composite task. The problem with such methods is that policies that are optimized separately may or may not perform well when they are merged into a composite solution. Instead of searching for optimal policies for the component MDPs in isolation, our approach finds good policies in the context of the composite task."
]
} |
1706.04206 | 2625372621 | This paper advances the state of the art in text understanding of medical guidelines by releasing two new annotated clinical guidelines datasets, and establishing baselines for using machine learning to extract condition-action pairs. In contrast to prior work that relies on manually created rules, we report experiments with several supervised machine learning techniques to classify sentences as to whether they express conditions and actions. We show the limitations and possible extensions of this work on text mining of medical guidelines. | Research on CIGs started about 20 years ago and became more popular in the late-1990s and early 2000s. Different approaches have been developed to represent and execute clinical guidelines over patient-specific clinical data. They include document-centric models, decision trees and probabilistic models, and "Task-Network Models" (TNMs) @cite_5 , which represent guideline knowledge in hierarchical structures containing networks of clinical actions and decisions that unfold over time. Serban et al. @cite_2 developed a methodology for extracting and using linguistic patterns in guideline formalization, to aid the human modellers in guideline formalization and reduce the human modelling effort. Kaiser et al. @cite_8 developed a method to identify activities to be performed during a treatment which are described in a guideline document. They used relations of the UMLS Semantic Network @cite_6 to identify these activities in a guideline document. Wenzina and Kaiser @cite_4 developed a rule-based method to automatically identify conditional activities in guideline documents. They achieved a recall of 75% | {
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_6",
"@cite_2",
"@cite_5"
],
"mid": [
"2198508096",
"89803584",
"141694252",
"",
"1603806745"
],
"abstract": [
"Translating clinical practice guidelines into a computer-interpretable format is a challenging and laborious task. In this project we focus on supporting the early steps of the modeling process by automatically identifying conditional activities in guideline documents in order to model them automatically in further consequence. Therefore, we developed a rule-based, heuristic method that combines domain-independent information extraction rules and semantic pattern rules. The classification also uses a weighting coefficient to verify the relevance of the sentence in the context of other information aspects, such as effects, intentions, etc. Our evaluation results show that even with a small set of training data, we achieved a recall of 75% and a precision of 88%. This outcome shows that this method supports the modeling task and eases the translation of CPGs into a semi-formal model.",
"Clinical practice guidelines are important instruments to support clinical care. In this work we analysed how activities are formulated in these documents and we tried to represent the activities using patterns based on semantic relations. For this we used the Unified Medical Language System (UMLS) and in particular its Semantic Network. Out of it we generated a collection of semantic patterns that can be used to automatically identify activities. In a study we showed that these semantic patterns can cover a large part of the control flow. Using such patterns cannot only support the modelling of computer-interpretable clinical practice guidelines, but can also improve the general comprehension which treatment procedures have to be accomplished. This can also lead to improved compliance of clinical practice guidelines.",
"Abstract The UMLS network of semantic types is one component of NLM's evolving Unified Medical Language System. This paper discusses the role of the semantic network in the overall system, then describes the evolution and current status of the network, and, finally, concludes with a discussion of plans for further development.",
"",
"Objectives: Many groups are developing computer-interpretable clinical guidelines (CIGs) for use during clinical encounters. CIGs use “Task-Network Models” for representation but differ in their approaches to addressing particular modeling challenges. We have studied similarities and differences between CIGs in order to identify issues that must be resolved before a consensus on a set of common components can be developed. @PARASPLIT Design: We compared six models: Asbru, EON, GLIF, GUIDE, PRODIGY, and PROforma. Collaborators from groups that created these models represented, in their own formalisms, portions of two guidelines: the American College of Physicians–American Society of Internal Medicine's guideline for managing chronic cough and the Sixth Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure. @PARASPLIT Measurements: We compared the models according to eight components that capture the structure of CIGs. The components enable modelers to encode guidelines as plans that organize decision and action tasks in networks. They also enable the encoded guidelines to be linked with patient data—a key requirement for enabling patient-specific decision support. @PARASPLIT Results: We found consensus on many components, including plan organization, expression language, conceptual medical record model, medical concept model, and data abstractions. Differences were most apparent in underlying decision models, goal representation, use of scenarios, and structured medical actions. @PARASPLIT Conclusion: We identified guideline components that the CIG community could adopt as standards. Some of the participants are pursuing standardization of these components under the auspices of HL7."
]
} |
1706.04115 | 2624677889 | We show that relation extraction can be reduced to answering simple reading comprehension questions, by associating one or more natural-language questions with each relation slot. This reduction has several advantages: we can (1) learn relation-extraction models by extending recent neural reading-comprehension techniques, (2) build very large training sets for those models by combining relation-specific crowd-sourced questions with distant supervision, and even (3) do zero-shot learning by extracting new relation types that are only specified at test-time, for which we have no labeled training examples. Experiments on a Wikipedia slot-filling task demonstrate that the approach can generalize to new questions for known relation types with high accuracy, and that zero-shot generalization to unseen relation types is possible, at lower accuracy levels, setting the bar for future work on this task. | Open information extraction (open IE) @cite_0 is a schemaless approach for extracting facts from text. While open IE systems need no relation-specific training data, they often treat different phrasings as different relations. In this work, we hope to extract a canonical slot value independent of how the original text is phrased. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2250635077"
],
"abstract": [
"Models that learn to represent textual and knowledge base relations in the same continuous latent space are able to perform joint inferences among the two kinds of relations and obtain high accuracy on knowledge base completion (, 2013). In this paper we propose a model that captures the compositional structure of textual relations, and jointly optimizes entity, knowledge base, and textual relation representations. The proposed model significantly improves performance over a model that does not share parameters among textual relations with common sub-structure."
]
} |
1706.04115 | 2624677889 | We show that relation extraction can be reduced to answering simple reading comprehension questions, by associating one or more natural-language questions with each relation slot. This reduction has several advantages: we can (1) learn relation-extraction models by extending recent neural reading-comprehension techniques, (2) build very large training sets for those models by combining relation-specific crowd-sourced questions with distant supervision, and even (3) do zero-shot learning by extracting new relation types that are only specified at test-time, for which we have no labeled training examples. Experiments on a Wikipedia slot-filling task demonstrate that the approach can generalize to new questions for known relation types with high accuracy, and that zero-shot generalization to unseen relation types is possible, at lower accuracy levels, setting the bar for future work on this task. | Universal schema @cite_10 represents open IE extractions and knowledge-base facts in a single matrix, whose rows are entity pairs and columns are relations. The redundant schema (each knowledge-base relation may overlap with multiple natural-language relations) enables knowledge-base population via matrix completion techniques. predict facts for entity pairs that were not observed in the original matrix; this is equivalent to extracting seen relation types with unseen entities (see ). and use inference rules to predict hidden knowledge-base relations from observed natural-language relations. This setting is akin to generalizing across different manifestations of the same relation (see ) since a natural-language description of each target relation appears in the training data. Moreover, the information about the unseen relations is a set of explicit inference rules, as opposed to implicit natural-language questions. | {
"cite_N": [
"@cite_10"
],
"mid": [
"1852412531"
],
"abstract": [
"© 2013 Association for Computational Linguistics. Traditional relation extraction predicts relations within some fixed and finite target schema. Machine learning approaches to this task require either manual annotation or, in the case of distant supervision, existing structured sources of the same schema. The need for existing datasets can be avoided by using a universal schema: the union of all involved schemas (surface form predicates as in OpenIE, and relations in the schemas of preexisting databases). This schema has an almost unlimited set of relations (due to surface forms), and supports integration with existing structured data (through the relation types of existing databases). To populate a database of such schema we present matrix factorization models that learn latent feature vectors for entity tuples and relations. We show that such latent models achieve substantially higher accuracy than a traditional classification approach. More importantly, by operating simultaneously on relations observed in text and in pre-existing structured DBs such as Freebase, we are able to reason about unstructured and structured data in mutually-supporting ways. By doing so our approach outperforms stateof- the-Art distant supervision."
]
} |
1706.04146 | 2762072470 | The evolution of mobile malware poses a serious threat to smartphone security. Today, sophisticated attackers can adapt by maximally sabotaging machine-learning classifiers via polluting training data, rendering most recent machine learning-based malware detection tools (such as Drebin, DroidAPIMiner, and MaMaDroid) ineffective. In this paper, we explore the feasibility of constructing crafted malware samples; examine how machine-learning classifiers can be misled under three different threat models; then conclude that injecting carefully crafted data into training data can significantly reduce detection accuracy. To tackle the problem, we propose KuafuDet, a two-phase learning enhancing approach that learns mobile malware by adversarial detection. KuafuDet includes an offline training phase that selects and extracts features from the training set, and an online detection phase that utilizes the classifier trained by the first phase. To further address the adversarial environment, these two phases are intertwined through a self-adaptive learning scheme, wherein an automated camouflage detector is introduced to filter the suspicious false negatives and feed them back into the training phase. We finally show that KuafuDet can significantly reduce false negatives and boost the detection accuracy by at least 15%. Experiments on more than 250,000 mobile applications demonstrate that KuafuDet is scalable and can be highly effective as a standalone system. | Most recently, Chen et al. @cite_0 suggest the use of semantic features of mobile apps to retain classifier value over time, building on the intuition that certain semantic attributes of mobile malware are invariant. Experiments verify that the incorporation of semantic features can significantly improve the performance of Android malware classification. Deo et al. @cite_1 propose to assess the quality of binary classification by using probabilistic predictors. 
Although both approaches consider retraining, neither addresses an adversarial environment. | {
"cite_N": [
"@cite_0",
"@cite_1"
],
"mid": [
"2473365483",
"2534602447"
],
"abstract": [
"Automatic malware classifiers often perform badly on the detection of new malware, i.e., their robustness is poor. We study the machine-learning-based mobile malware classifiers and reveal one reason: the input features used by these classifiers can't capture general behavioural patterns of malware instances. We extract the best-performing syntax-based features like permissions and API calls, and some semantics-based features like happen-befores and unwanted behaviours, and train classifiers using popular supervised and semi-supervised learning methods. By comparing their classification performance on industrial datasets collected across several years, we demonstrate that using semantics-based features can dramatically improve robustness of malware classifiers.",
"Malware evolves perpetually and relies on increasingly so- phisticated attacks to supersede defense strategies. Data-driven approaches to malware detection run the risk of becoming rapidly antiquated. Keeping pace with malware requires models that are periodically enriched with fresh knowledge, commonly known as retraining. In this work, we propose the use of Venn-Abers predictors for assessing the quality of binary classification tasks as a first step towards identifying antiquated models. One of the key benefits behind the use of Venn-Abers predictors is that they are automatically well calibrated and offer probabilistic guidance on the identification of nonstationary populations of malware. Our framework is agnostic to the underlying classification algorithm and can then be used for building better retraining strategies in the presence of concept drift. Results obtained over a timeline-based evaluation with about 90K samples show that our framework can identify when models tend to become obsolete."
]
} |
1706.03863 | 2952749860 | Large databases are often organized by hand-labeled metadata, or criteria, which are expensive to collect. We can use unsupervised learning to model database variation, but these models are often high dimensional, complex to parameterize, or require expert knowledge. We learn low-dimensional continuous criteria via interactive ranking, so that the novice user need only describe the relative ordering of examples. This is formed as semi-supervised label propagation in which we maximize the information gained from a limited number of examples. Further, we actively suggest data points to the user to rank in a more informative way than existing work. Our efficient approach allows users to interactively organize thousands of data points along 1D and 2D continuous sliders. We experiment with datasets of imagery and geometry to demonstrate that our tool is useful for quickly assessing and organizing the content of large databases. | This related problem takes the results of context- or text-based search and refines the query and/or the retrieved results with user interaction. Personalized faceted search @cite_10 exploits relevant meta-data and suggests new keywords to refine the current search. User behavior is modeled probabilistically and tuned to maximize the expected utility of the facet. A rich literature shows the importance of re-ranking in search @cite_10 ; however, existing algorithms in this context focus on retrieval rather than organizing databases along criteria. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2105481243"
],
"abstract": [
"Faceted search is becoming a popular method to allow users to interactively search and navigate complex information spaces. A faceted search system presents users with key-value metadata that is used for query refinement. While popular in e-commerce and digital libraries, not much research has been conducted on which metadata to present to a user in order to improve the search experience. Nor are there repeatable benchmarks for evaluating a faceted search engine. This paper proposes the use of collaborative filtering and personalization to customize the search interface to each user's behavior. This paper also proposes a utility based framework to evaluate the faceted interface. In order to demonstrate these ideas and better understand personalized faceted search, several faceted search algorithms are proposed and evaluated using the novel evaluation methodology."
]
} |
1706.03863 | 2952749860 | Large databases are often organized by hand-labeled metadata, or criteria, which are expensive to collect. We can use unsupervised learning to model database variation, but these models are often high dimensional, complex to parameterize, or require expert knowledge. We learn low-dimensional continuous criteria via interactive ranking, so that the novice user need only describe the relative ordering of examples. This is formed as semi-supervised label propagation in which we maximize the information gained from a limited number of examples. Further, we actively suggest data points to the user to rank in a more informative way than existing work. Our efficient approach allows users to interactively organize thousands of data points along 1D and 2D continuous sliders. We experiment with datasets of imagery and geometry to demonstrate that our tool is useful for quickly assessing and organizing the content of large databases. | Jain and Varma @cite_12 assume click behaviour relates to interest in the query, and use a click count model to predict relevant rankings. Zha et al. @cite_20 proposed a similar approach for visual query suggestion. The COPE system @cite_11 interactively refines search queries by users stating whether the results match their information need, which then weights image features for future searches. These approaches re-rank imagery based on user confirmation, and so they typically focus on the discrete problem of whether a retrieved result is a match @cite_10 . Our goal is to learn criteria outright from sparse user labels and a high-dimensional model. | {
"cite_N": [
"@cite_20",
"@cite_10",
"@cite_12",
"@cite_11"
],
"mid": [
"1966043174",
"2105481243",
"2137675680",
"76668060"
],
"abstract": [
"Query suggestion is an effective approach to bridge the Intention Gap between the users' search intents and queries. Most existing search engines are able to automatically suggest a list of textual query terms based on users' current query input, which can be called Textual Query Suggestion. This article proposes a new query suggestion scheme named Visual Query Suggestion (VQS) which is dedicated to image search. VQS provides a more effective query interface to help users to precisely express their search intents by joint text and image suggestions. When a user submits a textual query, VQS first provides a list of suggestions, each containing a keyword and a collection of representative images in a dropdown menu. Once the user selects one of the suggestions, the corresponding keyword will be added to complement the initial query as the new textual query, while the image collection will be used as the visual query to further represent the search intent. VQS then performs image search based on the new textual query using text search techniques, as well as content-based visual retrieval to refine the search results by using the corresponding images as query examples. We compare VQS against three popular image search engines, and show that VQS outperforms these engines in terms of both the quality of query suggestion and the search performance.",
"Faceted search is becoming a popular method to allow users to interactively search and navigate complex information spaces. A faceted search system presents users with key-value metadata that is used for query refinement. While popular in e-commerce and digital libraries, not much research has been conducted on which metadata to present to a user in order to improve the search experience. Nor are there repeatable benchmarks for evaluating a faceted search engine. This paper proposes the use of collaborative filtering and personalization to customize the search interface to each user's behavior. This paper also proposes a utility based framework to evaluate the faceted interface. In order to demonstrate these ideas and better understand personalized faceted search, several faceted search algorithms are proposed and evaluated using the novel evaluation methodology.",
"Our objective is to improve the performance of keyword based image search engines by re-ranking their original results. To this end, we address three limitations of existing search engines in this paper. First, there is no straight-forward, fully automated way of going from textual queries to visual features. Image search engines therefore primarily rely on static and textual features for ranking. Visual features are mainly used for secondary tasks such as finding similar images. Second, image rankers are trained on query-image pairs labeled with relevance judgments determined by human experts. Such labels are well known to be noisy due to various factors including ambiguous queries, unknown user intent and subjectivity in human judgments. This leads to learning a sub-optimal ranker. Finally, a static ranker is typically built to handle disparate user queries. The ranker is therefore unable to adapt its parameters to suit the query at hand which again leads to sub-optimal results. We demonstrate that all of these problems can be mitigated by employing a re-ranking algorithm that leverages aggregate user click data. We hypothesize that images clicked in response to a query are mostly relevant to the query. We therefore re-rank the original search results so as to promote images that are likely to be clicked to the top of the ranked list. Our re-ranking algorithm employs Gaussian Process regression to predict the normalized click count for each image, and combines it with the original ranking score. Our approach is shown to significantly boost the performance of the Bing image search engine on a wide range of tail queries.",
"Most multimedia retrieval services e.g. YouTube, Flickr, Google etc. rely on users searching using textual queries or examples. However, this solution is inadequate when there is no text, very little text, the text is in a foreign language or the user cannot form textual a query. In order to overcome these shortcomings we have developed an image retrieval system called COPE (COnversational Picture Exploration) that can use a number of different preference feedback mechanisms, inspired by conversational recommendation paradigms, for image retrieval. In COPE users are presented with a small number of search results and simply have to express whether these results match their information need. We examine the suitability of a number of feedback approaches for semiautomatic and interactive image retrieval. For interactive retrieval we compared our preference based approaches to text based search (where we consider text to be an upper bound), our results indicate that users prefer preference based search to text based search and in some cases our approaches can outperform text based search."
]
} |
1706.03863 | 2952749860 | Large databases are often organized by hand-labeled metadata, or criteria, which are expensive to collect. We can use unsupervised learning to model database variation, but these models are often high dimensional, complex to parameterize, or require expert knowledge. We learn low-dimensional continuous criteria via interactive ranking, so that the novice user need only describe the relative ordering of examples. This is formed as semi-supervised label propagation in which we maximize the information gained from a limited number of examples. Further, we actively suggest data points to the user to rank in a more informative way than existing work. Our efficient approach allows users to interactively organize thousands of data points along 1D and 2D continuous sliders. We experiment with datasets of imagery and geometry to demonstrate that our tool is useful for quickly assessing and organizing the content of large databases. | For data retrieval, Parikh and Grauman @cite_26 learn discrete ranking functions via RankSVM from existing user labels. This restricts exploration to known criteria, whereas we discover criteria interactively. Murray et al. @cite_3 presented a database for visual analysis that is characterized by abstract 'aesthetic' features, while Caicedo et al. @cite_18 exploited user preference for image enhancement. Reinert et al. @cite_16 use interaction to visually arrange a small image database into an aesthetic overview, which is orthogonal to the efficient exploration that we pursue. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_3",
"@cite_16"
],
"mid": [
"2139842681",
"",
"2078807908",
"2049431120"
],
"abstract": [
"This paper presents methods for personalization of image enhancement, which could be deployed in photo editing software and also in cloud-based image sharing services. We observe that users do have different preferences for enhancing images and that there are groups of people that share similarities in preferences. Our goal is to predict enhancements for novel images belonging to a particular user based on her specific taste, to facilitate the retouching process on large image collections. To that end, we describe an enhancement framework that can learn user preferences in an individual or collaborative way. The proposed system is based on a novel interactive application that allows to collect user's enhancement preferences. We propose algorithms to predict personalized enhancements by learning a preference model from the provided information. Furthermore, the algorithm improves prediction performance as more enhancement examples are progressively added. We conducted experiments via Amazon Mechanical Turk to collect preferences from a large group of people. Results show that the proposed framework can suggest image enhancements more targeted to individual users than commercial tools with global auto-enhancement functionalities.",
"",
"With the ever-expanding volume of visual content available, the ability to organize and navigate such content by aesthetic preference is becoming increasingly important. While still in its nascent stage, research into computational models of aesthetic preference already shows great potential. However, to advance research, realistic, diverse and challenging databases are needed. To this end, we introduce a new large-scale database for conducting Aesthetic Visual Analysis: AVA. It contains over 250,000 images along with a rich variety of meta-data including a large number of aesthetic scores for each image, semantic labels for over 60 categories as well as labels related to photographic style. We show the advantages of AVA with respect to existing databases in terms of scale, diversity, and heterogeneity of annotations. We then describe several key insights into aesthetic preference afforded by AVA. Finally, we demonstrate, through three applications, how the large scale of AVA can be leveraged to improve performance on existing preference tasks.",
"We propose an approach to \"pack\" a set of two-dimensional graphical primitives into a spatial layout that follows artistic goals. We formalize this process as projecting from a high-dimensional feature space into a 2D layout. Our system does not expose the control of this projection to the user in form of sliders or similar interfaces. Instead, we infer the desired layout of all primitives from interactive placement of a small subset of example primitives. To produce a pleasant distribution of primitives with spatial extend, we propose a novel generalization of Centroidal Voronoi Tesselation which equalizes the distances between boundaries of nearby primitives. Compared to previous primitive distribution approaches our GPU implementation achieves both better fidelity and asymptotically higher speed. A user study evaluates the system's usability."
]
} |
1706.03863 | 2952749860 | Large databases are often organized by hand-labeled metadata, or criteria, which are expensive to collect. We can use unsupervised learning to model database variation, but these models are often high dimensional, complex to parameterize, or require expert knowledge. We learn low-dimensional continuous criteria via interactive ranking, so that the novice user need only describe the relative ordering of examples. This is formed as semi-supervised label propagation in which we maximize the information gained from a limited number of examples. Further, we actively suggest data points to the user to rank in a more informative way than existing work. Our efficient approach allows users to interactively organize thousands of data points along 1D and 2D continuous sliders. We experiment with datasets of imagery and geometry to demonstrate that our tool is useful for quickly assessing and organizing the content of large databases. | Our interactive criteria definition on high-dimensional data does not compare directly to existing supervised criteria learning systems. CueFlik @cite_1 @cite_4 learns on binary labels, and WhittleSearch @cite_26 attempts to re-rank data along existing criteria rather than generate criteria from scratch. We improve upon WhittleSearch's underlying RankSVM techniques when adapted to our scenario (Sec. ). | {
"cite_N": [
"@cite_26",
"@cite_1",
"@cite_4"
],
"mid": [
"",
"2124332348",
"2119770538"
],
"abstract": [
"",
"Web image search is difficult in part because a handful of keywords are generally insufficient for characterizing the visual properties of an image. Popular engines have begun to provide tags based on simple characteristics of images (such as tags for black and white images or images that contain a face), but such approaches are limited by the fact that it is unclear what tags end users want to be able to use in examining Web image search results. This paper presents CueFlik, a Web image search application that allows end users to quickly create their own rules for re ranking images based on their visual characteristics. End users can then re rank any future Web image search results according to their rule. In an experiment we present in this paper, end users quickly create effective rules for such concepts as \"product photos\", \"portraits of people\", and \"clipart\". When asked to conceive of and create their own rules, participants create such rules as \"sports action shot\" with images from queries for \"basketball\" and \"football\". CueFlik represents both a promising new approach to Web image search and an important study in end user interactive machine learning.",
"End-user interactive machine learning is a promising tool for enhancing human productivity and capabilities with large unstructured data sets. Recent work has shown that we can create end-user interactive machine learning systems for specific applications. However, we still lack a generalized understanding of how to design effective end-user interaction with interactive machine learning systems. This work presents three explorations in designing for effective end-user interaction with machine learning in CueFlik, a system developed to support Web image search. These explorations demonstrate that interactions designed to balance the needs of end-users and machine learning algorithms can significantly improve the effectiveness of end-user interactive machine learning."
]
} |
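The CueFlik-style rule learning described in the row above (ranking images by similarity to user-labelled positive vs. negative examples) can be sketched in plain Python. This is an illustrative simplification, not the paper's actual distance-metric learner; the function name and the mean-distance scoring rule are our own choices:

```python
import math

def rerank_by_binary_labels(items, positives, negatives):
    """Rank feature vectors by closeness to user-labelled positives
    relative to negatives (a crude stand-in for learned re-ranking rules).

    items, positives, negatives: lists of equal-length feature tuples.
    Returns items sorted so the most 'positive-like' come first.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def score(x):
        pos = sum(dist(x, p) for p in positives) / len(positives)
        neg = sum(dist(x, n) for n in negatives) / len(negatives)
        return neg - pos  # large when close to positives, far from negatives

    return sorted(items, key=score, reverse=True)
```

A real system would instead learn a weighted distance metric over many visual features, but the interaction loop (label a few, re-rank the rest) is the same.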
1706.03863 | 2952749860 | Large databases are often organized by hand-labeled metadata, or criteria, which are expensive to collect. We can use unsupervised learning to model database variation, but these models are often high dimensional, complex to parameterize, or require expert knowledge. We learn low-dimensional continuous criteria via interactive ranking, so that the novice user need only describe the relative ordering of examples. This is formed as semi-supervised label propagation in which we maximize the information gained from a limited number of examples. Further, we actively suggest data points to the user to rank in a more informative way than existing work. Our efficient approach allows users to interactively organize thousands of data points along 1D and 2D continuous sliders. We experiment with datasets of imagery and geometry to demonstrate that our tool is useful for quickly assessing and organizing the content of large databases. | Chen et al. @cite_23 apply active learning to remove inconsistency from existing crowdsourced labels. Their non-interactive approach is approximate in information gain, while our interactive approach is exact given model assumptions. Shen and Lin @cite_2 essentially use RankSVM for bipartite ranking, with active learning based on single-point and pair closeness. This is very similar to a predictive-variance baseline, which may accidentally pick uninformative outliers. Our new measure is unbiased by outliers. Our active learning approach is complementary to human-in-the-loop active learning approaches, e.g., Branson et al. @cite_9 . Their task is to select, for a given data point, which questions minimize class-conditional distribution uncertainty (i.e., a 20-questions game). Our problem is to suggest which data points to label. Fogarty et al. @cite_1 learn image retrieval criteria from binary labels, e.g., outdoor vs. indoor, by iterative refinement of distance measures between data points. 
Their active label suggestion was extended by Amershi et al. @cite_4 by adopting a Gaussian process model on distance measures. We ask users to provide rank labels, and increase performance over Amershi et al. (Sec. ). | {
"cite_N": [
"@cite_4",
"@cite_9",
"@cite_1",
"@cite_23",
"@cite_2"
],
"mid": [
"2119770538",
"",
"2124332348",
"2164545125",
"2118363113"
],
"abstract": [
"End-user interactive machine learning is a promising tool for enhancing human productivity and capabilities with large unstructured data sets. Recent work has shown that we can create end-user interactive machine learning systems for specific applications. However, we still lack a generalized understanding of how to design effective end-user interaction with interactive machine learning systems. This work presents three explorations in designing for effective end-user interaction with machine learning in CueFlik, a system developed to support Web image search. These explorations demonstrate that interactions designed to balance the needs of end-users and machine learning algorithms can significantly improve the effectiveness of end-user interactive machine learning.",
"",
"Web image search is difficult in part because a handful of keywords are generally insufficient for characterizing the visual properties of an image. Popular engines have begun to provide tags based on simple characteristics of images (such as tags for black and white images or images that contain a face), but such approaches are limited by the fact that it is unclear what tags end users want to be able to use in examining Web image search results. This paper presents CueFlik, a Web image search application that allows end users to quickly create their own rules for re ranking images based on their visual characteristics. End users can then re rank any future Web image search results according to their rule. In an experiment we present in this paper, end users quickly create effective rules for such concepts as \"product photos\", \"portraits of people\", and \"clipart\". When asked to conceive of and create their own rules, participants create such rules as \"sports action shot\" with images from queries for \"basketball\" and \"football\". CueFlik represents both a promising new approach to Web image search and an important study in end user interactive machine learning.",
"Inferring rankings over elements of a set of objects, such as documents or images, is a key learning problem for such important applications as Web search and recommender systems. Crowdsourcing services provide an inexpensive and efficient means to acquire preferences over objects via labeling by sets of annotators. We propose a new model to predict a gold-standard ranking that hinges on combining pairwise comparisons via crowdsourcing. In contrast to traditional ranking aggregation methods, the approach learns about and folds into consideration the quality of contributions of each annotator. In addition, we minimize the cost of assessment by introducing a generalization of the traditional active learning scenario to jointly select the annotator and pair to assess while taking into account the annotator quality, the uncertainty over ordering of the pair, and the current model uncertainty. We formalize this as an active learning strategy that incorporates an exploration-exploitation tradeoff and implement it using an efficient online Bayesian updating scheme. Using simulated and real-world data, we demonstrate that the active learning strategy achieves significant reductions in labeling cost while maintaining accuracy.",
"Bipartite ranking is a fundamental ranking problem that learns to order relevant instances ahead of irrelevant ones. The pair-wise approach for bi-partite ranking construct a quadratic number of pairs to solve the problem, which is infeasible for large-scale data sets. The point-wise approach, albeit more efficient, often results in inferior performance. That is, it is difficult to conduct bipartite ranking accurately and efficiently at the same time. In this paper, we develop a novel active sampling scheme within the pair-wise approach to conduct bipartite ranking efficiently. The scheme is inspired from active learning and can reach a competitive ranking performance while focusing only on a small subset of the many pairs during training. Moreover, we propose a general Combined Ranking and Classification (CRC) framework to accurately conduct bipartite ranking. The framework unifies point-wise and pair-wise approaches and is simply based on the idea of treating each instance point as a pseudo-pair. Experiments on 14 real-word large-scale data sets demonstrate that the proposed algorithm of Active Sampling within CRC, when coupled with a linear Support Vector Machine, usually outperforms state-of-the-art point-wise and pair-wise ranking approaches in terms of both accuracy and efficiency."
]
} |
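The pair-wise ranking approach discussed in the row above (RankSVM and its variants) rests on a standard reduction: a preference "a ranks above b" becomes a classification constraint on the difference vector a - b. The sketch below uses a perceptron update instead of RankSVM's hinge loss to stay dependency-free; function names are illustrative:

```python
def train_rank_perceptron(pairs, dim, epochs=100, lr=0.1):
    """Learn w such that w.a > w.b for each preference pair (a, b).

    This is the classic reduction of ranking to classification on
    difference vectors, which RankSVM also uses (with a hinge loss
    and regularization instead of perceptron updates).
    """
    w = [0.0] * dim
    for _ in range(epochs):
        for a, b in pairs:
            diff = [x - y for x, y in zip(a, b)]
            margin = sum(wi * di for wi, di in zip(w, diff))
            if margin <= 0:  # pair mis-ordered: nudge w toward diff
                w = [wi + lr * di for wi, di in zip(w, diff)]
    return w

def rank(w, items):
    """Sort items by learned score, best first."""
    return sorted(items, key=lambda x: sum(wi * xi for wi, xi in zip(w, x)),
                  reverse=True)
```

The quadratic number of pairs this reduction produces is exactly the scalability problem that the active-sampling scheme in @cite_2 targets.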
1706.03863 | 2952749860 | Large databases are often organized by hand-labeled metadata, or criteria, which are expensive to collect. We can use unsupervised learning to model database variation, but these models are often high dimensional, complex to parameterize, or require expert knowledge. We learn low-dimensional continuous criteria via interactive ranking, so that the novice user need only describe the relative ordering of examples. This is formed as semi-supervised label propagation in which we maximize the information gained from a limited number of examples. Further, we actively suggest data points to the user to rank in a more informative way than existing work. Our efficient approach allows users to interactively organize thousands of data points along 1D and 2D continuous sliders. We experiment with datasets of imagery and geometry to demonstrate that our tool is useful for quickly assessing and organizing the content of large databases. | Our approach can be interpreted as using interaction to embed data into a high-level concept space. Existing work in this area focuses on category- or cluster-level supervision. Wilber et al. @cite_30 receive triplet constraints from users ( @math : object @math should be closer to object @math than it is to @math ) to learn pair-wise similarity kernels that are used in @math -SNE-type embedding @cite_21 . We ask users to provide rank labels and emphasize continuous parameterization. | {
"cite_N": [
"@cite_30",
"@cite_21"
],
"mid": [
"2951944936",
"2187089797"
],
"abstract": [
"This paper presents our work on \"SNaCK,\" a low-dimensional concept embedding algorithm that combines human expertise with automatic machine similarity kernels. Both parts are complimentary: human insight can capture relationships that are not apparent from the object's visual similarity and the machine can help relieve the human from having to exhaustively specify many constraints. We show that our SNaCK embeddings are useful in several tasks: distinguishing prime and nonprime numbers on MNIST, discovering labeling mistakes in the Caltech UCSD Birds (CUB) dataset with the help of deep-learned features, creating training datasets for bird classifiers, capturing subjective human taste on a new dataset of 10,000 foods, and qualitatively exploring an unstructured set of pictographic characters. Comparisons with the state-of-the-art in these tasks show that SNaCK produces better concept embeddings that require less human supervision than the leading methods.",
"We present a new technique called “t-SNE” that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large datasets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of datasets and compare it with many other non-parametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the datasets."
]
} |
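The triplet constraints mentioned above ("object i should be closer to j than to k") can be turned into an embedding by simple gradient steps on a hinge over squared distances. This is a heavily simplified sketch of triplet-based embedding, not the SNaCK or t-SNE objective itself; all names and the margin/step-size choices are our own:

```python
import random

def sqdist(a, b):
    """Squared Euclidean distance between two point lists."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def embed_from_triplets(n, triplets, dim=2, epochs=500, lr=0.05,
                        margin=1.0, seed=0):
    """Place n points in dim-D so that, for each triplet (i, j, k),
    point i ends up closer to j than to k by at least `margin`."""
    rng = random.Random(seed)
    X = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n)]
    for _ in range(epochs):
        for i, j, k in triplets:
            if sqdist(X[i], X[j]) + margin > sqdist(X[i], X[k]):
                # hinge violated: move i toward j and away from k
                for d in range(dim):
                    X[i][d] += lr * (X[j][d] - X[k][d])
    return X
```

Real systems combine many such constraints with a machine-computed similarity kernel; the hinge-and-nudge loop here only conveys the basic mechanics.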
1706.03946 | 2626796053 | Automatic summarisation is a popular approach to reduce a document to its main arguments. Recent research in the area has focused on neural approaches to summarisation, which can be very data-hungry. However, few large datasets exist and none for the traditionally popular domain of scientific publications, which opens up challenging research avenues centered on encoding large, complex documents. In this paper, we introduce a new dataset for summarisation of computer science publications by exploiting a large resource of author provided summaries and show straightforward ways of extending it further. We develop models on the dataset making use of both neural sentence encoding and traditionally used summarisation features and show that models which encode sentences as well as their local and global context perform best, significantly outperforming well-established baseline methods. | Datasets for extractive summarisation often emerged as part of evaluation campaigns for summarisation of news, organised by the Document Understanding Conference (DUC), and the Text Analysis Conference (TAC). DUC proposed single-document summarisation @cite_15 , whereas TAC datasets are for multi-document summarisation @cite_16 @cite_27 . All of the datasets contain roughly 500 documents. | {
"cite_N": [
"@cite_27",
"@cite_15",
"@cite_16"
],
"mid": [
"2571932860",
"1598682430",
"2182572585"
],
"abstract": [
"",
"There has been a long history of research in text summarization by both the text retrieval and the natural language processing communities, but evaluation of this research has always presented problems. In 2001 NIST launched a new text summarization evaluation effort, guided by a roadmap from the research community and sponsored by the DARPA TIDES project. This paper is a report of the first formal evaluation in a new conference called the Document Understanding Conference (DUC).",
"The summarization track at the Text Analysis Conference (TAC) is a direct continuation of the Document Understanding Conference (DUC) series of workshops, focused on providing common data and evaluation framework for research in automatic summarization. In the TAC 2008 summarization track, the main task was to produce two 100-word summaries from two related sets of 10 documents, where the second summary was an update summary. While all of the 71 submitted runs were automatically scored with the ROUGE and BE metrics, NIST assessors manually evaluated only 57 of the submitted runs for readability, content, and overall responsiveness."
]
} |
1706.03946 | 2626796053 | Automatic summarisation is a popular approach to reduce a document to its main arguments. Recent research in the area has focused on neural approaches to summarisation, which can be very data-hungry. However, few large datasets exist and none for the traditionally popular domain of scientific publications, which opens up challenging research avenues centered on encoding large, complex documents. In this paper, we introduce a new dataset for summarisation of computer science publications by exploiting a large resource of author provided summaries and show straightforward ways of extending it further. We develop models on the dataset making use of both neural sentence encoding and traditionally used summarisation features and show that models which encode sentences as well as their local and global context perform best, significantly outperforming well-established baseline methods. | The largest summarisation dataset (1 million documents) to date is the DailyMail CNN dataset @cite_0 , first used for single-document abstractive summarisation by @cite_23 , enabling research on data-intensive sequence encoding methods. | {
"cite_N": [
"@cite_0",
"@cite_23"
],
"mid": [
"2949615363",
"2963929190"
],
"abstract": [
"Teaching machines to read natural language documents remains an elusive challenge. Machine reading systems can be tested on their ability to answer questions posed on the contents of documents that they have seen, but until now large scale training and test datasets have been missing for this type of evaluation. In this work we define a new methodology that resolves this bottleneck and provides large scale supervised reading comprehension data. This allows us to develop a class of attention based deep neural networks that learn to read real documents and answer complex questions with minimal prior knowledge of language structure.",
"In this work, we model abstractive text summarization using Attentional EncoderDecoder Recurrent Neural Networks, and show that they achieve state-of-the-art performance on two different corpora. We propose several novel models that address critical problems in summarization that are not adequately modeled by the basic architecture, such as modeling key-words, capturing the hierarchy of sentence-toword structure, and emitting words that are rare or unseen at training time. Our work shows that many of our proposed models contribute to further improvement in performance. We also propose a new dataset consisting of multi-sentence summaries, and establish performance benchmarks for further research."
]
} |
1706.03946 | 2626796053 | Automatic summarisation is a popular approach to reduce a document to its main arguments. Recent research in the area has focused on neural approaches to summarisation, which can be very data-hungry. However, few large datasets exist and none for the traditionally popular domain of scientific publications, which opens up challenging research avenues centered on encoding large, complex documents. In this paper, we introduce a new dataset for summarisation of computer science publications by exploiting a large resource of author provided summaries and show straightforward ways of extending it further. We develop models on the dataset making use of both neural sentence encoding and traditionally used summarisation features and show that models which encode sentences as well as their local and global context perform best, significantly outperforming well-established baseline methods. | Early work on extractive summarisation focuses exclusively on easy to compute statistics, e.g. word frequency @cite_30 , location in the document @cite_28 , and TF-IDF @cite_13 . Supervised learning methods which classify sentences in a document binarily as summary sentences or not soon became popular @cite_26 . Exploration of more cues such as sentence position @cite_17 , sentence length @cite_29 , words in the title, presence of proper nouns, word frequency @cite_4 and event cues @cite_2 followed. | {
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_4",
"@cite_28",
"@cite_29",
"@cite_2",
"@cite_13",
"@cite_17"
],
"mid": [
"1974339500",
"2101390659",
"1975579663",
"2092246763",
"1602831581",
"1620608722",
"1551225669",
"2591784896"
],
"abstract": [
"Excerpts of technical papers and magazine articles that serve the purposes of conventional abstracts have been created entirely by automatic means. In the exploratory research described, the complete text of an article in machine-readable form is scanned by an IBM 704 data-processing machine and analyzed in accordance with a standard program. Statistical information derived from word frequency and distribution is used by the machine to compute a relative measure of significance, first for individual words and then for sentences. Sentences scoring highest in significance are extracted and printed out to become the \"auto-abstract.\"",
"●● ● ● ● To summarize is to reduce in complexity, and hence in length, while retaining some of the essential qualities of the original. This paper focusses on document extracts, a particular kind of computed document summary. Document extracts consisting of roughly 20 of the original cart be as informative as the full text of a document, which suggests that even shorter extracts may be useful indicative summmies. The trends in our results are in agreement with those of Edmundson who used a subjectively weighted combination of features as opposed to training the feature weights using a corpus.",
"We present an exploration of generative probabilistic models for multi-document summarization. Beginning with a simple word frequency based model (Nenkova and Vanderwende, 2005), we construct a sequence of models each injecting more structure into the representation of document set content and exhibiting ROUGE gains along the way. Our final model, HierSum, utilizes a hierarchical LDA-style model (, 2004) to represent content specificity as a hierarchy of topic vocabulary distributions. At the task of producing generic DUC-style summaries, HierSum yields state-of-the-art ROUGE performance and in pairwise user evaluation strongly outperforms (2007)'s state-of-the-art discriminative system. We also explore HierSum's capacity to produce multiple 'topical summaries' in order to facilitate content discovery and navigation.",
"Machine techniques for reducing technical documents to their essential discriminating indices are investigated. Human scanning patterns in selecting \"topic sentences\" and phrases composed of nouns and modifiers were simulated by computer program. The amount of condensation resulting from each method and the relative uniformity in indices are examined. It is shown that the coordinated index provided by the phrase is the more meaningful and discriminating.",
"Abstract This paper describes the functionality of MEAD, a comprehensive, public domain, open source, multidocument multilingual summarization environment that has been thus far downloaded by more than 500 organizations. MEAD has been used in a variety of summarization applications ranging from summarization for mobile devices to Web page summarization within a search engine and to novelty detection.",
"",
"Vast amounts of text material are now available in machine-readable form for automatic processing. Here, approaches are outlined for manipulating and accessing texts in arbitrary subject areas in accordance with user needs. In particular, methods are given for determining text themes, traversing texts selectively, and extracting summary statements that reflect text content.",
""
]
} |
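The classic extractive features listed in the row above (word frequency, sentence position, etc.) combine into a simple sentence scorer. The sketch below mixes a Luhn-style frequency score with a position bonus; the stoplist and weighting are arbitrary illustrative choices, not any particular published system:

```python
from collections import Counter

def extract_summary(sentences, k=2):
    """Score sentences by content-word frequency plus a position bonus
    for early sentences, then keep the top k in document order."""
    stop = {"the", "a", "of", "in", "and", "to", "is"}
    words = [w for s in sentences for w in s.lower().split() if w not in stop]
    freq = Counter(words)

    def score(i, s):
        content = [w for w in s.lower().split() if w not in stop]
        f = sum(freq[w] for w in content) / max(len(content), 1)
        return f + 1.0 / (i + 1)  # position cue: earlier sentences score higher

    ranked = sorted(range(len(sentences)),
                    key=lambda i: score(i, sentences[i]), reverse=True)
    return [sentences[i] for i in sorted(ranked[:k])]
```

Supervised variants replace the hand-set combination with weights learned from labelled summary/non-summary sentences.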
1706.03946 | 2626796053 | Automatic summarisation is a popular approach to reduce a document to its main arguments. Recent research in the area has focused on neural approaches to summarisation, which can be very data-hungry. However, few large datasets exist and none for the traditionally popular domain of scientific publications, which opens up challenging research avenues centered on encoding large, complex documents. In this paper, we introduce a new dataset for summarisation of computer science publications by exploiting a large resource of author provided summaries and show straightforward ways of extending it further. We develop models on the dataset making use of both neural sentence encoding and traditionally used summarisation features and show that models which encode sentences as well as their local and global context perform best, significantly outperforming well-established baseline methods. | Recent approaches to extractive summarisation have mostly focused on neural approaches, based on bag of word embeddings approaches @cite_12 @cite_18 or encoding whole documents with CNNs and or RNNs @cite_8 . | {
"cite_N": [
"@cite_18",
"@cite_12",
"@cite_8"
],
"mid": [
"2250483006",
"2250361277",
"2307381258"
],
"abstract": [
"The most successful approaches to extractive text summarization seek to maximize bigram coverage subject to a budget constraint. In this work, we propose instead to maximize semantic volume. We embed each sentence in a semantic space and construct a summary by choosing a subset of sentences whose convex hull maximizes volume in that space. We provide a greedy algorithm based on the GramSchmidt process to efficiently perform volume maximization. Our method outperforms the state-of-the-art summarization approaches on benchmark datasets.",
"In this study, we consider a summarization method using the document level similarity based on embeddings, or distributed representations of words, where we assume that an embedding of each word can represent its “meaning.” We formalize our task as the problem of maximizing a submodular function defined by the negative summation of the nearest neighbors’ distances on embedding distributions, each of which represents a set of word embeddings in a document. We proved the submodularity of our objective function and that our problem is asymptotically related to the KL-divergence between the probability density functions that correspond to a document and its summary in a continuous space. An experiment using a real dataset demonstrated that our method performed better than the existing method based on sentence-level similarity.",
"Traditional approaches to extractive summarization rely heavily on humanengineered features. In this work we propose a data-driven approach based on neural networks and continuous sentence features. We develop a general framework for single-document summarization composed of a hierarchical document encoder and an attention-based extractor. This architecture allows us to develop different classes of summarization models which can extract sentences or words. We train our models on large scale corpora containing hundreds of thousands of document-summary pairs 1 . Experimental results on two summarization datasets demonstrate that our models obtain results comparable to the state of the art without any access to linguistic annotation."
]
} |
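The semantic-volume idea cited above (@cite_18) admits a compact greedy sketch: repeatedly pick the sentence embedding with the largest component orthogonal to the span of the sentences already chosen, via Gram-Schmidt residuals. This is our reading of the greedy step, not the authors' code:

```python
import numpy as np

def greedy_volume(embeddings, k):
    """Greedy volume maximisation: at each step select the embedding
    with the largest Gram-Schmidt residual norm w.r.t. the chosen set."""
    X = np.asarray(embeddings, dtype=float)
    chosen, basis = [], []
    for _ in range(k):
        residuals = X.copy()
        for b in basis:  # project out the span of already-chosen sentences
            residuals = residuals - np.outer(residuals @ b, b)
        norms = np.linalg.norm(residuals, axis=1)
        norms[chosen] = -1.0  # never pick the same sentence twice
        i = int(np.argmax(norms))
        chosen.append(i)
        r = residuals[i]
        if np.linalg.norm(r) > 1e-12:
            basis.append(r / np.linalg.norm(r))
    return chosen
```

Intuitively, near-duplicate sentences have tiny residuals once one of them is selected, so the summary covers diverse regions of the semantic space.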
1706.03741 | 2626804490 | For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. In this work, we explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments. We show that this approach can effectively solve complex RL tasks without access to the reward function, including Atari games and simulated robot locomotion, while providing feedback on less than one percent of our agent's interactions with the environment. This reduces the cost of human oversight far enough that it can be practically applied to state-of-the-art RL systems. To demonstrate the flexibility of our approach, we show that we can successfully train complex novel behaviors with about an hour of human time. These behaviors and environments are considerably more complex than any that have been previously learned from human feedback. | Our algorithm follows the same basic approach as @cite_1 and @cite_0 . They consider continuous domains with four degrees of freedom and small discrete domains, where they can assume that the reward is linear in the expectations of hand-coded features. We instead consider physics tasks with dozens of degrees of freedom and Atari tasks with no hand-engineered features; the complexity of our environments force us to use different RL algorithms and reward models, and to cope with different algorithmic tradeoffs. One notable difference is that @cite_1 and @cite_0 elicit preferences over whole trajectories rather than short clips. So although we gather about two orders of magnitude more comparisons, our experiments require less than one order of magnitude more human time. Other differences focus on changing our training procedure to cope with the nonlinear reward models and modern deep RL, for example using asynchronous training and ensembling. | {
"cite_N": [
"@cite_0",
"@cite_1"
],
"mid": [
"2050985708",
"2154023516"
],
"abstract": [
"A mathematical model is developed in an attempt to relate errors in multiple stimulus-response situations to psychological inter-stimulus and inter response distances. The fundamental assumptions are (a) that the stimulus and response confusions go on independently of each other, (b) that the probability of a stimulus confusion is an exponential decay function of the psychological distance between the stimuli, and (c) that the probability of a response confusion is an exponential decay function of the psychological distance between the responses. The problem of the operational definition of psychological distance is considered in some detail.",
"This paper makes a first step toward the integration of two subfields of machine learning, namely preference learning and reinforcement learning (RL). An important motivation for a preference-based approach to reinforcement learning is the observation that in many real-world domains, numerical feedback signals are not readily available, or are defined arbitrarily in order to satisfy the needs of conventional RL algorithms. Instead, we propose an alternative framework for reinforcement learning, in which qualitative reward signals can be directly used by the learner. The framework may be viewed as a generalization of the conventional RL framework in which only a partial order between policies is required instead of the total order induced by their respective expected long-term reward. Therefore, building on novel methods for preference learning, our general goal is to equip the RL agent with qualitative policy models, such as ranking functions that allow for sorting its available actions from most to least promising, as well as algorithms for learning such models from qualitative feedback. As a proof of concept, we realize a first simple instantiation of this framework that defines preferences based on utilities observed for trajectories. To that end, we build on an existing method for approximate policy iteration based on roll-outs. While this approach is based on the use of classification methods for generalization and policy learning, we make use of a specific type of preference learning method called label ranking. Advantages of preference-based approximate policy iteration are illustrated by means of two case studies."
]
} |
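The preference-based reward learning discussed above, where a reward model is fit to human comparisons of trajectory clips, can be sketched with a linear reward and the Bradley-Terry logistic model. This is a deliberate simplification of the deep-network version (no ensembling, no asynchronous training); the function name and hyperparameters are illustrative:

```python
import numpy as np

def train_reward(clip_pairs, prefs, dim, epochs=500, lr=0.1):
    """Fit a linear reward r(s) = w.s from human clip comparisons.

    clip_pairs: list of (clip_a, clip_b); each clip is an array of state
    features, one row per timestep. prefs[i] = 1 if clip_a was preferred.
    The objective is the Bradley-Terry logistic likelihood on summed
    clip rewards, maximised by gradient ascent.
    """
    w = np.zeros(dim)
    for _ in range(epochs):
        for (a, b), p in zip(clip_pairs, prefs):
            ra, rb = np.sum(a @ w), np.sum(b @ w)
            prob_a = 1.0 / (1.0 + np.exp(rb - ra))  # P(clip a preferred)
            grad = (p - prob_a) * (a.sum(axis=0) - b.sum(axis=0))
            w += lr * grad  # ascend the log-likelihood
    return w
```

The learned reward then replaces the environment reward for a standard RL algorithm, which is what lets short clip comparisons stand in for a hand-written reward function.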
1706.03864 | 2627090667 | Cameras are the defacto sensor. The growing demand for real-time and low-power computer vision, coupled with trends towards high-efficiency heterogeneous systems, has given rise to a wide range of image processing acceleration techniques at the camera node and in the cloud. In this paper, we characterize two novel camera systems that employ acceleration techniques to push the extremes of energy and performance scaling, and explore the computation-communication tradeoffs in their designs. The first case study is a camera system designed to detect and authenticate individual faces, running solely on energy harvested from RFID readers. We design a multi-accelerator SoC design operating in the sub-mW range, and evaluate it with real-world workloads to show performance and energy efficiency improvements over a general purpose microprocessor. The second camera system is a 16-camera rig processing over 32 Gb s of data to produce real-time 3D-360 degree virtual reality video. We design a multi-FPGA processing pipeline that outperforms CPU and GPU configurations by up to 10 @math in the computation time, producing panoramic stereo video directly from the camera rig at 30 frames per second. We find that an early data reduction step, either before complex processing or offloading, is the most critical optimization for in-camera systems. | In-camera processing is not new, and prior work has introduced many in-camera processors @cite_17 . Our analysis applies in-camera processing pipelines to two highly-constrained applications: low-power face authentication and real-time VR video streaming. In this section, we discuss our general approach to analyzing image processing pipelines and review notable related work in computation offloading, image processing hardware, and similar accelerator designs. | {
"cite_N": [
"@cite_17"
],
"mid": [
"2071879227"
],
"abstract": [
"Myriad 2 is a multicore, always-on system on chip that supports computational imaging and visual awareness for mobile, wearable, and embedded applications. The vision processing unit incorporates parallelism, instruction set architecture, and microarchitectural features to provide highly sustainable performance efficiency across a range of computationalImaging and computer vision applications, including those with low latency requirements on the order of milliseconds."
]
} |
1706.03864 | 2627090667 | Cameras are the defacto sensor. The growing demand for real-time and low-power computer vision, coupled with trends towards high-efficiency heterogeneous systems, has given rise to a wide range of image processing acceleration techniques at the camera node and in the cloud. In this paper, we characterize two novel camera systems that employ acceleration techniques to push the extremes of energy and performance scaling, and explore the computation-communication tradeoffs in their designs. The first case study is a camera system designed to detect and authenticate individual faces, running solely on energy harvested from RFID readers. We design a multi-accelerator SoC design operating in the sub-mW range, and evaluate it with real-world workloads to show performance and energy efficiency improvements over a general purpose microprocessor. The second camera system is a 16-camera rig processing over 32 Gb s of data to produce real-time 3D-360 degree virtual reality video. We design a multi-FPGA processing pipeline that outperforms CPU and GPU configurations by up to 10 @math in the computation time, producing panoramic stereo video directly from the camera rig at 30 frames per second. We find that an early data reduction step, either before complex processing or offloading, is the most critical optimization for in-camera systems. | In-camera processing pipelines can include core blocks, which are essential to the application, and optional blocks, which may not directly affect results but can improve efficiency by filtering or pre-processing data. One optional block is the motion detection block we use in our face authentication pipeline. While the core block of the pipeline, face authentication, operates on every input frame, an optional motion detection block can reduce the bandwidth and ensuing power consumption of core blocks. Offloading image processing computation from mobile devices to the cloud is well-explored in mobile systems @cite_26 . 
The opposing case for "onloading" computation, or keeping computation at the sensor, has grown more popular due to increased image processing demand and privacy concerns @cite_23 @cite_27 . Our approach explores the tradeoff space between offload and onload for two constrained camera systems. | {
"cite_N": [
"@cite_27",
"@cite_26",
"@cite_23"
],
"mid": [
"",
"2159694746",
"104534290"
],
"abstract": [
"",
"This article discusses the challenges in computer systems research posed by the emerging field of pervasive computing. It first examines the relationship of this new field to its predecessors: distributed systems and mobile computing. It then identifies four new research thrusts: effective use of smart spaces, invisibility, localized scalability, and masking uneven conditioning. Next, it sketches a couple of hypothetical pervasive computing scenarios, and uses them to identify key capabilities missing from today's systems. The article closes with a discussion of the research necessary to develop these capabilities.",
"Much has been said recently on off-loading computations from the phone. In particular, workloads such as speech and visual recognition that involve models based on \"big data\" are thought to be prime candidates for cloud processing. We posit that the next few years will see the arrival of mobile usages that require continuous processing of audio and video data from wearable devices. We argue that these usages are unlikely to flourish unless substantial computation is moved back on to the phone. We outline possible solutions to the problems inherent in such a move. We advocate a close partnership between perception and systems researchers to realize these usages."
]
} |
1706.03864 | 2627090667 | Cameras are the de facto sensor. The growing demand for real-time and low-power computer vision, coupled with trends towards high-efficiency heterogeneous systems, has given rise to a wide range of image processing acceleration techniques at the camera node and in the cloud. In this paper, we characterize two novel camera systems that employ acceleration techniques to push the extremes of energy and performance scaling, and explore the computation-communication tradeoffs in their designs. The first case study is a camera system designed to detect and authenticate individual faces, running solely on energy harvested from RFID readers. We design a multi-accelerator SoC design operating in the sub-mW range, and evaluate it with real-world workloads to show performance and energy efficiency improvements over a general purpose microprocessor. The second camera system is a 16-camera rig processing over 32 Gb/s of data to produce real-time 3D-360 degree virtual reality video. We design a multi-FPGA processing pipeline that outperforms CPU and GPU configurations by up to 10 @math in the computation time, producing panoramic stereo video directly from the camera rig at 30 frames per second. We find that an early data reduction step, either before complex processing or offloading, is the most critical optimization for in-camera systems. | We investigate the use of a face detection accelerator as an optional block to filter data in a face authentication pipeline. Hardware acceleration for the Viola-Jones face detection algorithm has been well-explored for FPGAs and GPUs @cite_18 @cite_13 @cite_25 . While also present a neural network design using Haar filters as a first step, our work performs a more holistic characterization to optimize the full camera pipeline @cite_7 . | {
"cite_N": [
"@cite_18",
"@cite_13",
"@cite_25",
"@cite_7"
],
"mid": [
"2109609085",
"",
"2105827278",
"1782590233"
],
"abstract": [
"Robust and rapid object detection is one of the great challenges in the field of computer vision. This paper proposes a hardware architecture suitable for object detection by Viola and Jones based on an AdaBoost learning algorithm with Haar-like features as weak classifiers. Our architecture realizes rapid and robust detection with two major features: hybrid parallel execution and an image scaling method. The first exploits the cascade structure of classifiers, in which classifiers located near the beginning of the cascade are used more frequently than subsequent classifiers. We assign more resources to the former classifiers to execute in parallel than subsequent classifiers. This dramatically improves the total processing speed without a great increase in circuit area. The second feature is a method of scaling input images instead of scaling classifiers. This increases the efficiency of hardware implementation while retaining a high detection rate. In addition we implement the proposed architecture on a Virtex-5 FPGA to show that it achieves real-time object detection at 30 frames per second on VGA video.",
"",
"Face detection is an important aspect for biometrics, video surveillance and human computer interaction. We present a multi-GPU implementation of the Viola-Jones face detection algorithm that meets the performance of the fastest known FPGA implementation. The GPU design offers far lower development costs, but the FPGA implementation consumes less power. We discuss the performance programming required to realize our design, and describe future research directions.",
"Most face databases have been created under controlled conditions to facilitate the study of specific parameters on the face recognition problem. These parameters include such variables as position, pose, lighting, background, camera quality, and gender. While there are many applications for face recognition technology in which one can control the parameters of image acquisition, there are also many applications in which the practitioner has little or no control over such parameters. This database, Labeled Faces in the Wild, is provided as an aid in studying the latter, unconstrained, recognition problem. The database contains labeled face photographs spanning the range of conditions typically encountered in everyday life. The database exhibits “natural” variability in factors such as pose, lighting, race, accessories, occlusions, and background. In addition to describing the details of the database, we provide specific experimental paradigms for which the database is suitable. This is done in an effort to make research performed with the database as consistent and comparable as possible. We provide baseline results, including results of a state of the art face recognition system combined with a face alignment system. To facilitate experimentation on the database, we provide several parallel databases, including an aligned version."
]
} |
1706.03864 | 2627090667 | Cameras are the de facto sensor. The growing demand for real-time and low-power computer vision, coupled with trends towards high-efficiency heterogeneous systems, has given rise to a wide range of image processing acceleration techniques at the camera node and in the cloud. In this paper, we characterize two novel camera systems that employ acceleration techniques to push the extremes of energy and performance scaling, and explore the computation-communication tradeoffs in their designs. The first case study is a camera system designed to detect and authenticate individual faces, running solely on energy harvested from RFID readers. We design a multi-accelerator SoC design operating in the sub-mW range, and evaluate it with real-world workloads to show performance and energy efficiency improvements over a general purpose microprocessor. The second camera system is a 16-camera rig processing over 32 Gb/s of data to produce real-time 3D-360 degree virtual reality video. We design a multi-FPGA processing pipeline that outperforms CPU and GPU configurations by up to 10 @math in the computation time, producing panoramic stereo video directly from the camera rig at 30 frames per second. We find that an early data reduction step, either before complex processing or offloading, is the most critical optimization for in-camera systems. | NNs have been studied extensively for accomplishing face detection and recognition @cite_29 @cite_16 @cite_10 . Researchers are actively working to improve NN performance with custom hardware @cite_30 @cite_11 . ShiDianNao @cite_1 , specifically, is a CNN accelerator executed in-camera, where the accelerator is placed on the same chip as the image sensor processor, achieving 320mW power consumption. | {
"cite_N": [
"@cite_30",
"@cite_29",
"@cite_1",
"@cite_16",
"@cite_10",
"@cite_11"
],
"mid": [
"1968422655",
"",
"2067523571",
"2120284346",
"2145287260",
""
],
"abstract": [
"In this paper we present a scalable hardware architecture to implement large-scale convolutional neural networks and state-of-the-art multi-layered artificial vision systems. This system is fully digital and is a modular vision engine with the goal of performing real-time detection, recognition and segmentation of mega-pixel images. We present a performance comparison between a software, FPGA and ASIC implementation that shows a speed up in custom hardware implementations.",
"",
"In recent years, neural network accelerators have been shown to achieve both high energy efficiency and high performance for a broad application scope within the important category of recognition and mining applications. Still, both the energy efficiency and performance of such accelerators remain limited by memory accesses. In this paper, we focus on image applications, arguably the most important category among recognition and mining applications. The neural networks which are state-of-the-art for these applications are Convolutional Neural Networks (CNN), and they have an important property: weights are shared among many neurons, considerably reducing the neural network memory footprint. This property allows to entirely map a CNN within an SRAM, eliminating all DRAM accesses for weights. By further hoisting this accelerator next to the image sensor, it is possible to eliminate all remaining DRAM accesses, i.e., for inputs and outputs. In this paper, we propose such a CNN accelerator, placed next to a CMOS or CCD sensor. The absence of DRAM accesses combined with a careful exploitation of the specific data access patterns within CNNs allows us to design an accelerator which is 60× more energy efficient than the previous state-of-the-art neural network accelerator. We present a full design down to the layout at 65 nm, with a modest footprint of 4.86mm2 and consuming only 320mW, but still about 30× faster than high-end GPUs.",
"In this paper, we present a novel face detection approach based on a convolutional neural architecture, designed to robustly detect highly variable face patterns, rotated up to spl plusmn 20 degrees in image plane and turned up to spl plusmn 60 degrees, in complex real world images. The proposed system automatically synthesizes simple problem-specific feature extractors from a training set of face and nonface patterns, without making any assumptions or using any hand-made design concerning the features to extract or the areas of the face pattern to analyze. The face detection procedure acts like a pipeline of simple convolution and subsampling modules that treat the raw input image as a whole. We therefore show that an efficient face detection system does not require any costly local preprocessing before classification of image areas. The proposed scheme provides very high detection rate with a particularly low level of false positives, demonstrated on difficult test sets, without requiring the use of multiple networks for handling difficult cases. We present extensive experimental results illustrating the efficiency of the proposed approach on difficult test sets and including an in-depth sensitivity analysis with respect to the degrees of variability of the face patterns.",
"In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. Thus we trained it on the largest facial dataset to-date, an identity labeled dataset of four million facial images belonging to more than 4, 000 identities. The learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.35 on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 27 , closely approaching human-level performance.",
""
]
} |
1706.03547 | 2963599605 | In this paper we study a higher order viscous quasi-geostrophic type equation. This equation was derived in [11] as the limit dynamics of a singularly perturbed Navier-Stokes-Korteweg system with Coriolis force, when the Mach, Rossby and Weber numbers go to zero at the same rate. The scope of the present paper is twofold. First of all, we investigate well-posedness of such a model on the whole space @math : we prove that it is well-posed in @math for any @math , globally in time. Interestingly enough, we show that this equation owns two levels of energy estimates, for which one gets existence and uniqueness of weak solutions with different regularities (namely, @math and @math regularities); this fact can be viewed as a remainder of the so-called BD-entropy structure of the original system. In the second part of the paper we investigate the long-time behaviour of these solutions. We show that they converge to the solution of the corresponding linear parabolic type equation, with same initial datum and external force. Our proof is based on dispersive estimates both for the solutions to the linear and non-linear problems. | Let us go further: after observing once again the particular form of the convective term, it is not hard to convince oneself that this term identically vanishes also when tested against the quantity @math . Therefore, if one multiplies the equation by @math , one finds a second energy conservation, which gives propagation of @math regularity (provided the initial datum and the external force are smooth enough). This is a remarkable property of our system, which can be viewed as a remainder of the BD-entropy structure owned by the primitive system, see e.g. papers @cite_19 and @cite_8 .
Furthermore, the previous cancellations in the convective term and the second-order energy conservation prompt us to look also for propagation of intermediate and higher regularities, and indeed we are able to prove existence and uniqueness of solutions at the @math level of regularity, for any @math . | {
"cite_N": [
"@cite_19",
"@cite_8"
],
"mid": [
"1997482974",
"2030587235"
],
"abstract": [
"We consider a two dimensional viscous shallow water model with friction term. Existence of global weak solutions is obtained and convergence to the strong solution of the viscous quasi-geostrophic equation with free surface term is proven in the well prepared case. The ill prepared data case is also discussed.",
"Abstract In this article, we give some mathematical results for an isothermal model of capillary compressible fluids derived by Dunn and Serrin in [1]Dunn JE, Serrin J. On the thermodynamics of interstitial working. Arch Rational Mech Anal. 1985; 88(2):95–133), which can be used as a phase transition model. We consider a periodic domain Ω = T d (d = 2 ou 3) or a strip domain Ω = (0,1) × T d −1. We look at the dependence of the viscosity μ and the capillarity coefficient κwith respect to the density ρ. Depending on the cases we consider, different results are obtained. We prove for instance for a viscosity μ(ρ) = νρ and a surface tension the global existence of weak solutions of the Korteweg system without smallness assumption on the data. This model includes a shallow water model and a lubrication model. We discuss the validity of the result for the shallow water equations since the density is less regular than in the Korteweg case."
]
} |
1706.03547 | 2963599605 | In this paper we study a higher order viscous quasi-geostrophic type equation. This equation was derived in [11] as the limit dynamics of a singularly perturbed Navier-Stokes-Korteweg system with Coriolis force, when the Mach, Rossby and Weber numbers go to zero at the same rate. The scope of the present paper is twofold. First of all, we investigate well-posedness of such a model on the whole space @math : we prove that it is well-posed in @math for any @math , globally in time. Interestingly enough, we show that this equation owns two levels of energy estimates, for which one gets existence and uniqueness of weak solutions with different regularities (namely, @math and @math regularities); this fact can be viewed as a remainder of the so-called BD-entropy structure of the original system. In the second part of the paper we investigate the long-time behaviour of these solutions. We show that they converge to the solution of the corresponding linear parabolic type equation, with same initial datum and external force. Our proof is based on dispersive estimates both for the solutions to the linear and non-linear problems. | The proof of higher regularity energy estimates (namely, for @math ) relies on a paralinearization of the convective term and a special decomposition for treating it, which has already been used in @cite_9 : in particular, thanks to this approach we are able to reproduce the special cancellations in the transport operator, up to some remainders; now, a careful analysis of these remainder terms allows us to control them by the @math -energy of the solution @math , so that one can conclude by an application of the Gronwall lemma. By contrast, propagation of intermediate regularities (namely for @math ) is surprisingly more involved, since now the special cancellations concern only the lower-order part of the convective term, and no longer @math , which hence needs to be controlled very carefully.
This can be done by resorting once again to a paralinearization of the transport term and to delicate estimates concerning the commutators involved in the computations; eventually, we manage to bound all the terms, and the Gronwall lemma allows us to close the estimates as before. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2962949490"
],
"abstract": [
"Abstract In [18] Paicu and Zarnescu have studied an order tensor system which describes the flow of a liquid crystal. They have proven the existence of weak solutions, the propagation of higher regularity, namely H s with s > 1 and the weak-strong uniqueness in dimension two. This paper is devoted to fill the gap of their results, namely to propagate the low regularity, namely H s for 0 s 1 and to prove the uniqueness of the weak solutions. For the completeness of this research, we also propose an alternative approach in order to prove the existence of weak solutions."
]
} |
1706.03610 | 2626667877 | Factoid question answering (QA) has recently benefited from the development of deep learning (DL) systems. Neural network models outperform traditional approaches in domains where large datasets exist, such as SQuAD (ca. 100,000 questions) for Wikipedia articles. However, these systems have not yet been applied to QA in more specific domains, such as biomedicine, because datasets are generally too small to train a DL system from scratch. For example, the BioASQ dataset for biomedical QA comprises less than 900 factoid (single answer) and list (multiple answers) QA instances. In this work, we adapt a neural QA system trained on a large open-domain dataset (SQuAD, source) to a biomedical dataset (BioASQ, target) by employing various transfer learning techniques. Our network architecture is based on a state-of-the-art QA system, extended with biomedical word embeddings and a novel mechanism to answer list questions. In contrast to existing biomedical QA systems, our system does not rely on domain-specific ontologies, parsers or entity taggers, which are expensive to create. Despite this fact, our systems achieve state-of-the-art results on factoid questions and competitive results on list questions. | Neural QA systems differ from traditional approaches in that the algorithm is not subdivided into discrete steps. Instead, a single model is trained end-to-end to compute an answer directly for a given question and context. The typical architecture of such systems @cite_16 @cite_14 @cite_3 can be summarized as follows: | {
"cite_N": [
"@cite_14",
"@cite_16",
"@cite_3"
],
"mid": [
"2552027021",
"2516930406",
"2951815760"
],
"abstract": [
"Several deep learning models have been proposed for question answering. However, due to their single-pass nature, they have no way to recover from local maxima corresponding to incorrect answers. To address this problem, we introduce the Dynamic Coattention Network (DCN) for question answering. The DCN first fuses co-dependent representations of the question and the document in order to focus on relevant parts of both. Then a dynamic pointing decoder iterates over potential answer spans. This iterative procedure enables the model to recover from initial local maxima corresponding to incorrect answers. On the Stanford question answering dataset, a single DCN model improves the previous state of the art from 71.0 F1 to 75.9 , while a DCN ensemble obtains 80.4 F1.",
"Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by (2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by (2016) using logistic regression and manually crafted features.",
"Machine comprehension (MC), answering a query about a given context paragraph, requires modeling complex interactions between the context and the query. Recently, attention mechanisms have been successfully extended to MC. Typically these methods use attention to focus on a small portion of the context and summarize it with a fixed-size vector, couple attentions temporally, and or often form a uni-directional attention. In this paper we introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage hierarchical process that represents the context at different levels of granularity and uses bi-directional attention flow mechanism to obtain a query-aware context representation without early summarization. Our experimental evaluations show that our model achieves the state-of-the-art results in Stanford Question Answering Dataset (SQuAD) and CNN DailyMail cloze test."
]
} |
1706.03610 | 2626667877 | Factoid question answering (QA) has recently benefited from the development of deep learning (DL) systems. Neural network models outperform traditional approaches in domains where large datasets exist, such as SQuAD (ca. 100,000 questions) for Wikipedia articles. However, these systems have not yet been applied to QA in more specific domains, such as biomedicine, because datasets are generally too small to train a DL system from scratch. For example, the BioASQ dataset for biomedical QA comprises less than 900 factoid (single answer) and list (multiple answers) QA instances. In this work, we adapt a neural QA system trained on a large open-domain dataset (SQuAD, source) to a biomedical dataset (BioASQ, target) by employing various transfer learning techniques. Our network architecture is based on a state-of-the-art QA system, extended with biomedical word embeddings and a novel mechanism to answer list questions. In contrast to existing biomedical QA systems, our system does not rely on domain-specific ontologies, parsers or entity taggers, which are expensive to create. Despite this fact, our systems achieve state-of-the-art results on factoid questions and competitive results on list questions. | Question and context tokens are mapped to a high-dimensional vector space, for example via GloVe embeddings @cite_19 and (optionally) character embeddings @cite_3 . The token vectors are processed independently for question and context, usually by a recurrent neural network (RNN). An interaction layer then allows for interaction between question and context representations. Examples are Match-LSTM @cite_16 and Coattention @cite_14 . Finally, an answer layer assigns start and end scores to all of the context tokens, which can be done either statically @cite_16 @cite_3 or by a dynamic decoding process @cite_14 . | {
"cite_N": [
"@cite_19",
"@cite_14",
"@cite_16",
"@cite_3"
],
"mid": [
"",
"2552027021",
"2516930406",
"2951815760"
],
"abstract": [
"",
"Several deep learning models have been proposed for question answering. However, due to their single-pass nature, they have no way to recover from local maxima corresponding to incorrect answers. To address this problem, we introduce the Dynamic Coattention Network (DCN) for question answering. The DCN first fuses co-dependent representations of the question and the document in order to focus on relevant parts of both. Then a dynamic pointing decoder iterates over potential answer spans. This iterative procedure enables the model to recover from initial local maxima corresponding to incorrect answers. On the Stanford question answering dataset, a single DCN model improves the previous state of the art from 71.0 F1 to 75.9 , while a DCN ensemble obtains 80.4 F1.",
"Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by (2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by (2016) using logistic regression and manually crafted features.",
"Machine comprehension (MC), answering a query about a given context paragraph, requires modeling complex interactions between the context and the query. Recently, attention mechanisms have been successfully extended to MC. Typically these methods use attention to focus on a small portion of the context and summarize it with a fixed-size vector, couple attentions temporally, and or often form a uni-directional attention. In this paper we introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage hierarchical process that represents the context at different levels of granularity and uses bi-directional attention flow mechanism to obtain a query-aware context representation without early summarization. Our experimental evaluations show that our model achieves the state-of-the-art results in Stanford Question Answering Dataset (SQuAD) and CNN DailyMail cloze test."
]
} |
1706.03610 | 2626667877 | Factoid question answering (QA) has recently benefited from the development of deep learning (DL) systems. Neural network models outperform traditional approaches in domains where large datasets exist, such as SQuAD (ca. 100,000 questions) for Wikipedia articles. However, these systems have not yet been applied to QA in more specific domains, such as biomedicine, because datasets are generally too small to train a DL system from scratch. For example, the BioASQ dataset for biomedical QA comprises less than 900 factoid (single answer) and list (multiple answers) QA instances. In this work, we adapt a neural QA system trained on a large open-domain dataset (SQuAD, source) to a biomedical dataset (BioASQ, target) by employing various transfer learning techniques. Our network architecture is based on a state-of-the-art QA system, extended with biomedical word embeddings and a novel mechanism to answer list questions. In contrast to existing biomedical QA systems, our system does not rely on domain-specific ontologies, parsers or entity taggers, which are expensive to create. Despite this fact, our systems achieve state-of-the-art results on factoid questions and competitive results on list questions. | FastQA fits into this schema, but reduces the complexity of the architecture by removing the interaction layer, while maintaining state-of-the-art performance @cite_9 . Instead of one or several interaction layers of RNNs, FastQA computes two simple word-in-question features for each token, which are appended to the embedding vectors before the encoding layer. We chose to base our work on this architecture because of its state-of-the-art performance, faster training time and reduced number of parameters. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2950748123"
],
"abstract": [
"Recent development of large-scale question answering (QA) datasets triggered a substantial amount of research into end-to-end neural architectures for QA. Increasingly complex systems have been conceived without comparison to simpler neural baseline systems that would justify their complexity. In this work, we propose a simple heuristic that guides the development of neural baseline systems for the extractive QA task. We find that there are two ingredients necessary for building a high-performing neural QA system: first, the awareness of question words while processing the context and second, a composition function that goes beyond simple bag-of-words modeling, such as recurrent neural networks. Our results show that FastQA, a system that meets these two requirements, can achieve very competitive performance compared with existing models. We argue that this surprising finding puts results of previous systems and the complexity of recent QA datasets into perspective."
]
} |
1706.03610 | 2626667877 | Factoid question answering (QA) has recently benefited from the development of deep learning (DL) systems. Neural network models outperform traditional approaches in domains where large datasets exist, such as SQuAD (ca. 100,000 questions) for Wikipedia articles. However, these systems have not yet been applied to QA in more specific domains, such as biomedicine, because datasets are generally too small to train a DL system from scratch. For example, the BioASQ dataset for biomedical QA comprises less than 900 factoid (single answer) and list (multiple answers) QA instances. In this work, we adapt a neural QA system trained on a large open-domain dataset (SQuAD, source) to a biomedical dataset (BioASQ, target) by employing various transfer learning techniques. Our network architecture is based on a state-of-the-art QA system, extended with biomedical word embeddings and a novel mechanism to answer list questions. In contrast to existing biomedical QA systems, our system does not rely on domain-specific ontologies, parsers or entity taggers, which are expensive to create. Despite this fact, our systems achieve state-of-the-art results on factoid questions and competitive results on list questions. | Unsupervised domain adaptation describes the task of learning a predictor in a target domain while labeled training data exists only in a different source domain. In the context of deep learning, a common method is to first train an autoencoder on a large unlabeled corpus from both domains and then use the learned input representations as input features to a network trained on the actual task using the labeled source domain dataset @cite_21 @cite_17 . Another approach is to learn the hidden representations directly on the target task.
For example, domain-adversarial training optimizes the network such that it computes hidden representations that both help predictions on the source domain dataset and are indistinguishable from hidden representations of the unlabeled target domain dataset @cite_11 . These techniques cannot be straightforwardly applied to the question answering task, because they require a large corpus of biomedical question-context pairs (albeit no answers are required). | {
"cite_N": [
"@cite_21",
"@cite_11",
"@cite_17"
],
"mid": [
"22861983",
"1731081199",
"2949821452"
],
"abstract": [
"The exponential increase in the availability of online reviews and recommendations makes sentiment classification an interesting topic in academic and industrial research. Reviews can span so many different domains that it is difficult to gather annotated training data for all of them. Hence, this paper studies the problem of domain adaptation for sentiment classifiers, whereby a system is trained on labeled reviews from one source domain but is meant to be deployed on another. We propose a deep learning approach which learns to extract a meaningful representation for each review in an unsupervised fashion. Sentiment classifiers trained with this high-level feature representation clearly outperform state-of-the-art methods on a benchmark composed of reviews of 4 types of Amazon products. Furthermore, this method scales well and allowed us to successfully perform domain adaptation on a larger industrial-strength dataset of 22 domains.",
"We introduce a new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions. Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains. The approach implements this idea in the context of neural network architectures that are trained on labeled data from the source domain and unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of features that are (i) discriminative for the main learning task on the source domain and (ii) indiscriminate with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation and stochastic gradient descent, and can thus be implemented with little effort using any of the deep learning packages. We demonstrate the success of our approach for two distinct classification problems (document sentiment analysis and image classification), where state-of-the-art domain adaptation performance on standard benchmarks is achieved. We also validate the approach for descriptor learning task in the context of person re-identification application.",
"Stacked denoising autoencoders (SDAs) have been successfully used to learn new representations for domain adaptation. Recently, they have attained record accuracy on standard benchmark tasks of sentiment analysis across different text domains. SDAs learn robust data representations by reconstruction, recovering original features from data that are artificially corrupted with noise. In this paper, we propose marginalized SDA (mSDA) that addresses two crucial limitations of SDAs: high computational cost and lack of scalability to high-dimensional features. In contrast to SDAs, our approach of mSDA marginalizes noise and thus does not require stochastic gradient descent or other optimization algorithms to learn parameters; in fact, they are computed in closed form. Consequently, mSDA, which can be implemented in only 20 lines of MATLAB, significantly speeds up SDAs by two orders of magnitude. Furthermore, the representations learnt by mSDA are as effective as the traditional SDAs, attaining almost identical accuracies in benchmark tasks."
]
} |
1706.03610 | 2626667877 | Factoid question answering (QA) has recently benefited from the development of deep learning (DL) systems. Neural network models outperform traditional approaches in domains where large datasets exist, such as SQuAD (ca. 100,000 questions) for Wikipedia articles. However, these systems have not yet been applied to QA in more specific domains, such as biomedicine, because datasets are generally too small to train a DL system from scratch. For example, the BioASQ dataset for biomedical QA comprises less than 900 factoid (single answer) and list (multiple answers) QA instances. In this work, we adapt a neural QA system trained on a large open-domain dataset (SQuAD, source) to a biomedical dataset (BioASQ, target) by employing various transfer learning techniques. Our network architecture is based on a state-of-the-art QA system, extended with biomedical word embeddings and a novel mechanism to answer list questions. In contrast to existing biomedical QA systems, our system does not rely on domain-specific ontologies, parsers or entity taggers, which are expensive to create. Despite this fact, our systems achieve state-of-the-art results on factoid questions and competitive results on list questions. | Progressive neural networks combat this issue by keeping the original parameters fixed and adding new units that can access previously learned features @cite_8 . Because this method adds a significant amount of new parameters which have to be trained from scratch, it is not well-suited if the target domain dataset is small. use fine-tuning, but add an additional term that punishes deviations from predictions with the original parameters. Another approach is to add an L2 loss which punishes deviation from the original parameters. apply this loss selectively on parameters which are important in the source domain. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2426267443"
],
"abstract": [
"Methods and systems for performing a sequence of machine learning tasks. One system includes a sequence of deep neural networks (DNNs), including: a first DNN corresponding to a first machine learning task, wherein the first DNN comprises a first plurality of indexed layers, and each layer in the first plurality of indexed layers is configured to receive a respective layer input and process the layer input to generate a respective layer output; and one or more subsequent DNNs corresponding to one or more respective machine learning tasks, wherein each subsequent DNN comprises a respective plurality of indexed layers, and each layer in a respective plurality of indexed layers with index greater than one receives input from a preceding layer of the respective subsequent DNN, and one or more preceding layers of respective preceding DNNs, wherein a preceding layer is a layer whose index is one less than the current index."
]
} |
1706.03216 | 2951347618 | In earlier work, we have shown that articulation rate in Swedish child-directed speech (CDS) increases as a function of the age of the child, even when utterance length and differences in articulation rate between subjects are controlled for. In this paper we show on utterance level in spontaneous Swedish speech that i) for the youngest children, articulation rate in CDS is lower than in adult-directed speech (ADS), ii) there is a significant negative correlation between articulation rate and surprisal (the negative log probability) in ADS, and iii) the increase in articulation rate in Swedish CDS as a function of the age of the child holds, even when surprisal along with utterance length and differences in articulation rate between speakers are controlled for. These results indicate that adults adjust their articulation rate to make it fit the linguistic capacity of the child. | However, less is known about the extent to which the characteristics of CDS change as the child grows older. In the case of AR, it has been shown that it increases in mothers' (n=16) CDS as a function of the age of the child for children aged 0;4 to 1;4 in Korean (n=6), Sri Lankan Tamil (n=5) and Tagalog (n=5). This is referred to as ``speech rate'', but it is stated in a footnote that what is actually meant is AR, under the constraint of including utterances where silences never exceeded a duration of 300 ms. Utterance length was controlled for by choosing utterances around 5 s long @cite_1 . | {
"cite_N": [
"@cite_1"
],
"mid": [
"2302707036"
],
"abstract": [
"The speech rate and pitch (F0) characteristics of naturalistic, longitudinally recorded infant- and adult-directed speech are reported for three, genetically diverse languages. Previous research has suggested that the prosodic characteristics of infant-directed speech are slowed speech rate, raised mean pitch, and expanded pitch range relative to adult-directed speech. Sixteen mothers (5 Sri Lankan Tamil, 5 Tagalog, 6 Korean) were recorded in their homes during natural interactions with their young infants, and adults, over the course of 12 months beginning when the infant was 4 months old. Regression models indicated that the difference between infant- and adult-directed speech rates decreased across the first year of infants' development. Models of pitch revealed predicted differences between infant- and adult-directed speech but did not provide evidence for cross-linguistic or longitudinal effects within the time period investigated for the three languages. The universality of slowed speech rate, raised pitch, and expanded pitch range is discussed in light of individuals' highly variable implementation of these prosodic features in infant-directed speech."
]
} |
1706.03216 | 2951347618 | In earlier work, we have shown that articulation rate in Swedish child-directed speech (CDS) increases as a function of the age of the child, even when utterance length and differences in articulation rate between subjects are controlled for. In this paper we show on utterance level in spontaneous Swedish speech that i) for the youngest children, articulation rate in CDS is lower than in adult-directed speech (ADS), ii) there is a significant negative correlation between articulation rate and surprisal (the negative log probability) in ADS, and iii) the increase in articulation rate in Swedish CDS as a function of the age of the child holds, even when surprisal along with utterance length and differences in articulation rate between speakers are controlled for. These results indicate that adults adjust their articulation rate to make it fit the linguistic capacity of the child. | We ask three questions: 1) In adult-directed speech, is there a correlation between articulation rate and utterance length, on the one hand, and articulation rate and surprisal, on the other? 2) Is articulation rate in adult-directed speech higher than in child-directed speech, when controlling for utterance length, differences in articulation rate between speakers and surprisal? 3) Does the increase in articulation rate in child-directed speech as a function of child age shown in @cite_22 hold, even when controlling for surprisal? | {
"cite_N": [
"@cite_22"
],
"mid": [
"179461371"
],
"abstract": [
"This paper evaluates articulation rate measures and rate characteristics of read and spontaneous speech on the basis of a manually labelled database for German. The results of phone classification experiments for three different articulation rates only partially confirm our expectations. Phonetic explanations are suggested."
]
} |
1706.03472 | 2624738537 | Topological data analysis is an emerging mathematical concept for characterizing shapes in multi-scale data. In this field, persistence diagrams are widely used as a descriptor of the input data, and can distinguish robust and noisy topological properties. Nowadays, it is highly desired to develop a statistical framework on persistence diagrams to deal with practical data. This paper proposes a kernel method on persistence diagrams. A theoretical contribution of our method is that the proposed kernel allows one to control the effect of persistence, and, if necessary, noisy topological properties can be discounted in data analysis. Furthermore, the method provides a fast approximation technique. The method is applied to several problems including practical data in physics, and the results show the advantage compared to the existing kernel method on persistence diagrams. | This paper is an extended version of our ICML paper @cite_46 . The differences from the conference version are as follows: (i) Comparisons with other relevant methods, in particular, persistence landscapes and persistence images, have been added to this version. (ii) New experimental results in comparison with other relevant methods have been included. (iii) Detailed proofs of the stability theorem have been added. | {
"cite_N": [
"@cite_46"
],
"mid": [
"2223193774"
],
"abstract": [
"Topological data analysis (TDA) is an emerging mathematical concept for characterizing shapes in complex data. In TDA, persistence diagrams are widely recognized as a useful descriptor of data, and can distinguish robust and noisy topological properties. This paper proposes a kernel method on persistence diagrams to develop a statistical framework in TDA. The proposed kernel satisfies the stability property and provides explicit control on the effect of persistence. Furthermore, the method allows a fast approximation technique. The method is applied into practical data on proteins and oxide glasses, and the results show the advantage of our method compared to other relevant methods on persistence diagrams."
]
} |
1706.03581 | 2613030841 | We design an Enriched Deep Recurrent Visual Attention Model (EDRAM) — an improved attention-based architecture for multiple object recognition. The proposed model is a fully differentiable unit that can be optimized end-to-end by using Stochastic Gradient Descent (SGD). The Spatial Transformer (ST) was employed as the visual attention mechanism, which allows learning the geometric transformation of objects within images. With the combination of the Spatial Transformer and the powerful recurrent architecture, the proposed EDRAM can localize and recognize objects simultaneously. EDRAM has been evaluated on two publicly available datasets including MNIST Cluttered (with 70K cluttered digits) and SVHN (with up to 250k real world images of house numbers). Experiments show that it obtains superior performance as compared with the state-of-the-art models. | Recurrent models of visual attention have been attracting increasing interest in recent years. The Recurrent Attention Model (RAM) proposed by Mnih et al. @cite_5 employs a recurrent neural network (RNN) to integrate visual information (image patch or glimpse) over time. By REINFORCE optimization of the network @cite_21 , they achieved a huge reduction of computational cost as well as state-of-the-art performance on the MNIST Cluttered dataset. Ba et al. @cite_15 extended the glimpse network for multiple object recognition with visual attention. They introduced the Deep Recurrent Visual Attention Model (DRAM) that integrates a simple visual attention mechanism with a neural network based on Long Short-Term Memory (LSTM) gated recurrent units @cite_4 . The REINFORCE learning rule, employed in @cite_5 to train their attention model, was used to train the network to learn ``where'' and ``what''. 
Though the DRAM achieved superior results in a number of tasks such as MNIST pair classification and SVHN recognition, the attention mechanism used is straightforward, extracting patches of fixed scales only, so the potential of the visual attention mechanism is far from fully exploited. | {
"cite_N": [
"@cite_5",
"@cite_15",
"@cite_21",
"@cite_4"
],
"mid": [
"2147527908",
"1484210532",
"2119717200",
""
],
"abstract": [
"Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so.",
"We present an attention-based model for recognizing multiple objects in images. The proposed model is a deep recurrent neural network trained with reinforcement learning to attend to the most relevant regions of the input image. We show that the model learns to both localize and recognize multiple objects despite being given only class labels during training. We evaluate the model on the challenging task of transcribing house number sequences from Google Street View images and show that it is both more accurate than the state-of-the-art convolutional networks and uses fewer parameters and less computation.",
"This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms, called REINFORCE algorithms, are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, and they do this without explicitly computing gradient estimates or even storing information from which such estimates could be computed. Specific examples of such algorithms are presented, some of which bear a close relationship to certain existing algorithms while others are novel but potentially interesting in their own right. Also given are results that show how such algorithms can be naturally integrated with backpropagation. We close with a brief discussion of a number of additional issues surrounding the use of such algorithms, including what is known about their limiting behaviors as well as further considerations that might be used to help develop similar but potentially more powerful reinforcement learning algorithms.",
""
]
} |
1706.03235 | 2626305543 | Communication is a critical factor for the big multi-agent world to stay organized and productive. Typically, most previous multi-agent "learning-to-communicate" studies try to predefine the communication protocols or use technologies such as tabular reinforcement learning and evolutionary algorithms, which cannot generalize to changing environments or large collections of agents. In this paper, we propose an Actor-Coordinator-Critic Net (ACCNet) framework for solving the "learning-to-communicate" problem. The ACCNet naturally combines the powerful actor-critic reinforcement learning technology with deep learning technology. It can efficiently learn the communication protocols even from scratch under partially observable environments. We demonstrate that the ACCNet can achieve better results than several baselines under both continuous and discrete action space environments. We also analyse the learned protocols and discuss some design considerations. | Both CommNet and DIAL are based on DQN @cite_12 for discrete actions. BiCNet is based on actor-critic methods for continuous actions. It uses bi-directional recurrent neural networks as the communication channels. This approach allows a single agent to maintain its own internal state and share information with other collaborators at the same time. However, it assumes that agents can know the global Markov states of the environment, which is not so realistic except for some game environments. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2145339207"
],
"abstract": [
"An artificial agent is developed that learns to play a diverse range of classic Atari 2600 computer games directly from sensory experience, achieving a performance comparable to that of an expert human player; this work paves the way to building general-purpose learning algorithms that bridge the divide between perception and action."
]
} |
1706.03235 | 2626305543 | Communication is a critical factor for the big multi-agent world to stay organized and productive. Typically, most previous multi-agent "learning-to-communicate" studies try to predefine the communication protocols or use technologies such as tabular reinforcement learning and evolutionary algorithms, which cannot generalize to changing environments or large collections of agents. In this paper, we propose an Actor-Coordinator-Critic Net (ACCNet) framework for solving the "learning-to-communicate" problem. The ACCNet naturally combines the powerful actor-critic reinforcement learning technology with deep learning technology. It can efficiently learn the communication protocols even from scratch under partially observable environments. We demonstrate that the ACCNet can achieve better results than several baselines under both continuous and discrete action space environments. We also analyse the learned protocols and discuss some design considerations. | Other relevant excellent studies include, but are not limited to, @cite_25 @cite_20 @cite_1 . Those researchers have verified the possibility of learning communication protocols among agents. Nevertheless, we aim at providing a general framework to ease the learning of communication protocols among agents. | {
"cite_N": [
"@cite_1",
"@cite_25",
"@cite_20"
],
"mid": [
"2621379712",
"2602275733",
"2953119472"
],
"abstract": [
"Learning to communicate through interaction, rather than relying on explicit supervision, is often considered a prerequisite for developing a general AI. We study a setting where two agents engage in playing a referential game and, from scratch, develop a communication protocol necessary to succeed in this game. Unlike previous work, we require that messages they exchange, both at train and test time, are in the form of a language (i.e. sequences of discrete symbols). We compare a reinforcement learning approach and one using a differentiable relaxation (straight-through Gumbel-softmax estimator) and observe that the latter is much faster to converge and it results in more effective protocols. Interestingly, we also observe that the protocol we induce by optimizing the communication success exhibits a degree of compositionality and variability (i.e. the same information can be phrased in different ways), both properties characteristic of natural languages. As the ultimate goal is to ensure that communication is accomplished in natural language, we also perform experiments where we inject prior information about natural language into our model and study properties of the resulting protocol.",
"By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.",
"We introduce the first goal-driven training for visual question answering and dialog agents. Specifically, we pose a cooperative 'image guessing' game between two agents -- Qbot and Abot -- who communicate in natural language dialog so that Qbot can select an unseen image from a lineup of images. We use deep reinforcement learning (RL) to learn the policies of these agents end-to-end -- from pixels to multi-agent multi-round dialog to game reward. We demonstrate two experimental results. First, as a 'sanity check' demonstration of pure RL (from scratch), we show results on a synthetic world, where the agents communicate in ungrounded vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find that two bots invent their own communication protocol and start using certain symbols to ask answer about certain visual attributes (shape color style). Thus, we demonstrate the emergence of grounded language and communication among 'visual' dialog agents with no human supervision. Second, we conduct large-scale real-image experiments on the VisDial dataset, where we pretrain with supervised dialog data and show that the RL 'fine-tuned' agents significantly outperform SL agents. Interestingly, the RL Qbot learns to ask questions that Abot is good at, ultimately resulting in more informative dialog and a better team."
]
} |
1706.03086 | 2624936746 | The Long-Range Wide-Area Network (LoRaWAN) specification was released in 2015, primarily to support the Internet-of-Things by facilitating wireless communication over long distances. Since 2015, the roll-out and adoption of LoRaWAN has seen a steep growth. To the best of our knowledge, we are the first to have extensively measured, analyzed, and modeled the performance, features, and use cases of an operational LoRaWAN, namely The Things Network. Our measurement data, as presented in this paper, cover the early stages up to the production-level deployment of LoRaWAN. In particular, we analyze packet payloads, radio-signal quality, and spatio-temporal aspects, to model and estimate the performance of LoRaWAN. We also use our empirical findings in simulations to estimate the packet-loss. | Vangelista et al. @cite_4 present LoRa as ``one of the most promising technologies for the wide-area IoT'' and mention that LoRa exhibits certain advantages over the LPWAN technologies Sigfox, Weightless, and On-Ramp Wireless. The robust chirp-signal modulation and low energy usage, in combination with the low cost of end-devices and the fact that the LoRa Alliance is actively marketing and pushing interoperability, make LoRaWAN an interesting choice among the available LPWAN technologies. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2296421542"
],
"abstract": [
"The last years have seen the widespread diffusion of novel Low Power Wide Area Network (LPWAN) technologies, which are gaining momentum and commercial interest as enabling technologies for the Internet of Things. In this paper we discuss some of the most interesting LPWAN solutions, focusing in particular on LoRa™, one of the last born and most promising technologies for the wide-area IoT."
]
} |
1706.03086 | 2624936746 | The Long-Range Wide-Area Network (LoRaWAN) specification was released in 2015, primarily to support the Internet-of-Things by facilitating wireless communication over long distances. Since 2015, the roll-out and adoption of LoRaWAN has seen a steep growth. To the best of our knowledge, we are the first to have extensively measured, analyzed, and modeled the performance, features, and use cases of an operational LoRaWAN, namely The Things Network. Our measurement data, as presented in this paper, cover the early stages up to the production-level deployment of LoRaWAN. In particular, we analyze packet payloads, radio-signal quality, and spatio-temporal aspects, to model and estimate the performance of LoRaWAN. We also use our empirical findings in simulations to estimate the packet-loss. | In @cite_8 , Centenaro et al. provide an overview of the LPWAN paradigm in the context of smart-city scenarios. The authors also test the coverage of a LoRaWAN gateway in a city in Italy, using a single base-station without antenna gain. The covered area had a diameter of 1.2 km. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2950994229"
],
"abstract": [
"Connectivity is probably the most basic building block of the Internet of Things (IoT) paradigm. Up to now, the two main approaches to provide data access to the things have been based either on multi-hop mesh networks using short-range communication technologies in the unlicensed spectrum, or on long-range, legacy cellular technologies, mainly 2G GSM, operating in the corresponding licensed frequency bands. Recently, these reference models have been challenged by a new type of wireless connectivity, characterized by low-rate, long-range transmission technologies in the unlicensed sub-GHz frequency bands, used to realize access networks with star topology which are referred to as Low Power Wide Area Networks (LPWANs). In this paper, we introduce this new approach to provide connectivity in the IoT scenario, discussing its advantages over the established paradigms in terms of efficiency, effectiveness, and architectural design, in particular for the typical Smart Cities applications."
]
} |
1706.03086 | 2624936746 | The Long-Range Wide-Area Network (LoRaWAN) specification was released in 2015, primarily to support the Internet-of-Things by facilitating wireless communication over long distances. Since 2015, the role-out and adoption of LoRaWAN has seen a steep growth. To the best of our knowledge, we are the first to have extensively measured, analyzed, and modeled the performance, features, and use cases of an operational LoRaWAN, namely The Things Network. Our measurement data, as presented in this paper, cover the early stages up to the production-level deployment of LoRaWAN. In particular, we analyze packet payloads, radio-signal quality, and spatio-temporal aspects, to model and estimate the performance of LoRaWAN. We also use our empirical findings in simulations to estimate the packet-loss. | The expected coverage of LPWANs and especially LoRa was also analyzed by Pet "aj "aj "arvi @cite_9 , who conducted measurements in Finland. Using a single base-station with an antenna gain of 2 dBi and configuring the nodes to send packets at SF12 using 14 dBm of transmit power, connectivity within a 5 km range in urban environments and 15 km in open space were found to result in packet-loss ratios smaller than 30 "aj "aj "arvi @cite_0 also tested the usage of LoRa in indoor environments. The results show that very low packet-loss is to be expected with only one base-station to cover an average university campus. | {
"cite_N": [
"@cite_0",
"@cite_9"
],
"mid": [
"2471815506",
"2243299349"
],
"abstract": [
"Low power consumption, low transceiver chip cost and large coverage area are the main characteristics of the low power wide area networks (LPWAN) technologies. We expect that LPWAN can be part of enabling new human-centric health and wellness monitoring applications. Therefore in this work we study the indoor performance of one LPWAN technology, namely LoRa, by the means of real-life measurements. The measurements were conducted using the commercially available equipment in the main campus of the University of Oulu, Finland, which has an indoor area spanning for over 570 meters North to South and over 320 meters East to West. The measurements were executed for a sensor node operating close to human body that was periodically reporting the sensed data to a base station. The obtained results show that when using 14 dBm transmit power and the largest spreading factor of 12 for the 868 MHz ISM band, the whole campus area can be covered. Measured packet success delivery ratio was 96.7 without acknowledgements and retransmissions.",
"In addition to long battery life and low cost, coverage is one of the most critical performance metrics for the low power wide area networks (LPWAN). In this work we study the coverage of the recently developed LoRa LPWAN technology via real-life measurements. The experiments were conducted in the city of Oulu, Finland, using the commercially available equipment. The measurements were executed for cases when a node located on ground (attached on the roof rack of a car) or on water (attached to the radio mast of a boat) reporting their data to a base station. For a node operating in the 868 MHz ISM band using 14 dBm transmit power and the maximum spreading factor, we have observed the maximum communication range of over 15 km on ground and close to 30 km on water. Besides the actual measurements, in the paper we also present a channel attenuation model derived from the measurement data. The model can be used to estimate the path loss in 868 MHz ISM band in an area similar to Oulu, Finland."
]
} |
1706.03086 | 2624936746 | The Long-Range Wide-Area Network (LoRaWAN) specification was released in 2015, primarily to support the Internet-of-Things by facilitating wireless communication over long distances. Since 2015, the roll-out and adoption of LoRaWAN has seen a steep growth. To the best of our knowledge, we are the first to have extensively measured, analyzed, and modeled the performance, features, and use cases of an operational LoRaWAN, namely The Things Network. Our measurement data, as presented in this paper, cover the early stages up to the production-level deployment of LoRaWAN. In particular, we analyze packet payloads, radio-signal quality, and spatio-temporal aspects, to model and estimate the performance of LoRaWAN. We also use our empirical findings in simulations to estimate the packet-loss. | Bor et al. @cite_6 conducted experiments using multiple nodes transmitting data using LoRa. Experiments were conducted in which two devices sent packets at different power levels, but the same spreading factor, to estimate the influence of concurrent transmissions. Additionally, a new media access control (MAC) protocol, LoRaBlink, was developed to enable direct connection of nodes without using LoRaWAN. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2294352804"
],
"abstract": [
"New transceiver technologies have emerged which enable power efficient communication over very long distances. Examples of such Low-Power Wide-Area Network (LPWAN) technologies are LoRa, Sigfox and Weightless. A typical application scenario for these technologies is city wide meter reading collection where devices send readings at very low frequency over a long distance to a data concentrator (one-hop networks). We argue that these transceivers are potentially very useful to construct more generic Internet of Things (IoT) networks incorporating multi-hop bidirectional communication enabling sensing and actuation. Furthermore, these transceivers have interesting features not available with more traditional transceivers used for IoT networks which enable construction of novel protocol elements. In this paper we present a performance and capability analysis of a currently available LoRa transceiver. We describe its features and then demonstrate how such transceiver can be put to use efficiently in a wide-area application scenario. In particular we demonstrate how unique features such as concurrent non-destructive transmissions and carrier detection can be employed. Our deployment experiment demonstrates that 6 LoRa nodes can form a network covering 1.5 ha in a built up environment, achieving a potential lifetime of 2 years on 2 AA batteries and delivering data within 5 s and reliability of 80%."
]
} |
1706.03016 | 2917092067 | Electronic tickets (e-tickets) are electronic versions of paper tickets, which enable users to access intended services and improve services' efficiency. However, privacy may be a concern of e-ticket users. In this paper, a privacy-preserving electronic ticket scheme with attribute-based credentials is proposed to protect users' privacy and facilitate ticketing based on a user's attributes. Our proposed scheme makes the following contributions: (1) users can buy different tickets from ticket sellers without releasing their exact attributes; (2) two tickets of the same user cannot be linked; (3) a ticket cannot be transferred to another user; (4) a ticket cannot be double spent; (5) the security of the proposed scheme is formally proven and reduced to the well-known q-strong Diffie-Hellman complexity assumption; (6) the scheme has been implemented and its performance empirically evaluated. To the best of our knowledge, our privacy-preserving attribute-based e-ticket scheme is the first one providing these five features. Application areas of our scheme include event or transport tickets where users must convince ticket sellers that their attributes (e.g. age, profession, location) satisfy the ticket price policies to buy discounted tickets. More generally, our scheme can be used in any system where access to services is only dependent on a user's attributes (or entitlements) but not their identities. | E-Ticket Schemes from Anonymous Credentials. In an anonymous credential scheme, a user can prove to a verifier that she has obtained a credential without releasing any other information. Heydt-Benjamin @cite_8 used anonymous credentials, e-cash and proxy re-encryption schemes to enhance the security and privacy of their public transport e-ticket systems. 
Arfaoui et al. @cite_0 modified the signature scheme proposed by Boneh et al. in @cite_58 to eliminate expensive pairing operations in the verification phase, and then proposed a privacy-preserving near field communication (NFC) mobile ticket (m-ticket) system by combining their modified signature scheme with the anonymous credential scheme proposed by Camenisch et al. @cite_47. In their scheme, a user can anonymously use an m-ticket at most @math times; otherwise, the user is revoked by the revocation authority. These schemes can implement anonymity, ticket unlinkability, and ticket untransferability, but, unlike our scheme, do not support privacy-preserving attribute-based ticketing. Additionally, the security of these schemes was not formally proven. | {
"cite_N": [
"@cite_0",
"@cite_47",
"@cite_58",
"@cite_8"
],
"mid": [
"896930966",
"1888254701",
"2117797270",
"1559779397"
],
"abstract": [
"To ensure the privacy of users in transport systems, researchers are working on new protocols providing the best security guarantees while respecting functional requirements of transport operators. In this paper, we design a secure NFC m-ticketing protocol for public transport that preserves users' anonymity and prevents transport operators from tracing their customers' trips. To this end, we introduce a new practical set-membership proof that does not require provers nor verifiers (but in a specific scenario for verifiers) to perform pairing computations. It is therefore particularly suitable for our (ticketing) setting where provers hold SIM UICC cards that do not support such costly computations. We also propose several optimizations of Boneh-Boyen type signature schemes, which are of independent interest, increasing their performance and efficiency during NFC transactions. Our m-ticketing protocol offers greater flexibility compared to previous solutions as it enables the post-payment and the off-line validation of m-tickets. By implementing a prototype using a standard NFC SIM card, we show that it fulfils the stringent functional requirement imposed by transport operators whilst using strong security parameters. In particular, a validation can be completed in 184.25 ms when the mobile is switched on, and in 266.52 ms when the mobile is switched off or its battery is flat.",
"We propose a new and efficient signature scheme that is provably secure in the plain model. The security of our scheme is based on a discrete-logarithm-based assumption put forth by Lysyanskaya, Rivest, Sahai, and Wolf (LRSW) who also showed that it holds for generic groups and is independent of the decisional Diffie-Hellman assumption. We prove security of our scheme under the LRSW assumption for groups with bilinear maps. We then show how our scheme can be used to construct efficient anonymous credential systems as well as group signature and identity escrow schemes. To this end, we provide efficient protocols that allow one to prove in zero-knowledge the knowledge of a signature on a committed (or encrypted) message and to obtain a signature on a committed message.",
"We describe a short signature scheme which is existentially unforgeable under a chosen message attack without using random oracles. The security of our scheme depends on a new complexity assumption we call the Strong Diffie-Hellman assumption. This assumption has similar properties to the Strong RSA assumption, hence the name. Strong RSA was previously used to construct signature schemes without random oracles. However, signatures generated by our scheme are much shorter and simpler than signatures from schemes based on Strong RSA. Furthermore, our scheme provides a limited form of message recovery.",
"We propose an application of recent advances in e-cash, anonymous credentials, and proxy re-encryption to the problem of privacy in public transit systems with electronic ticketing. We discuss some of the interesting features of transit ticketing as a problem domain, and provide an architecture sufficient for the needs of a typical metropolitan transit system. Our system maintains the security required by the transit authority and the user while significantly increasing passenger privacy. Our hybrid approach to ticketing allows use of passive RFID transponders as well as higher powered computing devices such as smartphones or PDAs. We demonstrate security and privacy features offered by our hybrid system that are unavailable in a homogeneous passive transponder architecture, and which are advantageous for users of passive as well as active devices."
]
} |