| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1512.05246 | 2218408410 | Most deep architectures for image classification–even those that are trained to classify a large number of diverse categories–learn shared image representations with a single model. Intuitively, however, categories that are more similar should share more information than those that are very different. While hierarchical deep networks address this problem by learning separate features for subsets of related categories, current implementations require simplified models using fixed architectures specified via heuristic clustering methods. Instead, we propose Blockout, a method for regularization and model selection that simultaneously learns both the model architecture and parameters. A generalization of Dropout, our approach gives a novel parametrization of hierarchical architectures that allows for structure learning via back-propagation. To demonstrate its utility, we evaluate Blockout on the CIFAR and ImageNet datasets, demonstrating improved classification accuracy, better regularization performance, faster training, and the clear emergence of hierarchical network structures. | Hierarchical deep networks @cite_7 @cite_16 attempt to address these issues by learning multi-task models with shared lower layers and parallel, domain-specific higher layers for predicting different subsets of categories. While these methods address one component of model selection by learning clusters of output categories, other architectural hyper-parameters such as the location of branches and the relative allocation of nodes between them must still be specified prior to training. Furthermore, these methods require separate steps for clustering and training while our approach automatically addresses these modeling choices in a single stage of end-to-end training. | {
"cite_N": [
"@cite_16",
"@cite_7"
],
"mid": [
"2402669826",
"2962719937"
],
"abstract": [
"Hierarchical branching deep convolutional neural networks (HD-CNNs) improve existing convolutional neural network (CNN) technology. In a HD-CNN, classes that can be easily distinguished are classified in a higher layer coarse category CNN, while the most difficult classifications are done on lower layer fine category CNNs. Multinomial logistic loss and a novel temporal sparsity penalty may be used in HD-CNN training. The use of multinomial logistic loss and a temporal sparsity penalty causes each branching component to deal with distinct subsets of categories.",
"We study the problem of large scale, multi-label visual recognition with a large number of possible classes. We propose a method for augmenting a trained neural network classifier with auxiliary capacity in a manner designed to significantly improve upon an already well-performing model, while minimally impacting its computational footprint. Using the predictions of the network itself as a descriptor for assessing visual similarity, we define a partitioning of the label space into groups of visually similar entities. We then augment the network with auxiliary hidden layer pathways with connectivity only to these groups of label units. We report a significant improvement in mean average precision on a large-scale object recognition task with the augmented model, while increasing the number of multiply-adds by less than 3%."
]
} |
1512.05246 | 2218408410 | Most deep architectures for image classification–even those that are trained to classify a large number of diverse categories–learn shared image representations with a single model. Intuitively, however, categories that are more similar should share more information than those that are very different. While hierarchical deep networks address this problem by learning separate features for subsets of related categories, current implementations require simplified models using fixed architectures specified via heuristic clustering methods. Instead, we propose Blockout, a method for regularization and model selection that simultaneously learns both the model architecture and parameters. A generalization of Dropout, our approach gives a novel parametrization of hierarchical architectures that allows for structure learning via back-propagation. To demonstrate its utility, we evaluate Blockout on the CIFAR and ImageNet datasets, demonstrating improved classification accuracy, better regularization performance, faster training, and the clear emergence of hierarchical network structures. | The most common approach for model selection in deep learning is simply searching over the space of hyper-parameters @cite_17 . Unfortunately, because training and inference in deep networks are computationally expensive, this is often impractical. While costs can sometimes be reduced (e.g. by taking advantage of the behavior of some network architectures with random weights @cite_13 ), they still require training and evaluating a large number of models. Bayesian optimization approaches @cite_15 attempt to perform this search more efficiently, but they are still typically applied only to smaller models with few hyper-parameters. Alternatively, @cite_1 proposed a theoretically-justified approach to learning a deep network with a layer-wise strategy that automatically selects the appropriate number of nodes during training. However, it is unclear how it would perform on large-scale image classification benchmarks. | {
"cite_N": [
"@cite_1",
"@cite_15",
"@cite_13",
"@cite_17"
],
"mid": [
"2143915663",
"2131241448",
"78356000",
"2097998348"
],
"abstract": [
"We give algorithms with provable guarantees that learn a class of deep nets in the generative model view popularized by Hinton and others. Our generative model is an n node multilayer network that has degree at most n^γ for some γ < 1 and each edge has a random edge weight in [-1, 1]. Our algorithm learns almost all networks in this class with polynomial running time. The sample complexity is quadratic or cubic depending upon the details of the model. The algorithm uses layerwise learning. It is based upon a novel idea of observing correlations among features and using these to infer the underlying edge structure via a global graph recovery procedure. The analysis of the algorithm reveals interesting structure of neural nets with random edge weights.",
"The use of machine learning algorithms frequently involves careful tuning of learning parameters and model hyperparameters. Unfortunately, this tuning is often a \"black art\" requiring expert experience, rules of thumb, or sometimes brute-force search. There is therefore great appeal for automatic approaches that can optimize the performance of any given learning algorithm to the problem at hand. In this work, we consider this problem through the framework of Bayesian optimization, in which a learning algorithm's generalization performance is modeled as a sample from a Gaussian process (GP). We show that certain choices for the nature of the GP, such as the type of kernel and the treatment of its hyperparameters, can play a crucial role in obtaining a good optimizer that can achieve expert-level performance. We describe new algorithms that take into account the variable cost (duration) of learning algorithm experiments and that can leverage the presence of multiple cores for parallel experimentation. We show that these proposed algorithms improve on previous automatic procedures and can reach or surpass human expert-level optimization for many algorithms including latent Dirichlet allocation, structured SVMs and convolutional neural networks.",
"Recently two anomalous results in the literature have shown that certain feature learning architectures can yield useful features for object recognition tasks even with untrained, random weights. In this paper we pose the question: why do random weights sometimes do so well? Our answer is that certain convolutional pooling architectures can be inherently frequency selective and translation invariant, even with random weights. Based on this we demonstrate the viability of extremely fast architecture search by using random weights to evaluate candidate architectures, thereby sidestepping the time-consuming learning process. We then show that a surprising fraction of the performance of certain state-of-the-art methods can be attributed to the architecture alone.",
"Grid search and manual search are the most widely used strategies for hyper-parameter optimization. This paper shows empirically and theoretically that randomly chosen trials are more efficient for hyper-parameter optimization than trials on a grid. Empirical evidence comes from a comparison with a large previous study that used grid search and manual search to configure neural networks and deep belief networks. Compared with neural networks configured by a pure grid search, we find that random search over the same domain is able to find models that are as good or better within a small fraction of the computation time. Granting random search the same computational budget, random search finds better models by effectively searching a larger, less promising configuration space. Compared with deep belief networks configured by a thoughtful combination of manual search and grid search, purely random search over the same 32-dimensional configuration space found statistically equal performance on four of seven data sets, and superior performance on one of seven. A Gaussian process analysis of the function from hyper-parameters to validation set performance reveals that for most data sets only a few of the hyper-parameters really matter, but that different hyper-parameters are important on different data sets. This phenomenon makes grid search a poor choice for configuring algorithms for new data sets. Our analysis casts some light on why recent \"High Throughput\" methods achieve surprising success--they appear to search through a large number of hyper-parameters because most hyper-parameters do not matter much. We anticipate that growing interest in large hierarchical models will place an increasing burden on techniques for hyper-parameter optimization; this work shows that random search is a natural baseline against which to judge progress in the development of adaptive (sequential) hyper-parameter optimization algorithms."
]
} |
1512.05246 | 2218408410 | Most deep architectures for image classification–even those that are trained to classify a large number of diverse categories–learn shared image representations with a single model. Intuitively, however, categories that are more similar should share more information than those that are very different. While hierarchical deep networks address this problem by learning separate features for subsets of related categories, current implementations require simplified models using fixed architectures specified via heuristic clustering methods. Instead, we propose Blockout, a method for regularization and model selection that simultaneously learns both the model architecture and parameters. A generalization of Dropout, our approach gives a novel parametrization of hierarchical architectures that allows for structure learning via back-propagation. To demonstrate its utility, we evaluate Blockout on the CIFAR and ImageNet datasets, demonstrating improved classification accuracy, better regularization performance, faster training, and the clear emergence of hierarchical network structures. | A parallel but related task to model selection is regularization. A network with too much capacity (e.g. with too many parameters) can easily overfit without sufficient training data, resulting in poor generalization performance. While the size of the model could be reduced, an easier and often more effective approach is to use regularization. Common methods include imposing constraints on the weights (e.g. through convolution or weight decay), rescaling or whitening internal representations for better conditioning @cite_8 @cite_24 , or randomly perturbing activations for improved robustness and better generalizability @cite_6 @cite_11 @cite_9 . | {
"cite_N": [
"@cite_8",
"@cite_9",
"@cite_6",
"@cite_24",
"@cite_11"
],
"mid": [
"2963857170",
"2152722485",
"1904365287",
"1836465849",
"4919037"
],
"abstract": [
"We introduce Natural Neural Networks, a novel family of algorithms that speed up convergence by adapting their internal representation during training to improve conditioning of the Fisher matrix. In particular, we show a specific example that employs a simple and efficient reparametrization of the neural network weights by implicitly whitening the representation obtained at each layer, while preserving the feed-forward computation of the network. Such networks can be trained efficiently via the proposed Projected Natural Gradient Descent algorithm (PRONG), which amortizes the cost of these reparametrizations over many parameter updates and is closely related to the Mirror Descent online learning algorithm. We highlight the benefits of our method on both unsupervised and supervised learning tasks, and showcase its scalability by training on the large-scale ImageNet Challenge dataset.",
"Dropout and other feature noising schemes control overfitting by artificially corrupting the training data. For generalized linear models, dropout performs a form of adaptive regularization. Using this viewpoint, we show that the dropout regularizer is first-order equivalent to an L2 regularizer applied after scaling the features by an estimate of the inverse diagonal Fisher information matrix. We also establish a connection to AdaGrad, an online learning algorithm, and find that a close relative of AdaGrad operates by repeatedly solving linear dropout-regularized problems. By casting dropout as regularization, we develop a natural semi-supervised algorithm that uses unlabeled data to create a better adaptive regularizer. We apply this idea to document classification tasks, and show that it consistently boosts the performance of dropout training, improving on state-of-the-art results on the IMDB reviews dataset.",
"When a large feedforward neural network is trained on a small training set, it typically performs poorly on held-out test data. This \"overfitting\" is greatly reduced by randomly omitting half of the feature detectors on each training case. This prevents complex co-adaptations in which a feature detector is only helpful in the context of several other specific feature detectors. Instead, each neuron learns to detect a feature that is generally helpful for producing the correct answer given the combinatorially large variety of internal contexts in which it must operate. Random \"dropout\" gives big improvements on many benchmark tasks and sets new records for speech and object recognition.",
"Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization, and in some cases eliminates the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.82% top-5 test error, exceeding the accuracy of human raters.",
"We introduce DropConnect, a generalization of Dropout (Hinton et al., 2012), for regularizing large fully-connected layers within neural networks. When training with Dropout, a randomly selected subset of activations are set to zero within each layer. DropConnect instead sets a randomly selected subset of weights within the network to zero. Each unit thus receives input from a random subset of units in the previous layer. We derive a bound on the generalization performance of both Dropout and DropConnect. We then evaluate DropConnect on a range of datasets, comparing to Dropout, and show state-of-the-art results on several image recognition benchmarks by aggregating multiple DropConnect-trained models."
]
} |
1512.05228 | 2951846589 | Radio frequency identification (RFID) technology has been widely used in missing tag detection to reduce and avoid inventory shrinkage. In this application, promptly finding out the missing event is of paramount importance. However, existing missing tag detection protocols cannot efficiently handle the presence of a large number of unexpected tags whose IDs are not known to the reader, which shackles the time efficiency. To deal with the problem of detecting missing tags in the presence of unexpected tags, this paper introduces a two-phase Bloom filter-based missing tag detection protocol (BMTD). The proposed BMTD exploits Bloom filter in sequence to first deactivate the unexpected tags and then test the membership of the expected tags, thus dampening the interference from the unexpected tags and considerably reducing the detection time. Moreover, the theoretical analysis of the protocol parameters is performed to minimize the detection time of the proposed BMTD and achieve the required reliability simultaneously. Extensive experiments are then conducted to evaluate the performance of the proposed BMTD. The results demonstrate that the proposed BMTD significantly outperforms the state-of-the-art solutions. | Extensive research efforts have been devoted to detecting missing tags by using probabilistic methods @cite_23 @cite_19 @cite_6 @cite_20 and deterministic methods @cite_2 @cite_3 @cite_32 . Next, we briefly review the existing solutions to the missing tag detection problem. | {
"cite_N": [
"@cite_32",
"@cite_6",
"@cite_3",
"@cite_19",
"@cite_23",
"@cite_2",
"@cite_20"
],
"mid": [
"2034244766",
"2000922528",
"2143946380",
"2059996234",
"2128935842",
"2102471667",
"1586015882"
],
"abstract": [
"Radio Frequency Identification (RFID) technology has been widely used in inventory management in many scenarios, e.g., warehouses, retail stores, hospitals, etc. This paper investigates a challenging problem of complete identification of missing tags in large-scale RFID systems. Although this problem has attracted extensive attention from academy and industry, the existing work can hardly satisfy the stringent real-time requirements. In this paper, a Slot Filter-based Missing Tag Identification (SFMTI) protocol is proposed to reconcile some expected collision slots into singleton slots and filter out the expected empty slots as well as the unreconcilable collision slots, thereby achieving the improved time-efficiency. The theoretical analysis is conducted to minimize the execution time of the proposed SFMTI. We then propose a cost-effective method to extend SFMTI to the multi-reader scenarios. The extensive simulation experiments and performance results demonstrate that the proposed SFMTI protocol outperforms the most promising Iterative ID-free Protocol (IIP) by reducing nearly 45% of the required execution time, and is just within a factor of 1.18 from the lower bound of the minimum execution time.",
"Radio frequency identification (RFID) technologies are poised to revolutionize retail, warehouse, and supply chain management. One of their interesting applications is to automatically detect missing tags in a large storage space, which may have to be performed frequently to catch any missing event such as theft in time. Because RFID systems typically work under low-rate channels, past research has focused on reducing execution time of a detection protocol to prevent excessively long protocol execution from interfering normal inventory operations. However, when active tags are used for a large spatial coverage, energy efficiency becomes critical in prolonging the lifetime of these battery-powered tags. Furthermore, much of the existing literature assumes that the channel between a reader and tags is reliable, which is not always true in reality because of noise interference in the environment. Given these concerns, this paper makes three contributions. First, we propose a novel protocol design that considers both energy efficiency and time efficiency. It achieves multifold reduction in both energy cost and execution time when compared to the best existing work. Second, we reveal a fundamental energy-time tradeoff in missing-tag detection, which can be flexibly controlled through a couple of system parameters in order to achieve desirable performance. Third, we extend our protocol design to consider channel error under two different models. We find that energy time cost will be higher in unreliable channel conditions, but the energy-time tradeoff relation persists.",
"RFID (radio-frequency identification) is an emerging technology with extensive applications such as transportation and logistics, object tracking, and inventory management. How to quickly identify the missing RFID tags and thus their associated objects is a practically important problem in many large-scale RFID systems. This paper presents three novel methods to quickly identify the missing tags in a large-scale RFID system of thousands of tags. Our protocols can reduce the time for identifying all the missing tags by up to 75% in comparison to the state of the art.",
"RFID (radio frequency identification) technologies are poised to revolutionize retail, warehouse and supply chain management. One of their interesting applications is to automatically detect missing tags (and the associated objects) in a large storage space. In order to timely catch any missing event such as theft, the detection operation may have to be performed frequently. Because RFID systems typically work under low-rate channels, past research has focused on reducing execution time of a detection protocol, in order to prevent excessively-long protocol execution from interfering normal inventory operations. However, when active tags are used to provide a large spatial coverage, energy efficiency becomes critical in prolonging the lifetime of these battery-powered tags. Existing literature lacks thorough study on how to conserve energy in the process of missing-tag detection and how to jointly optimize energy efficiency and time efficiency. This paper makes two important contributions: First, we propose a novel protocol design that takes both energy efficiency and time efficiency into consideration. It achieves multi-fold reduction in both energy cost and execution time when comparing with the best existing work. In some cases, the reduction is more than an order of magnitude. Second, we reveal a fundamental energy-time tradeoff in missing-tag detection. Through our analytical framework, we are able to flexibly control the tradeoff through a couple of system parameters in order to achieve desirable performance.",
"As RFID tags become more widespread, new approaches for managing larger numbers of RFID tags will be needed. In this paper, we consider the problem of how to accurately and efficiently monitor a set of RFID tags for missing tags. Our approach accurately monitors a set of tags without collecting IDs from them. It differs from traditional research which focuses on faster ways for collecting IDs from every tag. We present two monitoring protocols, one designed for a trusted reader and another for an untrusted reader.",
"Comparing with the classical barcode system, RFID extends the operational distance from inches to a number of feet (passive RFID tags) or even hundreds of feet (active RFID tags). Their wireless transmission, processing and storage capabilities enable them to support the full automation of many inventory management functions in the industry. This paper studies the practically important problem of monitoring a large set of RFID tags and identifying the missing ones - the objects that the missing tags are associated with are likely to be missing, too. This monitoring function may need to be executed frequently and therefore should be made efficient in terms of execution time, in order to avoid disruption of normal inventory operations. Based on probabilistic methods, we design a series of missing-tag identification protocols that employ novel techniques to reduce the execution time. Our best protocol reduces the time for detecting the missing tags by 88.9% or more, when comparing with existing protocols.",
"RFID systems have been deployed to detect missing products by affixing them with cheap passive RFID tags and monitoring them with RFID readers. Existing missing tag detection protocols require the tag population to contain only those tags whose IDs are already known to the reader. However, in reality, tag populations often contain tags with unknown IDs, called unexpected tags, and cause unexpected false positives, i.e., due to them, missing tags are detected as present. We take the first step towards addressing the problem of detecting the missing tags from a population that contains unexpected tags. Our protocol, RUN, mitigates the adverse effects of unexpected false positives by executing multiple frames with different seeds. It minimizes the missing tag detection time by first estimating the number of unexpected tags and then using it along with the false positive probability to obtain optimal frame sizes and number of times Aloha frames should be executed. RUN works with multiple readers with overlapping regions. It is easy to deploy because it is implemented on readers as a software module and does not require modifications to tags or to the communication protocol between tags and readers. We implemented RUN along with four major missing tag detection protocols and the fastest tag ID collection protocol and compared them side-by-side. Our experimental results show that RUN always achieves the required reliability whereas the best existing protocol achieves a maximum reliability of only 67%."
]
} |
1512.05228 | 2951846589 | Radio frequency identification (RFID) technology has been widely used in missing tag detection to reduce and avoid inventory shrinkage. In this application, promptly finding out the missing event is of paramount importance. However, existing missing tag detection protocols cannot efficiently handle the presence of a large number of unexpected tags whose IDs are not known to the reader, which shackles the time efficiency. To deal with the problem of detecting missing tags in the presence of unexpected tags, this paper introduces a two-phase Bloom filter-based missing tag detection protocol (BMTD). The proposed BMTD exploits Bloom filter in sequence to first deactivate the unexpected tags and then test the membership of the expected tags, thus dampening the interference from the unexpected tags and considerably reducing the detection time. Moreover, the theoretical analysis of the protocol parameters is performed to minimize the detection time of the proposed BMTD and achieve the required reliability simultaneously. Extensive experiments are then conducted to evaluate the performance of the proposed BMTD. The results demonstrate that the proposed BMTD significantly outperforms the state-of-the-art solutions. | The objective of deterministic protocols is to exactly identify which tags are absent. Li et al. develop a series of protocols in @cite_2 that intend to reduce radio collisions and identify a tag not at the ID level but at the bit level. Subsequently, Zhang et al. propose another series of deterministic protocols in @cite_3 , of which the main idea is to store the bitmap of tag responses in all rounds and compare them to determine the present and absent tags. However, how to configure the protocol parameters is not theoretically analyzed. More recently, Liu et al. @cite_32 enhance this work by reconciling both 2-collision and 3-collision slots and filtering the empty slots such that the time efficiency can be improved. None of the existing deterministic protocols, however, have been designed to work in the chaotic environment with unexpected tags. | {
"cite_N": [
"@cite_32",
"@cite_3",
"@cite_2"
],
"mid": [
"2034244766",
"2143946380",
"2102471667"
],
"abstract": [
"Radio Frequency Identification (RFID) technology has been widely used in inventory management in many scenarios, e.g., warehouses, retail stores, hospitals, etc. This paper investigates a challenging problem of complete identification of missing tags in large-scale RFID systems. Although this problem has attracted extensive attention from academy and industry, the existing work can hardly satisfy the stringent real-time requirements. In this paper, a Slot Filter-based Missing Tag Identification (SFMTI) protocol is proposed to reconcile some expected collision slots into singleton slots and filter out the expected empty slots as well as the unreconcilable collision slots, thereby achieving the improved time-efficiency. The theoretical analysis is conducted to minimize the execution time of the proposed SFMTI. We then propose a cost-effective method to extend SFMTI to the multi-reader scenarios. The extensive simulation experiments and performance results demonstrate that the proposed SFMTI protocol outperforms the most promising Iterative ID-free Protocol (IIP) by reducing nearly 45% of the required execution time, and is just within a factor of 1.18 from the lower bound of the minimum execution time.",
"RFID (radio-frequency identification) is an emerging technology with extensive applications such as transportation and logistics, object tracking, and inventory management. How to quickly identify the missing RFID tags and thus their associated objects is a practically important problem in many large-scale RFID systems. This paper presents three novel methods to quickly identify the missing tags in a large-scale RFID system of thousands of tags. Our protocols can reduce the time for identifying all the missing tags by up to 75% in comparison to the state of the art.",
"Comparing with the classical barcode system, RFID extends the operational distance from inches to a number of feet (passive RFID tags) or even hundreds of feet (active RFID tags). Their wireless transmission, processing and storage capabilities enable them to support the full automation of many inventory management functions in the industry. This paper studies the practically important problem of monitoring a large set of RFID tags and identifying the missing ones - the objects that the missing tags are associated with are likely to be missing, too. This monitoring function may need to be executed frequently and therefore should be made efficient in terms of execution time, in order to avoid disruption of normal inventory operations. Based on probabilistic methods, we design a series of missing-tag identification protocols that employ novel techniques to reduce the execution time. Our best protocol reduces the time for detecting the missing tags by 88.9% or more, when comparing with existing protocols."
]
} |
1512.05219 | 2950649575 | When learning a hidden Markov model (HMM), sequential observations can often be complemented by real-valued summary response variables generated from the path of hidden states. Such settings arise in numerous domains, including many applications in biology, like motif discovery and genome annotation. In this paper, we present a flexible framework for jointly modeling both latent sequence features and the functional mapping that relates the summary response variables to the hidden state sequence. The algorithm is compatible with a rich set of mapping functions. Results show that the availability of additional continuous response variables can simultaneously improve the annotation of the sequential observations and yield good prediction performance in both synthetic data and real-world datasets. | Previous research has also examined the use of continuous side information to complement sequential observations when learning latent features. Input-output HMMs @cite_21 and TRBMs @cite_11 consider the task of supervised learning where the observed variables influence both latent and output variables. However, these approaches assume that input observations for each time point are known. GCHMMs @cite_14 consider an HMM where transition probabilities are associated with side information. CRBMs @cite_23 incorporate side information into the factorization of weights in conditional RBMs. The RUPA method @cite_0 learns an HMM with additional responses via augmented visit aggregation. However, point estimate approximation over expected visits ignores the true distribution of visits. As a result, convergence problems may arise, especially when the distribution of visits has multiple modes. In contrast, our method uses Viterbi path integration to directly approximate the true posterior of visits. In summary, we consider the responses as both extra information for learning a latent representation of sequential observations, and a variable to predict. Considering the dual task simultaneously enforces interpretability, while maintaining predictive power. | {
"cite_N": [
"@cite_14",
"@cite_21",
"@cite_0",
"@cite_23",
"@cite_11"
],
"mid": [
"2006844123",
"",
"2951441074",
"2115096495",
"2097006534"
],
"abstract": [
"The purpose of this study is to leverage modern technology (mobile or web apps) to enrich epidemiology data and infer the transmission of disease. We develop hierarchical Graph-Coupled Hidden Markov Models (hGCHMMs) to simultaneously track the spread of infection in a small cell phone community and capture person-specific infection parameters by leveraging a link prior that incorporates additional covariates. In this paper we investigate two link functions, the beta-exponential link and sigmoid link, both of which allow the development of a principled Bayesian hierarchical framework for disease transmission. The results of our model allow us to predict the probability of infection for each person on each day, and also to infer personal physical vulnerability and the relevant association with covariates. We demonstrate our approach theoretically and experimentally on both simulation data and real epidemiological records.",
"",
"We consider the task of learning mappings from sequential data to real-valued responses. We present and evaluate an approach to learning a type of hidden Markov model (HMM) for regression. The learning process involves inferring the structure and parameters of a conventional HMM, while simultaneously learning a regression model that maps features that characterize paths through the model to continuous responses. Our results, in both synthetic and biological domains, demonstrate the value of jointly learning the two components of our approach.",
"The Conditional Restricted Boltzmann Machine (CRBM) is a recently proposed model for time series that has a rich, distributed hidden state and permits simple, exact inference. We present a new model, based on the CRBM that preserves its most important computational properties and includes multiplicative three-way interactions that allow the effective interaction weight between two units to be modulated by the dynamic state of a third unit. We factor the three-way weight tensor implied by the multiplicative model, reducing the number of parameters from O(N3) to O(N2). The result is an efficient, compact model whose effectiveness we demonstrate by modeling human motion. Like the CRBM, our model can capture diverse styles of motion with a single set of parameters, and the three-way interactions greatly improve the model's ability to blend motion styles or to transition smoothly among them.",
"We present a type of Temporal Restricted Boltzmann Machine that defines a probability distribution over an output sequence conditional on an input sequence. It shares the desirable properties of RBMs: efficient exact inference, an exponentially more expressive latent state than HMMs, and the ability to model nonlinear structure and dynamics. We apply our model to a challenging real-world graphics problem: facial expression transfer. Our results demonstrate improved performance over several baselines modeling high-dimensional 2D and 3D data."
]
} |
1512.05256 | 2210954309 | One of the major challenges in applications related to social networks, computational biology, collaboration networks etc., is to efficiently search for similar patterns in their underlying graphs. These graphs are typically noisy and contain thousands of vertices and millions of edges. In many cases, the graphs are unlabelled and the notion of similarity is also not well defined. We study the problem of searching an induced subgraph in a large target graph that is most similar to the given query graph. We assume that the query graph and target graph are undirected and unlabelled. We use graphlet kernels [1] to define graph similarity. Graphlet kernels are known to perform better than other kernels in different applications. | Similarity based graph searching has been studied in the past under various settings. In many of the previous works, it is assumed that the graphs are labeled. In one class of problems, a large database of graphs is given and the goal is to find the most similar match in the database with respect to the given query graph @cite_0 @cite_8 @cite_27 @cite_2 @cite_30 @cite_12 . In the second class, given a target graph and a query graph, a subgraph of the target graph that is most similar to the query graph needs to be identified @cite_19 @cite_16 @cite_7 @cite_33 . Different notions of similarity were also explored in the past for these classes of problems. | {
"cite_N": [
"@cite_30",
"@cite_33",
"@cite_7",
"@cite_8",
"@cite_0",
"@cite_19",
"@cite_27",
"@cite_2",
"@cite_16",
"@cite_12"
],
"mid": [
"2083235381",
"2016057533",
"",
"2110034858",
"2050137450",
"2020657191",
"2012066443",
"2101511590",
"1509240356",
"1974124880"
],
"abstract": [
"With the emergence of new applications, e.g., computational biology, new software engineering techniques, social networks, etc., more data is in the form of graphs. Locating occurrences of a query graph in a large database graph is an important research topic. Due to the existence of noise (e.g., missing edges) in the large database graph, we investigate the problem of approximate subgraph indexing, i.e., finding the occurrences of a query graph in a large database graph with (possible) missing edges. The SAPPER method is proposed to solve this problem. Utilizing the hybrid neighborhood unit structures in the index, SAPPER takes advantage of pre-generated random spanning trees and a carefully designed graph enumeration order. Real and synthetic data sets are employed to demonstrate the efficiency and scalability of our approximate subgraph indexing method.",
"Given a query graph @math q and a data graph @math G, subgraph similarity matching is to retrieve all matches of @math q in @math G with the number of missing edges bounded by a given threshold @math ∈. Many works have been conducted to study the problem of subgraph similarity matching due to its ability to handle applications involved with noisy or erroneous graph data. In practice, a data graph can be extremely large, e.g., a web-scale graph containing hundreds of millions of vertices and billions of edges. The state-of-the-art approaches employ centralized algorithms to process the subgraph similarity queries, and thus, they are infeasible for such a large graph due to the limited computational power and storage space of a centralized server. To address this problem, in this paper, we investigate subgraph similarity matching for a web-scale graph deployed in a distributed environment. We propose distributed algorithms and optimization techniques that exploit the properties of subgraph similarity matching, so that we can well utilize the parallel computing power and lower the communication cost among the distributed data centers for query processing. Specifically, we first relax and decompose @math q into a minimum number of sub-queries. Next, we send each sub-query to conduct the exact matching in parallel. Finally, we schedule and join the exact matches to obtain final query answers. Moreover, our workload-balance strategy further speeds up the query processing. Our experimental results demonstrate the feasibility of our proposed approach in performing subgraph similarity matching over web-scale graph data.",
"",
"Graph has become increasingly important in modelling complicated structures and schemaless data such as proteins, chemical compounds, and XML documents. Given a graph query, it is desirable to retrieve graphs quickly from a large database via graph-based indices. In this paper, we investigate the issues of indexing graphs and propose a novel solution by applying a graph mining technique. Different from the existing path-based methods, our approach, called gIndex, makes use of frequent substructure as the basic indexing feature. Frequent substructures are ideal candidates since they explore the intrinsic characteristics of the data and are relatively stable to database updates. To reduce the size of index structure, two techniques, size-increasing support constraint and discriminative fragments, are introduced. Our performance study shows that gIndex has 10 times smaller index size, but achieves 3--10 times better performance in comparison with a typical path-based method, GraphGrep. The gIndex approach not only provides an elegant solution to the graph indexing problem, but also demonstrates how database indexing and query processing can benefit from data mining, especially frequent pattern mining. Furthermore, the concepts developed here can be applied to indexing sequences, trees, and other complicated structures as well.",
"Modern search engines answer keyword-based queries extremely efficiently. The impressive speed is due to clever inverted index structures, caching, a domain-independent knowledge of strings, and thousands of machines. Several research efforts have attempted to generalize keyword search to keytree and keygraph searching, because trees and graphs have many applications in next-generation database systems. This paper surveys both algorithms and applications, giving some emphasis to our own work.",
"Complex social and information network search becomes important with a variety of applications. At the core of these applications lies a common and critical problem: Given a labeled network and a query graph, how to efficiently search the query graph in the target network. The presence of noise and the incomplete knowledge about the structure and content of the target network make it unrealistic to find an exact match. Rather, it is more appealing to find the top-k approximate matches. In this paper, we propose a neighborhood-based similarity measure that could avoid costly graph isomorphism and edit distance computation. Under this new measure, we prove that subgraph similarity search is NP hard, while graph similarity match is polynomial. By studying the principles behind this measure, we found an information propagation model that is able to convert a large network into a set of multidimensional vectors, where sophisticated indexing and similarity search algorithms are available. The proposed method, called Ness (Neighborhood Based Similarity Search), is appropriate for graphs with low automorphism and high noise, which are common in many social and information networks. Ness is not only efficient, but also robust against structural noise and information loss. Empirical results show that it can quickly and accurately find high-quality matches in large networks, with negligible cost.",
"Currently, a huge amount of biological data can be naturally represented by graphs, e.g., protein interaction networks, gene regulatory networks, etc. The need for indexing large graphs is an urgent research problem of great practical importance. The main challenge is size. Each graph may contain thousands (or more) vertices. Most of the previous work focuses on indexing a set of small or medium sized database graphs (with only tens of vertices) and finding whether a query graph occurs in any of these. In this paper, we are interested in finding all the matches of a query graph in a given large graph of thousands of vertices, which is a very important task in many biological applications. This increases the complexity significantly. We propose a novel distance measurement which reintroduces the idea of frequent substructures in a single large graph. We devise the novel structure distance based approach (GADDI) to efficiently find matches of the query graph. GADDI is further optimized by the use of a dynamic matching scheme to minimize redundant calculations. Last but not least, a number of real and synthetic data sets are used to evaluate the efficiency and scalability of our proposed method.",
"Network querying is a growing domain with vast applications ranging from screening compounds against a database of known molecules to matching sub-networks across species. Graph indexing is a powerful method for searching a large database of graphs. Most graph indexing methods to date tackle the exact matching (isomorphism) problem, limiting their applicability to specific instances in which such matches exist. Here we provide a novel graph indexing method to cope with the more general, inexact matching problem. Our method, SIGMA, builds on approximating a variant of the set-cover problem that concerns overlapping multi-sets. We extensively test our method and compare it to a baseline method and to the state-of-the-art Grafil. We show that SIGMA outperforms both, providing higher pruning power in all the tested scenarios.",
"It is increasingly common to find real-life data represented as networks of labeled, heterogeneous entities. To query these networks, one often needs to identify the matches of a given query graph in a (typically large) network modeled as a target graph. Due to noise and the lack of fixed schema in the target graph, the query graph can substantially differ from its matches in the target graph in both structure and node labels, thus bringing challenges to the graph querying tasks. In this paper, we propose NeMa (Network Match), a neighborhood-based subgraph matching technique for querying real-life networks. (1) To measure the quality of the match, we propose a novel subgraph matching cost metric that aggregates the costs of matching individual nodes, and unifies both structure and node label similarities. (2) Based on the metric, we formulate the minimum cost subgraph matching problem. Given a query graph and a target graph, the problem is to identify the (top-k) matches of the query graph with minimum costs in the target graph. We show that the problem is NP-hard, and also hard to approximate. (3) We propose a heuristic algorithm for solving the problem based on an inference model. In addition, we propose optimization techniques to improve the efficiency of our method. (4) We empirically verify that NeMa is both effective and efficient compared to the keyword search and various state-of-the-art graph querying techniques.",
"Structured data including sets, sequences, trees and graphs, pose significant challenges to fundamental aspects of data management such as efficient storage, indexing, and similarity search. With the fast accumulation of graph databases, similarity search in graph databases has emerged as an important research topic. Graph similarity search has applications in a wide range of domains including cheminformatics, bioinformatics, sensor network management, social network management, and XML documents, among others. Most of the current graph indexing methods focus on subgraph query processing, i.e. determining the set of database graphs that contains the query graph and hence do not directly support similarity search. In data mining and machine learning, various graph kernel functions have been designed to capture the intrinsic similarity of graphs. Though successful in constructing accurate predictive and classification models for supervised learning, graph kernel functions have (i) high computational complexity and (ii) non-trivial difficulty to be indexed in a graph database. Our objective is to bridge graph kernel function and similarity search in graph databases by proposing (i) a novel kernel-based similarity measurement and (ii) an efficient indexing structure for graph data management. Our method of similarity measurement builds upon local features extracted from each node and their neighboring nodes in graphs. A hash table is utilized to support efficient storage and fast search of the extracted local features. Using the hash table, a graph kernel function is defined to capture the intrinsic similarity of graphs and for fast similarity query processing. We have implemented our method, which we have named G-hash, and have demonstrated its utility on large chemical graph databases. Our results show that the G-hash method achieves state-of-the-art performance for k-nearest neighbor (k-NN) classification. 
Most importantly, the new similarity measurement and the index structure are scalable to large databases with smaller indexing size, faster indexing construction time, and faster query processing time as compared to state-of-the-art indexing methods such as C-tree, gIndex, and GraphGrep."
]
} |
1512.04839 | 2621197517 | Motivated by applications in graph drawing and information visualization, we examine the planar split thickness of a graph, that is, the smallest k such that the graph is k-splittable into a planar graph. A k-split operation substitutes a vertex v by at most k new vertices such that each neighbor of v is connected to at least one of the new vertices. We first examine the planar split thickness of complete graphs, complete bipartite graphs, multipartite graphs, bounded degree graphs, and genus-1 graphs. We then prove that it is NP-hard to recognize graphs that are 2-splittable into a planar graph, and show that one can approximate the planar split thickness of a graph within a constant factor. If the treewidth is bounded, then we can even verify k-splittability in linear time, for a constant k. | The problem of determining the planar split thickness of a graph @math is related to the graph thickness @cite_14 , empire-map @cite_21 , @math -splitting @cite_1 and planar emulator @cite_2 problems. The thickness of a graph @math is the minimum integer @math such that @math admits an edge-partition into @math planar subgraphs. One can assume that these planar subgraphs are obtained by applying a @math -split operation at each vertex. Hence, thickness is an upper bound on the planar split thickness, e.g., the thickness and thus the planar split thickness of graphs with treewidth @math and maximum-degree-4 are at most @math @cite_4 and 2 @cite_33 , respectively. Analogously, the planar split thickness of a graph is bounded by its arboricity, that is, the minimum number of forests into which its edges can be partitioned. We will later show that both parameters also provide an asymptotic lower bound on the planar split thickness. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_33",
"@cite_21",
"@cite_1",
"@cite_2"
],
"mid": [
"2322503310",
"2070606919",
"2025485890",
"1985000598",
"2111635834",
"2049547185"
],
"abstract": [
"",
"Consider a drawing of a graph G in the plane such that crossing edges are coloured differently. The minimum number of colours, taken over all drawings of G, is the classical graph parameter thickness. By restricting the edges to be straight, we obtain the geometric thickness. By additionally restricting the vertices to be in convex position, we obtain the book thickness. This paper studies the relationship between these parameters and treewidth. Our first main result states that for graphs of treewidth k, the maximum thickness and the maximum geometric thickness both equal ⌈k/2⌉. This says that the lower bound for thickness can be matched by an upper bound, even in the more restrictive geometric setting. Our second main result states that for graphs of treewidth k, the maximum book thickness equals k if k ≤ 2 and equals k + 1 if k ≥ 3. This refutes a conjecture of Ganley and Heath [Discrete Appl. Math. 109(3):215-221, 2001]. Analogous results are proved for outerthickness, arboricity, and star-arboricity.",
"We prove that the geometric thickness of graphs whose maximum degree is no more than four is two. All of our algorithms run in O(n) time, where n is the number of vertices in the graph. In our proofs, we present an embedding algorithm for graphs with maximum degree three that uses an n x n grid and a more complex algorithm for embedding a graph with maximum degree four. We also show a variation using orthogonal edges for maximum degree four graphs that also uses an n x n grid. The results have implications in graph theory, graph drawing, and VLSI design.",
"The following statement is not true: \"Every map can have one of four colors assigned to each country so that every pair of countries with a border arc in common receives different colors.\" But isn't this the statement of the famous Four Color Theorem that was proved about 15 years ago using lots of computer checking? Has a flaw been found in that massive piece of work? FIGURE 1 shows a small example of a map that needs five colors if every pair of adjacent countries is to receive different colors. The important feature is that one country (#5) is a disconnected country of two regions.",
"Given a finite, undirected, simple graph G, we are concerned with operations on G that transform it into a planar graph. We give a survey of results about such operations and related graph parameters. While there are many algorithmic results about planarization through edge deletion, the results about vertex splitting, thickness, and crossing number are mostly of a structural nature. We also include a brief section on vertex deletion. We do not consider parallel algorithms, nor do we deal with on-line algorithms.",
"We investigate the question of which graphs have planar emulators (a locally-surjective homomorphism from some finite planar graph), a problem raised already in Fellows' thesis (1985) and conceptually related to the better known planar cover conjecture by Negami (1986). For over two decades, the planar emulator problem lived poorly in a shadow of Negami's conjecture, which is still open, as the two were considered equivalent. But, at the end of 2008, a surprising construction by Rieck and Yamashita falsified the natural \"planar emulator conjecture\", and thus opened a whole new research field. We present further results and constructions which show how far the planar-emulability concept is from planar-coverability, and that the traditional idea of likening it to projective embeddability is actually very out-of-place. We also present several positive partial characterizations of planar-emulable graphs."
]
} |
1512.04839 | 2621197517 | Motivated by applications in graph drawing and information visualization, we examine the planar split thickness of a graph, that is, the smallest k such that the graph is k-splittable into a planar graph. A k-split operation substitutes a vertex v by at most k new vertices such that each neighbor of v is connected to at least one of the new vertices. We first examine the planar split thickness of complete graphs, complete bipartite graphs, multipartite graphs, bounded degree graphs, and genus-1 graphs. We then prove that it is NP-hard to recognize graphs that are 2-splittable into a planar graph, and show that one can approximate the planar split thickness of a graph within a constant factor. If the treewidth is bounded, then we can even verify k-splittability in linear time, for a constant k. | A rich body of literature considers the planarization of non-planar graphs via vertex splitting @cite_13 @cite_9 @cite_1 @cite_19 . Here a vertex split is one of our 2-split operations. These results focus on minimizing the splitting number, i.e., the total number of vertex splits to obtain a planar graph. Tight bounds on the splitting number are known for complete graphs @cite_9 and complete bipartite graphs @cite_22 @cite_15 , but for general graphs, the problem of determining the splitting number of a graph is NP-hard @cite_13 . Note that upper bounding the splitting number does not necessarily guarantee any good upper bound on the planar split thickness, e.g., see . | {
"cite_N": [
"@cite_22",
"@cite_9",
"@cite_1",
"@cite_19",
"@cite_15",
"@cite_13"
],
"mid": [
"2064977763",
"2065149403",
"2111635834",
"",
"",
"1536302196"
],
"abstract": [
"Given a graph H, one constructs a new graph G with one fewer vertex by replacing two non-adjacent vertices u and v of H by a single vertex that is adjacent to all vertices of H adjacent to either u or v. We say that G is obtained from H by a vertex identification. The splitting number sp(G) of a graph G is defined as the smallest number k such that G can be obtained from a planar graph by k vertex identifications.",
"If a given graph G can be obtained by s vertex identifications from a suitable planar graph and s is the minimum number for which this is possible, then s is called the splitting number of G. Here a formula for the splitting number of the complete graph is derived.",
"Given a finite, undirected, simple graph G, we are concerned with operations on G that transform it into a planar graph. We give a survey of results about such operations and related graph parameters. While there are many algorithmic results about planarization through edge deletion, the results about vertex splitting, thickness, and crossing number are mostly of a structural nature. We also include a brief section on vertex deletion. We do not consider parallel algorithms, nor do we deal with on-line algorithms.",
"",
"",
"We consider two graph invariants that are used as a measure of nonplanarity: the splitting number of a graph and the size of a maximum planar subgraph. The splitting number of a graph G is the smallest integer k ≥ 0, such that a planar graph can be obtained from G by k splitting operations. Such operation replaces a vertex v by two nonadjacent vertices v1 and v2, and attaches the neighbors of v either to v1 or to v2. We prove that the splitting number decision problem is NP-complete, even when restricted to cubic graphs. We obtain as a consequence that planar subgraph remains NP-complete when restricted to cubic graphs. Note that NP-completeness for cubic graphs also implies NP-completeness for graphs not containing a subdivision of K5 as a subgraph."
]
} |
1512.04839 | 2621197517 | Motivated by applications in graph drawing and information visualization, we examine the planar split thickness of a graph, that is, the smallest k such that the graph is k-splittable into a planar graph. A k-split operation substitutes a vertex v by at most k new vertices such that each neighbor of v is connected to at least one of the new vertices. We first examine the planar split thickness of complete graphs, complete bipartite graphs, multipartite graphs, bounded degree graphs, and genus-1 graphs. We then prove that it is NP-hard to recognize graphs that are 2-splittable into a planar graph, and show that one can approximate the planar split thickness of a graph within a constant factor. If the treewidth is bounded, then we can even verify k-splittability in linear time, for a constant k. | Knauer and Ueckerdt @cite_29 studied the folded covering number, which is equivalent to our problem, and stated several results for splitting graphs into a star forest, a caterpillar forest, or an interval graph. They showed that planar graphs are 4-splittable into a star forest, and planar bipartite graphs as well as outerplanar graphs are 3-splittable into a star forest. It follows from Scheinerman and West @cite_3 that planar graphs are 3-splittable into an interval graph and 4-splittable into a caterpillar forest, while outerplanar graphs are 2-splittable into an interval graph. | {
"cite_N": [
"@cite_29",
"@cite_3"
],
"mid": [
"2139849594",
"2071261821"
],
"abstract": [
"We consider the problem of covering an input graph H with graphs from a fixed covering class G. The classical covering number of H with respect to G is the minimum number of graphs from G needed to cover the edges of H without covering non-edges of H. We introduce a unifying notion of three covering parameters with respect to G, two of which are novel concepts only considered in special cases before: the local and the folded covering number. Each parameter measures \"how far\" H is from G in a different way. Whereas the folded covering number has been investigated thoroughly for some covering classes, e.g., interval graphs and planar graphs, the local covering number has received little attention. We provide new bounds on each covering number with respect to the following covering classes: linear forests, star forests, caterpillar forests, and interval graphs. The classical graph parameters that result this way are interval number, track number, linear arboricity, star arboricity, and caterpillar arboricity. As input graphs we consider graphs of bounded degeneracy, bounded degree, bounded tree-width or bounded simple tree-width, as well as outerplanar, planar bipartite, and planar graphs. For several pairs of an input class and a covering class we determine exactly the maximum ordinary, local, and folded covering number of an input graph with respect to that covering class.",
"Suppose each vertex of a graph G is assigned a subset of the real line consisting of at most t closed intervals. This assignment is called a t-interval representation of G when vertex v is adjacent to vertex w if and only if some interval for v intersects some interval for w. The interval number i(G) of a graph G is the smallest number t such that G has a t-interval representation. It is proved that i(G) ≤ 3 whenever G is planar and that this bound is the best possible. The related concepts of displayed interval number and depth-r interval number are discussed and their maximum values for certain classes of planar graphs are found."
]
} |
1512.05059 | 2266368229 | Kernel principal component analysis (KPCA) provides a concise set of basis vectors which capture non-linear structures within large data sets, and is a central tool in data analysis and learning. To allow for non-linear relations, typically a full @math kernel matrix is constructed over @math data points, but this requires too much space and time for large values of @math . Techniques such as the Nyström method and random feature maps can help towards this goal, but they do not explicitly maintain the basis vectors in a stream and take more space than desired. We propose a new approach for streaming KPCA which maintains a small set of basis elements in a stream, requiring space only logarithmic in @math , and also improves the dependence on the error parameter. Our technique combines together random feature maps with recent advances in matrix sketching, it has guaranteed spectral norm error bounds with respect to the original kernel matrix, and it compares favorably in practice to state-of-the-art approaches. | Techniques in this group update and augment the eigenspace of kernel PCA without storing all training data. @cite_18 adapted incremental PCA @cite_24 to maintain a set of linearly independent training data points and compute top @math eigenvectors such that they preserve a @math -fraction (for a threshold @math ) of the total energy of the eigenspace. However, this method suffers from two major drawbacks. First, the set of linearly independent data points can grow large and unpredictably, perhaps exceeding the capacity of the memory. Second, under adversarial (or structured sparse) data, intermediate approximations of the eigenspace can compound in error, giving bad performance @cite_9 . Some of these issues can be addressed using online regret analysis assuming incoming data is drawn iid (e.g., @cite_25 ). However, in the adversarial settings we consider, FD @cite_0 can be seen as the right way to formally address these issues. | {
"cite_N": [
"@cite_18",
"@cite_9",
"@cite_24",
"@cite_0",
"@cite_25"
],
"mid": [
"2107275435",
"2233419371",
"",
"2951542269",
"2059774588"
],
"abstract": [
"In this paper, a feature extraction method for online classification problems is presented by extending Kernel principal component analysis (KPCA). The proposed incremental KPCA (IKPCA) constructs a nonlinear high-dimensional feature space incrementally by not only updating eigen-axes but also adding new eigen-axes. The augmentation of a new eigen-axis is carried out when the accumulation ratio falls below a threshold value. We mathematically derive the incremental update equations of eigen-axes and the accumulation ratio without keeping all training samples. From the experimental results, we conclude that the proposed IKPCA works well as an incremental learning algorithm of a feature space in the sense that a minimum number of axes are augmented to maintain a designated accumulation ratio, and that the eigenvectors with major eigenvalues can converge closely to those of the batch type of KPCA. In addition, the recognition accuracy of IKPCA is similar to or slightly better than that of KPCA",
"Matrices have become essential data representations for many large-scale problems in data analytics, and hence matrix sketching is a critical task. Although much research has focused on improving the error size tradeoff under various sketching paradigms, the many forms of error bounds make these approaches hard to compare in theory and in practice. This paper attempts to categorize and compare the most known methods under row-wise streaming updates with provable guarantees, and then to tweak some of these methods to gain practical improvements while retaining guarantees. For instance, we observe that a simple heuristic iSVD, with no guarantees, tends to outperform all known approaches in terms of size error trade-off. We modify the best performing method with guarantees, FrequentDirections , under the size error trade-off to match the performance of iSVD and retain its guarantees. We also demonstrate some adversarial datasets where iSVD performs quite poorly. In comparing techniques in the time error trade-off, techniques based on hashing or sampling tend to perform better. In this setting, we modify the most studied sampling regime to retain error guarantee but obtain dramatic improvements in the time error trade-off. Finally, we provide easy replication of our studies on APT, a new testbed which makes available not only code and datasets, but also a computing platform with fixed environmental settings.",
"",
"We adapt a well known streaming algorithm for approximating item frequencies to the matrix sketching setting. The algorithm receives the rows of a large matrix @math one after the other in a streaming fashion. It maintains a sketch matrix @math such that for any unit vector @math , |Ax|^2 ≥ |Bx|^2 ≥ |Ax|^2 - |A|_F^2 . Sketch updates per row in @math require @math operations in the worst case. A slight modification of the algorithm allows for an amortized update time of @math operations per row. The presented algorithm stands out in that it is: deterministic, simple to implement, and elementary to prove. It also experimentally produces more accurate sketches than widely used approaches while still being computationally competitive.",
"A number of updates for density matrices have been developed recently that are motivated by relative entropy minimization problems. The updates involve a softmin calculation based on matrix logs and matrix exponentials. We show that these updates can be kernelized. This is important because the bounds provable for these algorithms are logarithmic in the feature dimension (provided that the 2-norm of feature vectors is bounded by a constant). The main problem we focus on is the kernelization of an online PCA algorithm which belongs to this family of updates."
]
} |
1512.05059 | 2266368229 | Kernel principal component analysis (KPCA) provides a concise set of basis vectors which capture non-linear structures within large data sets, and is a central tool in data analysis and learning. To allow for non-linear relations, typically a full @math kernel matrix is constructed over @math data points, but this requires too much space and time for large values of @math . Techniques such as the Nyström method and random feature maps can help towards this goal, but they do not explicitly maintain the basis vectors in a stream and take more space than desired. We propose a new approach for streaming KPCA which maintains a small set of basis elements in a stream, requiring space only logarithmic in @math , and also improves the dependence on the error parameter. Our technique combines together random feature maps with recent advances in matrix sketching, it has guaranteed spectral norm error bounds with respect to the original kernel matrix, and it compares favorably in practice to state-of-the-art approaches. | Nyström-Based Methods for Kernel PCA. Another group of methods @cite_1 @cite_22 @cite_3 @cite_6 @cite_23 , known as Nyström methods, approximate the kernel (Gram) matrix @math with a low-rank matrix @math , by sampling columns of @math . The original version @cite_1 samples @math columns with replacement as @math and estimates @math , where @math is the intersection of the sampled columns and rows; this method takes @math time and is not streaming. Later, @cite_22 used sampling with replacement and approximated @math as @math . They proved if sampling probabilities are of the form @math , then for @math and @math , a Frobenius error bound @math holds with probability @math for @math , and a spectral error bound @math holds with probability @math for @math samples. There exist conditional improvements, e.g., @cite_3 shows with @math where @math denotes the coherence of the top @math -dimensional eigenspace of @math , that @math . | {
"cite_N": [
"@cite_22",
"@cite_1",
"@cite_3",
"@cite_6",
"@cite_23"
],
"mid": [
"2160840682",
"2112545207",
"2949526110",
"",
"1483387558"
],
"abstract": [
"A problem for many kernel-based methods is that the amount of computation required to find the solution scales as O(n^3), where n is the number of training examples. We develop and analyze an algorithm to compute an easily-interpretable low-rank approximation to an n × n Gram matrix G such that computations of interest may be performed more rapidly. The approximation is of the form G_k = CW_k^+C^T, where C is a matrix consisting of a small number c of columns of G and W_k is the best rank-k approximation to W, the matrix formed by the intersection between those c columns of G and the corresponding c rows of G. An important aspect of the algorithm is the probability distribution used to randomly sample the columns; we will use a judiciously-chosen and data-dependent nonuniform probability distribution. Let ||·||_2 and ||·||_F denote the spectral norm and the Frobenius norm, respectively, of a matrix, and let G_k be the best rank-k approximation to G. We prove that by choosing O(k/ε^4) columns, ||G - CW_k^+C^T||_ξ ≤ ||G - G_k||_ξ + ε Σ_{i=1}^n G_ii^2, both in expectation and with high probability, for both ξ = 2, F, and for all k: 0 ≤ k ≤ rank(W). This approximation can be computed using O(n) additional space and time, after making two passes over the data from external storage. The relationships between this algorithm, other related matrix decompositions, and the Nyström method from integral equation theory are discussed.",
"A major problem for kernel-based predictors (such as Support Vector Machines and Gaussian processes) is that the amount of computation required to find the solution scales as O(n3), where n is the number of training examples. We show that an approximation to the eigendecomposition of the Gram matrix can be computed by the Nystrom method (which is used for the numerical solution of eigenproblems). This is achieved by carrying out an eigendecomposition on a smaller system of size m < n, and then expanding the results back up to n dimensions. The computational complexity of a predictor using this approximation is O(m2n). We report experiments on the USPS and abalone data sets and show that we can set m ≪ n without any significant decrease in the accuracy of the solution.",
"We reconsider randomized algorithms for the low-rank approximation of symmetric positive semi-definite (SPSD) matrices such as Laplacian and kernel matrices that arise in data analysis and machine learning applications. Our main results consist of an empirical evaluation of the performance quality and running time of sampling and projection methods on a diverse suite of SPSD matrices. Our results highlight complementary aspects of sampling versus projection methods; they characterize the effects of common data preprocessing steps on the performance of these algorithms; and they point to important differences between uniform sampling and nonuniform sampling methods based on leverage scores. In addition, our empirical results illustrate that existing theory is so weak that it does not provide even a qualitative guide to practice. Thus, we complement our empirical results with a suite of worst-case theoretical bounds for both random sampling and random projection methods. These bounds are qualitatively superior to existing bounds---e.g. improved additive-error bounds for spectral and Frobenius norm error and relative-error bounds for trace norm error---and they point to future directions to make these algorithms useful in even larger-scale machine learning applications.",
"",
"The Nystrom method is an efficient technique used to speed up large-scale learning applications by generating low-rank approximations. Crucial to the performance of this technique is the assumption that a matrix can be well approximated by working exclusively with a subset of its columns. In this work we relate this assumption to the concept of matrix coherence, connecting coherence to the performance of the Nystrom method. Making use of related work in the compressed sensing and the matrix completion literature, we derive novel coherence-based bounds for the Nystrom method in the low-rank setting. We then present empirical results that corroborate these theoretical bounds. Finally, we present more general empirical results for the full-rank setting that convincingly demonstrate the ability of matrix coherence to measure the degree to which information can be extracted from a subset of columns."
]
} |
1512.05059 | 2266368229 | Kernel principal component analysis (KPCA) provides a concise set of basis vectors which capture non-linear structures within large data sets, and is a central tool in data analysis and learning. To allow for non-linear relations, typically a full @math kernel matrix is constructed over @math data points, but this requires too much space and time for large values of @math . Techniques such as the Nyström method and random feature maps can help towards this goal, but they do not explicitly maintain the basis vectors in a stream and take more space than desired. We propose a new approach for streaming KPCA which maintains a small set of basis elements in a stream, requiring space only logarithmic in @math , and also improves the dependence on the error parameter. Our technique combines together random feature maps with recent advances in matrix sketching, it has guaranteed spectral norm error bounds with respect to the original kernel matrix, and it compares favorably in practice to state-of-the-art approaches. | In this line of work, the kernel matrix is approximated via randomized feature maps. The seminal work of @cite_8 showed one can construct randomized feature maps @math such that for any shift-invariant kernel @math and all @math , @math and if @math , then with probability at least @math , @math . Using this mapping, instead of lifting data points to @math by the kernel trick, they embed the data into a low-dimensional Euclidean inner product space. Subsequent works generalized to other kernel functions such as group invariant kernels @cite_21 , min intersection kernels @cite_17 , dot-product kernels @cite_16 , and polynomial kernels @cite_12 @cite_10 . This essentially converts kernel PCA to linear PCA. In particular, Lopez et al. @cite_15 proposed a method which performs exact linear PCA on the approximate feature map matrix @math . They showed the approximation error is bounded as @math , where @math is not actually constructed. | {
"cite_N": [
"@cite_8",
"@cite_21",
"@cite_15",
"@cite_16",
"@cite_10",
"@cite_12",
"@cite_17"
],
"mid": [
"2144902422",
"1751437809",
"2949583171",
"2105527258",
"2151683004",
"2158228027",
""
],
"abstract": [
"To accelerate the training of kernel machines, we propose to map the input data to a randomized low-dimensional feature space and then apply existing fast linear methods. The features are designed so that the inner products of the transformed data are approximately equal to those in the feature space of a user specified shift-invariant kernel. We explore two sets of random features, provide convergence bounds on their ability to approximate various radial basis kernels, and show that in large-scale classification and regression tasks linear machine learning algorithms applied to these features outperform state-of-the-art large-scale kernel machines.",
"Approximations based on random Fourier features have recently emerged as an efficient and elegant methodology for designing large-scale kernel machines [4]. By expressing the kernel as a Fourier expansion, features are generated based on a finite set of random basis projections with inner products that are Monte Carlo approximations to the original kernel. However, the original Fourier features are only applicable to translation-invariant kernels and are not suitable for histograms that are always non-negative. This paper extends the concept of translation-invariance and the random Fourier feature methodology to arbitrary, locally compact Abelian groups. Based on empirical observations drawn from the exponentiated χ2 kernel, the state-of-the-art for histogram descriptors, we propose a new group called the skewed-multiplicative group and design translation-invariant kernels on it. Experiments show that the proposed kernels outperform other kernels that can be similarly approximated. In a semantic segmentation experiment on the PASCAL VOC 2009 dataset, the approximation allows us to train large-scale learning machines more than two orders of magnitude faster than previous nonlinear SVMs.",
"Classical methods such as Principal Component Analysis (PCA) and Canonical Correlation Analysis (CCA) are ubiquitous in statistics. However, these techniques are only able to reveal linear relationships in data. Although nonlinear variants of PCA and CCA have been proposed, these are computationally prohibitive in the large scale. In a separate strand of recent research, randomized methods have been proposed to construct features that help reveal nonlinear patterns in data. For basic tasks such as regression or classification, random features exhibit little or no loss in performance, while achieving drastic savings in computational requirements. In this paper we leverage randomness to design scalable new variants of nonlinear PCA and CCA; our ideas extend to key multivariate analysis tools such as spectral clustering or LDA. We demonstrate our algorithms through experiments on real-world data, on which we compare against the state-of-the-art. A simple R implementation of the presented algorithms is provided.",
"Approximating non-linear kernels using feature maps has gained a lot of interest in recent years due to applications in reducing training and testing times of SVM classifiers and other kernel based learning algorithms. We extend this line of work and present low distortion embeddings for dot product kernels into linear Euclidean spaces. We base our results on a classical result in harmonic analysis characterizing all dot product kernels and use it to define randomized feature maps into explicit low dimensional Euclidean spaces in which the native dot product provides an approximation to the dot product kernel with high confidence.",
"Sketching is a powerful dimensionality reduction tool for accelerating statistical learning algorithms. However, its applicability has been limited to a certain extent since the crucial ingredient, the so-called oblivious subspace embedding, can only be applied to data spaces with an explicit representation as the column span or row span of a matrix, while in many settings learning is done in a high-dimensional space implicitly defined by the data matrix via a kernel transformation. We propose the first fast oblivious subspace embeddings that are able to embed a space induced by a non-linear kernel without explicitly mapping the data to the high-dimensional space. In particular, we propose an embedding for mappings induced by the polynomial kernel. Using the subspace embeddings, we obtain the fastest known algorithms for computing an implicit low rank approximation of the higher-dimension mapping of the data matrix, and for computing an approximate kernel PCA of the data, as well as doing approximate kernel principal component regression.",
"Kernel approximation using randomized feature maps has recently gained a lot of interest. In this work, we identify that previous approaches for polynomial kernel approximation create maps that are rank deficient, and therefore do not utilize the capacity of the projected feature space effectively. To address this challenge, we propose compact random feature maps (CRAFTMaps) to approximate polynomial kernels more concisely and accurately. We prove the error bounds of CRAFTMaps demonstrating their superior kernel reconstruction performance compared to the previous approximation schemes. We show how structured random matrices can be used to efficiently generate CRAFTMaps, and present a single-pass algorithm using CRAFTMaps to learn non-linear multi-class classifiers. We present experiments on multiple standard data-sets with performance competitive with state-of-the-art results.",
""
]
} |
1512.04701 | 2199794584 | In this paper, we aim to develop a method for automatically detecting and tracking topics in broadcast news. We present a hierarchical And-Or graph (AOG) to jointly represent the latent structure of both texts and visuals. The AOG embeds a context sensitive grammar that can describe the hierarchical composition of news topics by semantic elements about people involved, related places and what happened, and model contextual relationships between elements in the hierarchy. We detect news topics through a cluster sampling process which groups stories about closely related events. Swendsen-Wang Cuts (SWC), an effective cluster sampling algorithm, is adopted for traversing the solution space and obtaining optimal clustering solutions by maximizing a Bayesian posterior probability. Topics are tracked to deal with the continuously updated news streams. We generate topic trajectories to show how topics emerge, evolve and disappear over time. The experimental results show that our method can explicitly describe the textual and visual data in news videos and produce meaningful topic trajectories. Our method achieves superior performance compared to state-of-the-art methods on both a public dataset Reuters-21578 and a self-collected dataset named UCLA Broadcast News Dataset. | Even though these methods are effective in general topic modeling, they can hardly achieve good performance in the news domain using only the bag-of-words (BoW) representation. The BoW representation is computationally efficient, but it ignores the compositional structures, which are important for news analyses. News stories are generally driven by events, so information from aspects like "who", "where" and "what" is crucial for summarizing these stories and generating meaningful news topics. @cite_64 considered these aspects but included them as a whole. @cite_7 used information from the above aspects in their representation. 
However, they assume that these aspects are independent, which is generally not true in real news data. | {
"cite_N": [
"@cite_64",
"@cite_7"
],
"mid": [
"1712618182",
"2066727938"
],
"abstract": [
"We develop the syntactic topic model (STM), a nonparametric Bayesian model of parsed documents. The STM generates words that are both thematically and syntactically constrained, which combines the semantic insights of topic models with the syntactic information available from parse trees. Each word of a sentence is generated by a distribution that combines document-specific topic weights and parse-tree-specific syntactic transitions. Words are assumed to be generated in an order that respects the parse tree. We derive an approximate posterior inference method based on variational methods for hierarchical Dirichlet processes, and we report qualitative and quantitative results on both synthetic data and hand-parsed documents.",
"Retrospective news event detection (RED) is defined as the discovery of previously unidentified events in historical news corpus. Although both the contents and time information of news articles are helpful to RED, most researches focus on the utilization of the contents of news articles. Few research works have been carried out on finding better usages of time information. In this paper, we do some explorations on both directions based on the following two characteristics of news articles. On the one hand, news articles are always aroused by events; on the other hand, similar articles reporting the same event often redundantly appear on many news sources. The former hints a generative model of news articles, and the latter provides data enriched environments to perform RED. With consideration of these characteristics, we propose a probabilistic model to incorporate both content and time information in a unified framework. This model gives new representations of both news articles and news events. Furthermore, based on this approach, we build an interactive RED system, HISCOVERY, which provides additional functions to present events, Photo Story and Chronicle."
]
} |
1512.04701 | 2199794584 | In this paper, we aim to develop a method for automatically detecting and tracking topics in broadcast news. We present a hierarchical And-Or graph (AOG) to jointly represent the latent structure of both texts and visuals. The AOG embeds a context sensitive grammar that can describe the hierarchical composition of news topics by semantic elements about people involved, related places and what happened, and model contextual relationships between elements in the hierarchy. We detect news topics through a cluster sampling process which groups stories about closely related events. Swendsen-Wang Cuts (SWC), an effective cluster sampling algorithm, is adopted for traversing the solution space and obtaining optimal clustering solutions by maximizing a Bayesian posterior probability. Topics are tracked to deal with the continuously updated news streams. We generate topic trajectories to show how topics emerge, evolve and disappear over time. The experimental results show that our method can explicitly describe the textual and visual data in news videos and produce meaningful topic trajectories. Our method achieves superior performance compared to state-of-the-art methods on both a public dataset Reuters-21578 and a self-collected dataset named UCLA Broadcast News Dataset. | Some work also combined topic modeling and document clustering together, such as the multi-grain clustering topic model (MGCTM) proposed by @cite_15 . They showed that these two tasks are closely related and can help each other as both performances are improved. This work still remains in the pure text domain and uses the BoW representation. | {
"cite_N": [
"@cite_15"
],
"mid": [
"2964291985"
],
"abstract": [
"Document clustering and topic modeling are two closely related tasks which can mutually benefit each other. Topic modeling can project documents into a topic space which facilitates effective document clustering. Cluster labels discovered by document clustering can be incorporated into topic models to extract local topics specific to each cluster and global topics shared by all clusters. In this paper, we propose a multi-grain clustering topic model (MGCTM) which integrates document clustering and topic modeling into a unified framework and jointly performs the two tasks to achieve the overall best performance. Our model tightly couples two components: a mixture component used for discovering latent groups in document collection and a topic model component used for mining multi-grain topics including local topics specific to each cluster and global topics shared across clusters. We employ variational inference to approximate the posterior of hidden variables and learn model parameters. Experiments on two datasets demonstrate the effectiveness of our model."
]
} |
1512.04701 | 2199794584 | In this paper, we aim to develop a method for automatically detecting and tracking topics in broadcast news. We present a hierarchical And-Or graph (AOG) to jointly represent the latent structure of both texts and visuals. The AOG embeds a context sensitive grammar that can describe the hierarchical composition of news topics by semantic elements about people involved, related places and what happened, and model contextual relationships between elements in the hierarchy. We detect news topics through a cluster sampling process which groups stories about closely related events. Swendsen-Wang Cuts (SWC), an effective cluster sampling algorithm, is adopted for traversing the solution space and obtaining optimal clustering solutions by maximizing a Bayesian posterior probability. Topics are tracked to deal with the continuously updated news streams. We generate topic trajectories to show how topics emerge, evolve and disappear over time. The experimental results show that our method can explicitly describe the textual and visual data in news videos and produce meaningful topic trajectories. Our method achieves superior performance compared to state-of-the-art methods on both a public dataset Reuters-21578 and a self-collected dataset named UCLA Broadcast News Dataset. | In the probabilistic modeling community, some models incorporate time information, such as the Dynamic Topic Model (DTM) @cite_38 which can model the topic evolution over time. However, it is assumed that the topics exist throughout the whole time period, which is usually not true especially in broadcast news. It also leads to heavy computation for continuously updated new streams. | {
"cite_N": [
"@cite_38"
],
"mid": [
"2072644219"
],
"abstract": [
"A family of probabilistic time series models is developed to analyze the time evolution of topics in large document collections. The approach is to use state space models on the natural parameters of the multinomial distributions that represent the topics. Variational approximations based on Kalman filters and nonparametric wavelet regression are developed to carry out approximate posterior inference over the latent topics. In addition to giving quantitative, predictive models of a sequential corpus, dynamic topic models provide a qualitative window into the contents of a large document collection. The models are demonstrated by analyzing the OCR'ed archives of the journal Science from 1880 through 2000."
]
} |
1512.04701 | 2199794584 | In this paper, we aim to develop a method for automatically detecting and tracking topics in broadcast news. We present a hierarchical And-Or graph (AOG) to jointly represent the latent structure of both texts and visuals. The AOG embeds a context sensitive grammar that can describe the hierarchical composition of news topics by semantic elements about people involved, related places and what happened, and model contextual relationships between elements in the hierarchy. We detect news topics through a cluster sampling process which groups stories about closely related events. Swendsen-Wang Cuts (SWC), an effective cluster sampling algorithm, is adopted for traversing the solution space and obtaining optimal clustering solutions by maximizing a Bayesian posterior probability. Topics are tracked to deal with the continuously updated news streams. We generate topic trajectories to show how topics emerge, evolve and disappear over time. The experimental results show that our method can explicitly describe the textual and visual data in news videos and produce meaningful topic trajectories. Our method achieves superior performance compared to state-of-the-art methods on both a public dataset Reuters-21578 and a self-collected dataset named UCLA Broadcast News Dataset. | Thus instead of using the previous two methods, we choose to track the news topics by linking topics detected in different time periods and generating topic trajectories over time. Some linking methods, such as those by Mei and Zhai @cite_37 as well as Kim and Oh @cite_63 , are closely related to our topic tracking task. However, the method in @cite_37 is designed for news about some specific topics such as "tsunami". The similarity matrices used in @cite_63 are based on the topics obtained by the original LDA model with the BoW assumption. Moreover, both of these two methods are merely based on textual information. | {
"cite_N": [
"@cite_37",
"@cite_63"
],
"mid": [
"2040466507",
"1518761017"
],
"abstract": [
"Temporal Text Mining (TTM) is concerned with discovering temporal patterns in text information collected over time. Since most text information bears some time stamps, TTM has many applications in multiple domains, such as summarizing events in news articles and revealing research trends in scientific literature. In this paper, we study a particular TTM task -- discovering and summarizing the evolutionary patterns of themes in a text stream. We define this new text mining problem and present general probabilistic methods for solving this problem through (1) discovering latent themes from text; (2) constructing an evolution graph of themes; and (3) analyzing life cycles of themes. Evaluation of the proposed methods on two different domains (i.e., news articles and literature) shows that the proposed methods can discover interesting evolutionary theme patterns effectively.",
"The Web is a great resource and archive of news articles for the world. We present a framework, based on probabilistic topic modeling, for uncovering the meaningful structure and trends of important topics and issues hidden within the news archives on the Web. Central in the framework is a topic chain, a temporal organization of similar topics. We experimented with various topic similarity metrics and present our insights on how best to construct topic chains. We discuss how to interpret the topic chains to understand the news corpus by looking at long-term topics, temporary issues, and shifts of focus in the topic chains. We applied our framework to nine months of Korean Web news corpus and present our findings."
]
} |
1512.04891 | 2953018641 | Pick-and-place regrasp is an important manipulation skill for a robot. It helps a robot accomplish tasks that cannot be achieved within a single grasp, due to constraints such as kinematics or collisions between the robot and the environment. Previous work on pick-and-place regrasp only leveraged flat surfaces for intermediate placements, and thus is limited in the capability to reorient an object. In this paper, we extend the reorientation capability of a pick-and-place regrasp by adding a vertical pin on the working surface and using it as the intermediate location for regrasping. In particular, our method automatically computes the stable placements of an object leaning against a vertical pin, finds several force-closure grasps, generates a graph of regrasp actions, and searches for the regrasp sequence. To compare the regrasping performance with and without using pins, we evaluate the success rate and the length of regrasp sequences while performing tasks on various models. Experiments on reorientation and assembly tasks validate the benefit of using support pins for regrasping. | Solving manipulation problems requires planning a coordinated sequence of motions that involves picking and placing, as well as moving through the free space. Such sequential manipulation problems are challenging due to their high dimensionality. Early work in this area used an explicit graph search to find a sequence of regrasping motions @cite_3 . More recent approaches are constraint-based. They first formalize the geometric constraints involved in the manipulation process, e.g., the object must be in a stable pose after being placed down, the relative pose between the manipulator and the object must be fixed during the grasp, and two objects should not collide with each other. Next, they compute a grasp sequence that can satisfy all these constraints.
Some methods @cite_21 @cite_18 used these constraints to define a set of interconnected sub-manifolds in the task space, and then computed a solution sequence using probabilistic roadmaps embedded in the constrained space. Other approaches @cite_17 @cite_14 @cite_20 used these constraints to represent sequential manipulation problems as a constraint-satisfaction problem (CSP), and then solved the CSP using variants of backtracking search. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_21",
"@cite_3",
"@cite_20",
"@cite_17"
],
"mid": [
"2041753526",
"2031738727",
"2044995998",
"1481480391",
"",
"2057408106"
],
"abstract": [
"Robots that perform complex manipulation tasks must be able to generate strategies that make and break contact with the object. This requires reasoning in a motion space with a particular multi-modal structure, in which the state contains both a discrete mode (the contact state) and a continuous configuration (the robot and object poses). In this paper we address multi-modal motion planning in the common setting where the state is high-dimensional, and there are a continuous infinity of modes. We present a highly general algorithm, Random-MMP, that repeatedly attempts mode switches sampled at random. A major theoretical result is that Random-MMP is formally reliable and scalable, and its running time depends on certain properties of the multi-modal structure of the problem that are not explicitly dependent on dimensionality. We apply the planner to a manipulation task on the Honda humanoid robot, where the robot is asked to push an object to a desired location on a cluttered table, and the robot is restricted to switch between walking, reaching, and pushing modes. Experiments in simulation and on the real robot demonstrate that Random-MMP solves problem instances that require several carefully chosen pushes in minutes on a PC.",
"In this paper, we describe a strategy for integrated task and motion planning based on performing a symbolic search for a sequence of high-level operations, such as pick, move and place, while postponing geometric decisions. Partial plans (skeletons) in this search thus pose a geometric constraint-satisfaction problem (CSP), involving sequences of placements and paths for the robot, and grasps and locations of objects. We propose a formulation for these problems in a discretized configuration space for the robot. The resulting problems can be solved using existing methods for discrete CSP.",
"This paper deals with motion planning for robots manipulating movable objects among obstacles. We propose a general manipulation planning approach capable of addressing continuous sets for modeling both the possible grasps and the stable placements of the movable object, rather than discrete sets generally assumed by the previous approaches. The proposed algorithm relies on a topological property that characterizes the existence of solutions in the subspace of configurations where the robot grasps the object placed at a stable position. It allows us to devise a manipulation planner that captures in a probabilistic roadmap the connectivity of sub-dimensional manifolds of the composite configuration space. Experiments conducted with the planner in simulated environments demonstrate its efficacy to solve complex manipulation problems.",
"Part 1 Introduction: HANDEY robot programming why is robot programming difficult? what HANDEY is and is not. Part 2 Planning pick-and-place operations: examples of constraint interactions a brief overview of HANDEY the HANDEY planners combining the planners previous work. Part 3 Basics: polyhedral models robot models world models configuration space. Part 4 Gross motion planning: approximating the COs for revolute manipulators slice projections for polyhedral links efficiency considerations searching for paths a massively parallel algorithm. Part 5 Grasp planning: basic assumptions grasp planner overview using depth data in grasp planning the potential-field planner. Part 6 Regrasp planning: grasps placements on a table constructing the legal grasp placement pairs solving the regrasp problem in handey an example computing the constraints regrasping using two parallel-jaw grippers. Part 7 Co-ordinating multiple robots: co-ordination and parallelism robot co-ordination as a scheduling problem the task completion diagram generating the TC diagram more on schedules other issues. Part 8 Conclusion: evolution path planning experimentation what we learned. (Part contents)",
"",
"The combination of task and motion planning presents us with a new problem that we call geometric backtracking. This problem arises from the fact that a single symbolic state or action may be geometrically instantiated in infinitely many ways. When a symbolic action cannot be geometrically validated, we may need to backtrack in the space of geometric configurations, which greatly increases the complexity of the whole planning process. In this paper, we address this problem using intervals to represent geometric configurations, and constraint propagation techniques to shrink these intervals according to the geometric constraints of the problem. After propagation, either (i) the intervals are shrunk, thus reducing the search space in which geometric backtracking may occur, or (ii) the constraints are inconsistent, indicating the non-feasibility of the sequence of actions without further effort. We illustrate our approach on scenarios in which a two-arm robot manipulates a set of objects, and report experiments that show how the search space is reduced."
]
} |
1512.04891 | 2953018641 | Pick-and-place regrasp is an important manipulation skill for a robot. It helps a robot accomplish tasks that cannot be achieved within a single grasp, due to constraints such as kinematics or collisions between the robot and the environment. Previous work on pick-and-place regrasp only leveraged flat surfaces for intermediate placements, and thus is limited in the capability to reorient an object. In this paper, we extend the reorientation capability of a pick-and-place regrasp by adding a vertical pin on the working surface and using it as the intermediate location for regrasping. In particular, our method automatically computes the stable placements of an object leaning against a vertical pin, finds several force-closure grasps, generates a graph of regrasp actions, and searches for the regrasp sequence. To compare the regrasping performance with and without using pins, we evaluate the success rate and the length of regrasp sequences while performing tasks on various models. Experiments on reorientation and assembly tasks validate the benefit of using support pins for regrasping. | In order to improve the manipulation flexibility, some recent work began to incorporate non-grasp policies in the manipulation. There are many prehensile or non-prehensile strategies besides grasping, such as non-prehensile pivoting @cite_27 and prehensile pivoting @cite_8 , non-prehensile pushing @cite_1 and prehensile pushing @cite_25 . Regrasp policies leveraging external forces or external environments have also been proposed @cite_4 @cite_25 for in-hand manipulation. To make use of these non-grasp policies, @cite_12 proposed the concept of extended transit, i.e., the transition motion will not only include the transitions between prehensile grasps, but also those between non-prehensile manipulation strategies.
A similar idea was also proposed by @cite_28 , which used grasping and pushing for transition motion, but only used grasping for transfer motion. | {
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_28",
"@cite_1",
"@cite_27",
"@cite_25",
"@cite_12"
],
"mid": [
"2013467411",
"2075976266",
"",
"2098030884",
"2162130260",
"1198461769",
"1674160379"
],
"abstract": [
"",
"In this paper we address whole-body manipulation of bulky objects by a humanoid robot. We adopt a \"pivoting\" manipulation method that allows the humanoid to displace an object without lifting, but by the support of the ground contact. First, the small-time controllability of pivoting is demonstrated. On its basis, an algorithm for collision-free pivoting motion planning is established taking into account the naturalness of motion as nonholonomic constraints. Finally, we present a whole-body motion generation method by a humanoid robot, which is verified by experiments.",
"",
"We would like to give robots the ability to position and orient parts in the plane by pushing, particularly when the parts are too large or heavy to be grasped and lifted. Unfortunately, the motion of a pushed object is generally unpredictable due to unknown support friction forces. With multiple pushing contact points, however, it is possible to find pushing directions that cause the object to remain fixed to the manipulator. These are called stable pushing directions. In this article we consider the problem of planning pushing paths using stable pushes. Pushing imposes a set of nonholonomic velocity constraints on the motion of the object, and we study the issues of local and global controllability during pushing with point contact or stable line contact. We describe a planner for finding stable pushing paths among obstacles, and the planner is demon strated on several manipulation tasks.",
"To be cost effective and highly precise, many industrial assembly robots have only four degrees of freedom (DOF) plus a binary pneumatic gripper. Such robots commonly permit parts to be rotated only about a vertical axis. However it is often necessary to reorient parts about other axes prior to assembly. In this paper the authors describe a way to orient parts about an arbitrary axis by introducing a rotating bearing between the jaws of a simple gripper. Based on this mechanism, the authors are developing a rapidly configurable vision-based system for feeding parts. In this system, a camera determines initial part pose; the robot then reorients the part to achieve a desired final pose. The authors have implemented a prototype version in their laboratory using a commercially-available robot system.",
"This paper explores the manipulation of a grasped object by pushing it against its environment. Relying on precise arm motions and detailed models of frictional contact, prehensile pushing enables dexterous manipulation with simple manipulators, such as those currently available in industrial settings, and those likely affordable by service and field robots.",
"Manipulation planning involves planning the combined motion of objects in the environment as well as the robot motions to achieve them. In this paper, we explore a hierarchical approach to planning sequences of non-prehensile and prehensile actions. We subdivide the planning problem into three stages (object contacts, object poses and robot contacts) and thereby reduce the size of search space that is explored. We show that this approach is more efficient than earlier strategies that search in the combined robot-object configuration space directly."
]
} |
1512.04812 | 2951060686 | Context: Search-based software testing promises to provide users with the ability to generate high-quality test cases, and hence increase product quality, with a minimal increase in the time and effort required. One result that emerged out of a previous study to investigate the application of search-based software testing (SBST) in an industrial setting was the development of the Interactive Search-Based Software Testing (ISBST) system. ISBST allows users to interact with the underlying SBST system, guiding the search and assessing the results. An industrial evaluation indicated that the ISBST system could find test cases that are not created by testers employing manual techniques. The validity of the evaluation was threatened, however, by the low number of participants. Objective: This paper presents a follow-up study, to provide a more rigorous evaluation of the ISBST system. Method: To assess the ISBST system a two-way crossover controlled experiment was conducted with 58 students taking a Verification and Validation course. The NASA Task Load Index (NASA-TLX) is used to assess the workload experienced by the participants in the experiment. Results: The experimental results validated the hypothesis that the ISBST system generates test cases that are not found by the same participants employing manual testing techniques. A follow-up laboratory experiment also investigates the importance of interaction in obtaining the results. In addition to this main result, the subjective workload was assessed for each participant by means of the NASA-TLX tool. The evaluation showed that, while the ISBST system required more effort from the participants, they achieved the same performance. Conclusions: The paper provides evidence that the ISBST system develops test cases that are not found by manual techniques, and that interaction plays an important role in achieving that result. 
| Search-based software testing (SBST) is the application of metaheuristic optimization methods to the problem of software testing. SBST is part of the larger scope of search-based software engineering, a term coined by Harman and Jones @cite_3 . SBST has been successfully applied to a wide range of software testing problems. McMinn @cite_0 describes the use of SBST for temporal, structural, and functional testing, while @cite_39 focus their review on the use of SBST on non-functional testing. | {
"cite_N": [
"@cite_0",
"@cite_3",
"@cite_39"
],
"mid": [
"2107729695",
"2092382400",
"2147826933"
],
"abstract": [
"Search-Based Software Testing is the use of a meta-heuristic optimizing search technique, such as a Genetic Algorithm, to automate or partially automate a testing task, for example the automatic generation of test data. Key to the optimization process is a problem-specific fitness function. The role of the fitness function is to guide the search to good solutions from a potentially infinite search space, within a practical time limit. Work on Search-Based Software Testing dates back to 1976, with interest in the area beginning to gather pace in the 1990s. More recently there has been an explosion of the amount of work. This paper reviews past work and the current state of the art, and discusses potential future research areas and open problems that remain in the field.",
"Abstract This paper claims that a new field of software engineering research and practice is emerging: search-based software engineering. The paper argues that software engineering is ideal for the application of metaheuristic search techniques, such as genetic algorithms, simulated annealing and tabu search. Such search-based techniques could provide solutions to the difficult problems of balancing competing (and some times inconsistent) constraints and may suggest ways of finding acceptable solutions in situations where perfect solutions are either theoretically impossible or practically infeasible. In order to develop the field of search-based software engineering, a reformulation of classic software engineering problems as search problems is required. The paper briefly sets out key ingredients for successful reformulation and evaluation criteria for search-based software engineering.",
"Search-based software testing is the application of metaheuristic search techniques to generate software tests. The test adequacy criterion is transformed into a fitness function and a set of solutions in the search space are evaluated with respect to the fitness function using a metaheuristic search technique. The application of metaheuristic search techniques for testing is promising due to the fact that exhaustive testing is infeasible considering the size and complexity of software under test. Search-based software testing has been applied across the spectrum of test case design methods; this includes white-box (structural), black-box (functional) and grey-box (combination of structural and functional) testing. In addition, metaheuristic search techniques have also been applied to test non-functional properties. The overall objective of undertaking this systematic review is to examine existing work into non-functional search-based software testing (NFSBST). We are interested in types of non-functional testing targeted using metaheuristic search techniques, different fitness functions used in different types of search-based non-functional testing and challenges in the application of these techniques. The systematic review is based on a comprehensive set of 35 articles obtained after a multi-stage selection process and have been published in the time span 1996-2007. The results of the review show that metaheuristic search techniques have been applied for non-functional testing of execution time, quality of service, security, usability and safety. A variety of metaheuristic search techniques are found to be applicable for non-functional testing including simulated annealing, tabu search, genetic algorithms, ant colony methods, grammatical evolution, genetic programming (and its variants including linear genetic programming) and swarm intelligence methods. 
The review reports on different fitness functions used to guide the search for each of the categories of execution time, safety, usability, quality of service and security; along with a discussion of possible challenges in the application of metaheuristic search techniques."
]
} |
1512.04812 | 2951060686 | Context: Search-based software testing promises to provide users with the ability to generate high-quality test cases, and hence increase product quality, with a minimal increase in the time and effort required. One result that emerged out of a previous study to investigate the application of search-based software testing (SBST) in an industrial setting was the development of the Interactive Search-Based Software Testing (ISBST) system. ISBST allows users to interact with the underlying SBST system, guiding the search and assessing the results. An industrial evaluation indicated that the ISBST system could find test cases that are not created by testers employing manual techniques. The validity of the evaluation was threatened, however, by the low number of participants. Objective: This paper presents a follow-up study, to provide a more rigorous evaluation of the ISBST system. Method: To assess the ISBST system a two-way crossover controlled experiment was conducted with 58 students taking a Verification and Validation course. The NASA Task Load Index (NASA-TLX) is used to assess the workload experienced by the participants in the experiment. Results: The experimental results validated the hypothesis that the ISBST system generates test cases that are not found by the same participants employing manual testing techniques. A follow-up laboratory experiment also investigates the importance of interaction in obtaining the results. In addition to this main result, the subjective workload was assessed for each participant by means of the NASA-TLX tool. The evaluation showed that, while the ISBST system required more effort from the participants, they achieved the same performance. Conclusions: The paper provides evidence that the ISBST system develops test cases that are not found by manual techniques, and that interaction plays an important role in achieving that result. 
| At the medium level, a user can more directly guide the search by replacing the fitness function. Takagi proposed Interactive Evolutionary Computation (IEC), which he describes as ``an Evolutionary Computation (EC) that optimizes systems based on subjective human evaluation'' @cite_21 . This would allow the human user to guide the search according to their ``preference, intuition, emotion and psychological aspects'' @cite_21 . IEC could then see a wider spectrum of applications, including arts and animation. | {
"cite_N": [
"@cite_21"
],
"mid": [
"2151019861"
],
"abstract": [
"We survey the research on interactive evolutionary computation (IEC). The IEC is an EC that optimizes systems based on subjective human evaluation. The definition and features of the IEC are first described and then followed by an overview of the IEC research. The overview primarily consists of application research and interface research. In this survey the IEC application fields include graphic arts and animation, 3D computer graphics lighting, music, editorial design, industrial design, facial image generation, speech processing and synthesis, hearing aid fitting, virtual reality, media database retrieval, data mining, image processing, control and robotics, food industry, geophysics, education, entertainment, social system, and so on. The interface research to reduce human fatigue is also included. Finally, we discuss the IEC from the point of the future research direction of computational intelligence. This paper features a survey of about 250 IEC research papers."
]
} |
1512.04812 | 2951060686 | Context: Search-based software testing promises to provide users with the ability to generate high-quality test cases, and hence increase product quality, with a minimal increase in the time and effort required. One result that emerged out of a previous study to investigate the application of search-based software testing (SBST) in an industrial setting was the development of the Interactive Search-Based Software Testing (ISBST) system. ISBST allows users to interact with the underlying SBST system, guiding the search and assessing the results. An industrial evaluation indicated that the ISBST system could find test cases that are not created by testers employing manual techniques. The validity of the evaluation was threatened, however, by the low number of participants. Objective: This paper presents a follow-up study, to provide a more rigorous evaluation of the ISBST system. Method: To assess the ISBST system a two-way crossover controlled experiment was conducted with 58 students taking a Verification and Validation course. The NASA Task Load Index (NASA-TLX) is used to assess the workload experienced by the participants in the experiment. Results: The experimental results validated the hypothesis that the ISBST system generates test cases that are not found by the same participants employing manual testing techniques. A follow-up laboratory experiment also investigates the importance of interaction in obtaining the results. In addition to this main result, the subjective workload was assessed for each participant by means of the NASA-TLX tool. The evaluation showed that, while the ISBST system required more effort from the participants, they achieved the same performance. Conclusions: The paper provides evidence that the ISBST system develops test cases that are not found by manual techniques, and that interaction plays an important role in achieving that result. 
| Alternatively, a system may require the human user to only replace the fitness functions at certain times, e.g. to serve as a tie-breaker, when the existing fitness functions cannot rank certain candidates @cite_27 . | {
"cite_N": [
"@cite_27"
],
"mid": [
"2088119904"
],
"abstract": [
"The order in which requirements are implemented in a system affects the value delivered to the final users in the successive releases of the system. Requirements prioritization aims at ranking the requirements so as to trade off user priorities and implementation constraints, such as technical dependencies among requirements and necessarily limited resources allocated to the project. Requirement analysts possess relevant knowledge about the relative importance of requirements. We use an Interactive Genetic Algorithm to produce a requirement ordering which complies with the existing priorities, satisfies the technical constraints and takes into account the relative preferences elicited from the user. On a real case study, we show that this approach improves non interactive optimization, ignoring the elicited preferences, and that it can handle a number of requirements which is otherwise problematic for state of the art techniques."
]
} |
1512.04812 | 2951060686 | Context: Search-based software testing promises to provide users with the ability to generate high-quality test cases, and hence increase product quality, with a minimal increase in the time and effort required. One result that emerged out of a previous study to investigate the application of search-based software testing (SBST) in an industrial setting was the development of the Interactive Search-Based Software Testing (ISBST) system. ISBST allows users to interact with the underlying SBST system, guiding the search and assessing the results. An industrial evaluation indicated that the ISBST system could find test cases that are not created by testers employing manual techniques. The validity of the evaluation was threatened, however, by the low number of participants. Objective: This paper presents a follow-up study, to provide a more rigorous evaluation of the ISBST system. Method: To assess the ISBST system a two-way crossover controlled experiment was conducted with 58 students taking a Verification and Validation course. The NASA Task Load Index (NASA-TLX) is used to assess the workload experienced by the participants in the experiment. Results: The experimental results validated the hypothesis that the ISBST system generates test cases that are not found by the same participants employing manual testing techniques. A follow-up laboratory experiment also investigates the importance of interaction in obtaining the results. In addition to this main result, the subjective workload was assessed for each participant by means of the NASA-TLX tool. The evaluation showed that, while the ISBST system required more effort from the participants, they achieved the same performance. Conclusions: The paper provides evidence that the ISBST system develops test cases that are not found by manual techniques, and that interaction plays an important role in achieving that result. | At the lowest level, interaction can be very detailed. 
Bush and Sayama @cite_6 require the human to be ``the main driver of the search process'' by selecting the individuals and the evolutionary operators to be applied. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2125525672"
],
"abstract": [
"We propose hyperinteractive evolutionary computation (HIEC), a class of IEC in which the user actively chooses when and how each evolutionary operator is applied. To evaluate the benefits of HIEC, we conducted three human-subject experiments. The first two experiments showed that HIEC is associated with a more positive user experience and produced higher quality designs. The third experiment demonstrates the potential of HIEC as a research tool with which one can record the evolutionary actions taken by human users. Implications, limitations, and future directions of research are discussed."
]
} |
1512.04812 | 2951060686 | Context: Search-based software testing promises to provide users with the ability to generate high-quality test cases, and hence increase product quality, with a minimal increase in the time and effort required. One result that emerged out of a previous study to investigate the application of search-based software testing (SBST) in an industrial setting was the development of the Interactive Search-Based Software Testing (ISBST) system. ISBST allows users to interact with the underlying SBST system, guiding the search and assessing the results. An industrial evaluation indicated that the ISBST system could find test cases that are not created by testers employing manual techniques. The validity of the evaluation was threatened, however, by the low number of participants. Objective: This paper presents a follow-up study, to provide a more rigorous evaluation of the ISBST system. Method: To assess the ISBST system a two-way crossover controlled experiment was conducted with 58 students taking a Verification and Validation course. The NASA Task Load Index (NASA-TLX) is used to assess the workload experienced by the participants in the experiment. Results: The experimental results validated the hypothesis that the ISBST system generates test cases that are not found by the same participants employing manual testing techniques. A follow-up laboratory experiment also investigates the importance of interaction in obtaining the results. In addition to this main result, the subjective workload was assessed for each participant by means of the NASA-TLX tool. The evaluation showed that, while the ISBST system required more effort from the participants, they achieved the same performance. Conclusions: The paper provides evidence that the ISBST system develops test cases that are not found by manual techniques, and that interaction plays an important role in achieving that result. 
| A more serious problem is that the number of evaluations that a human can perform is limited, as boredom and fatigue will set in. This is even more of an issue at the lowest level of abstraction, where the human user is involved in evolving each candidate. Fatigue has already been identified as a major concern, and efforts to alleviate the problem have been proposed @cite_9 . Alternatives have, therefore, been proposed that make it easier for the human user to interact, by selecting candidate solutions they favor and dismissing those they do not @cite_13 ; or focusing on the search objectives or the fitness values more than on the candidate solutions themselves @cite_34 @cite_8 . | {
"cite_N": [
"@cite_9",
"@cite_34",
"@cite_13",
"@cite_8"
],
"mid": [
"2099020927",
"2114395490",
"2040378535",
""
],
"abstract": [
"We describe two approaches to reducing human fatigue in interactive evolutionary computation (IEC). A predictor function is used to estimate the human user's score, thus reducing the amount of effort required by the human user during the evolution process. The fuzzy system and four machine learning classifier algorithms are presented. Their performance in a real-world application, the IEC-based design of a micromachine resonating mass, is evaluated. The fuzzy system was composed of four simple rules, but was able to accurately predict the user's score 77% of the time on average. This is equivalent to a 51% reduction of human effort compared to using IEC without the predictor. The four machine learning approaches tested were k-nearest neighbors, decision tree, AdaBoosted decision tree, and support vector machines. These approaches achieved good accuracy on validation tests, but because of the great diversity in user scoring behavior, were unable to achieve equivalent results on the user test data.",
"We propose Visualized IEC as an interactive evolutionary computation (IEC) with visualizing individuals in a multidimensional searching space in a 2D space. This visualization helps us envision the landscape of an n-D searching space; so that it is easier for us to join an EC search, by indicating the possible global optimum estimated in the 2D mapped space. We experimentally evaluate the effect of visualization using a benchmark function. We use self-organizing maps for this projection of individuals onto a 2D space. The experimental result shows that the convergence speed of the GA with human search on the visualized space is at least five times faster than a conventional GA.",
"The paper introduces the concept of an Interactive Evolutionary Design System (IEDS) that supports the engineering designer during the conceptual preliminary stages of the design process. Requirement during these early stages relates primarily to design search and exploration across a poorly defined space as the designer's knowledge base concerning the problem area develops. Multiobjective satisfaction plays a major role, and objectives are likely to be ill-defined and their relative importance uncertain. Interactive evolutionary search and exploration provides information to the design team that contributes directly to their overall understanding of the problem domain in terms of relevant objectives, constraints, and variable ranges. This paper describes the development of certain elements within an interactive evolutionary conceptual design environment that allows off-line processing of such information leading to a redefinition of the design space. Such redefinition may refer to the inclusion or removal of objectives, changes concerning their relative importance, or the reduction of variable ranges as a better understanding of objective sensitivity is established. The emphasis, therefore, moves from a multiobjective optimization over a preset number of generations to a relatively continuous interactive evolutionary search that results in the optimal definition of both the variable and objective space relating to the design problem at hand. The paper describes those elements of the IEDS relating to such multiobjective information gathering and subsequent design space redefinition.",
""
]
} |
1512.04912 | 2949513114 | Consumer spending accounts for a large fraction of the US economic activity. Increasingly, consumer activity is moving to the web, where digital traces of shopping and purchases provide valuable data about consumer behavior. We analyze these data extracted from emails and combine them with demographic information to characterize, model, and predict consumer behavior. Breaking down purchasing by age and gender, we find that the amount of money spent on online purchases grows sharply with age, peaking in late 30s. Men are more frequent online purchasers and spend more money when compared to women. Linking online shopping to income, we find that shoppers from more affluent areas purchase more expensive items and buy them more frequently, resulting in significantly more money spent on online purchases. We also look at dynamics of purchasing behavior and observe daily and weekly cycles in purchasing behavior, similarly to other online activities. More specifically, we observe temporal patterns in purchasing behavior suggesting shoppers have finite budgets: the more expensive an item, the longer the shopper waits since the last purchase to buy it. We also observe that shoppers who email each other purchase more similar items than socially unconnected shoppers, and this effect is particularly evident among women. Finally, we build a model to predict when shoppers will make a purchase and how much they will spend on it. We find that temporal features improve prediction accuracy over competitive baselines. A better understanding of consumer behavior can help improve marketing efforts and make online shopping more pleasant and efficient. | Gender can influence the shopping behavior and the perception of the shopping experience online. Men value the practical advantages of online shopping more and consider a detailed product description and fair pricing significantly more important than women do. 
In contrast, some surveys have found that women, despite the ease of use of e-commerce sites, dislike the lack of a physical in-store experience more than men do and place more value on the visibility of wide selections of items than on accurate product specifications @cite_21 @cite_12 @cite_36 @cite_5 . Unlike gender, the effect of age on purchase behavior seems to be minor, with older people searching less for online items to buy but not exhibiting lower purchase frequency @cite_6 . With extensive evidence from a large-scale dataset, we find that age greatly impacts the amount of money spent online and the number of items purchased. | {
"cite_N": [
"@cite_36",
"@cite_21",
"@cite_6",
"@cite_5",
"@cite_12"
],
"mid": [
"",
"1999495747",
"2048864342",
"2043162266",
"2098113555"
],
"abstract": [
"",
"Women have yet to welcome Web-based shopping as readily as men. A primary factor for this state is how men and women view shopping. Understanding those differences will help vendors address this vital pool of consumers.",
"Purpose – This paper examines the shopping and buying behavior of younger and older online shoppers as mediated by their attitudes toward internet shopping.Design methodology approach – Over 300 students and staff from a US university completed a survey regarding their online shopping and buying experiences for 17 products.Findings – The results show that, while older online shoppers search for significantly fewer products than their younger counterparts, they actually purchase as much as younger consumers. Attitudinal factors explained more variance in online searching behavior. Age explained more variance in purchasing behavior if the consumer had first searched for the product online.Research limitations implications – The limitations of the present research are threefold. First, the sample was restricted to university faculty, staff and students. Second, a better measure of the hedonic motivation construct is needed. Third, additional independent measures such as income should be included to understan...",
"Despite the increasing number of online users and products that are being offered on the Web, there is relatively little work that specifically examines the role of gender and educational level on the attitudes of Internet users in the Singapore context. Our findings reveal that there is a general consensus amongst Singaporeans that the Internet is a convenient medium for information search or making purchases. The better-educated respondents seem to be less concerned with security issues. They also perceive that Internet shopping provides better prices and more cost savings. Females indicate a strong dislike for not being able to savour a physically fulfilling shopping experience online.",
"Despite the spread of e-commerce, few studies have investigated gender-based differences in the adoption of consumer-oriented electronic commerce. Theory and evidence from other domains indicates that such differences may exist. Using innovation diffusion theory as a framework, we empirically investigate whether the impact of beliefs regarding the characteristics of e-commerce and the trustworthiness of Web merchants on intentions to use e-commerce differ according to gender. Results indicate that such differences do exist. Perceived compatibility and visibility have greater impacts for women. In contrast, males' use intentions are more driven by perceived relative advantage and result demonstrability. No differences were found for perceived ease of use and Web merchant trustworthiness."
]
} |
1512.04912 | 2949513114 | Consumer spending accounts for a large fraction of the US economic activity. Increasingly, consumer activity is moving to the web, where digital traces of shopping and purchases provide valuable data about consumer behavior. We analyze these data extracted from emails and combine them with demographic information to characterize, model, and predict consumer behavior. Breaking down purchasing by age and gender, we find that the amount of money spent on online purchases grows sharply with age, peaking in late 30s. Men are more frequent online purchasers and spend more money when compared to women. Linking online shopping to income, we find that shoppers from more affluent areas purchase more expensive items and buy them more frequently, resulting in significantly more money spent on online purchases. We also look at dynamics of purchasing behavior and observe daily and weekly cycles in purchasing behavior, similarly to other online activities. More specifically, we observe temporal patterns in purchasing behavior suggesting shoppers have finite budgets: the more expensive an item, the longer the shopper waits since the last purchase to buy it. We also observe that shoppers who email each other purchase more similar items than socially unconnected shoppers, and this effect is particularly evident among women. Finally, we build a model to predict when shoppers will make a purchase and how much they will spend on it. We find that temporal features improve prediction accuracy over competitive baselines. A better understanding of consumer behavior can help improve marketing efforts and make online shopping more pleasant and efficient. | Social influence is also a crucial factor that steers customer behavior during online shopping. Often, social media is used to communicate purchase intents, which can be automatically detected with text analysis @cite_4 . 
Also, social ties allow for the propagation of information about effective shopping practices, such as finding the most convenient store to buy from @cite_45 or recommending what to buy next @cite_35 . Survey-based studies have found that shopping recommendations increase the willingness to buy more among women than among men @cite_2 . | {
"cite_N": [
"@cite_35",
"@cite_45",
"@cite_4",
"@cite_2"
],
"mid": [
"1994473607",
"2109816759",
"2036282699",
"2111291834"
],
"abstract": [
"We present an analysis of a person-to-person recommendation network, consisting of 4 million people who made 16 million recommendations on half a million products. We observe the propagation of recommendations and the cascade sizes, which we explain by a simple stochastic model. We analyze how user behavior varies within user communities defined by a recommendation network. Product purchases follow a ‘long tail’ where a significant share of purchases belongs to rarely sold items. We establish how the recommendation network grows over time and how effective it is from the viewpoint of the sender and receiver of the recommendations. While on average recommendations are not very effective at inducing purchases and do not spread very far, we present a model that successfully identifies communities, product, and pricing categories for which viral marketing seems to be very effective.",
"While social interactions are critical to understanding consumer behavior, the relationship between social and commerce networks has not been explored on a large scale. We analyze Taobao, a Chinese consumer marketplace that is the world's largest e-commerce website. What sets Taobao apart from its competitors is its integrated instant messaging tool, which buyers can use to ask sellers about products or ask other buyers for advice. In our study, we focus on how an individual's commercial transactions are embedded in their social graphs. By studying triads and the directed closure process, we quantify the presence of information passing and gain insights into when different types of links form in the network. Using seller ratings and review information, we then quantify a price of trust. How much will a consumer pay for transaction with a trusted seller? We conclude by modeling this consumer choice problem: if a buyer wishes to purchase a particular product, how does (s)he decide which store to purchase it from? By analyzing the performance of various feature sets in an information retrieval setting, we demonstrate how the social graph factors into understanding consumer behavior.",
"This document describes techniques for identifying purchase intent in social posts. In one or more implementations, a topic is received and social posts to one or more social networks that are related to the topic are collected. Then, one or more purchase intent posts expressing purchase intent towards the topic are identified from the collected social posts. In one or more implementations a purchase intent model, usable to identify social posts expressing purchase intent, is built from a training corpus of annotated social posts.",
"This article examines how men and women differ in both their perceptions of the risks associated with shopping online and the effect of receiving a site recommendation from a friend. The first study examines how gender affects the perceptions of the probability of negative outcomes and the severity of such negative outcomes should they occur for five risks associated with buying online (i.e., credit card misuse, fraudulent sites, loss of privacy, shipping problems, and product failure). The second study examines gender differences in the effect of receiving a recommendation from a friend on perceptions of online purchase risk. The third study experimentally tests whether, compared to men, women will be more likely to increase their willingness to purchase online if they receive a site recommendation from a friend. The results suggest that, even when controlling for differences in Internet usage, women perceive a higher level of risk in online purchasing than do men. In addition, having a site recommended by a friend leads to both a greater reduction in perceived risk and a stronger increase in willingness to buy online among women than among men."
]
} |
1512.04912 | 2949513114 | Consumer spending accounts for a large fraction of the US economic activity. Increasingly, consumer activity is moving to the web, where digital traces of shopping and purchases provide valuable data about consumer behavior. We analyze these data extracted from emails and combine them with demographic information to characterize, model, and predict consumer behavior. Breaking down purchasing by age and gender, we find that the amount of money spent on online purchases grows sharply with age, peaking in late 30s. Men are more frequent online purchasers and spend more money when compared to women. Linking online shopping to income, we find that shoppers from more affluent areas purchase more expensive items and buy them more frequently, resulting in significantly more money spent on online purchases. We also look at dynamics of purchasing behavior and observe daily and weekly cycles in purchasing behavior, similarly to other online activities. More specifically, we observe temporal patterns in purchasing behavior suggesting shoppers have finite budgets: the more expensive an item, the longer the shopper waits since the last purchase to buy it. We also observe that shoppers who email each other purchase more similar items than socially unconnected shoppers, and this effect is particularly evident among women. Finally, we build a model to predict when shoppers will make a purchase and how much they will spend on it. We find that temporal features improve prediction accuracy over competitive baselines. A better understanding of consumer behavior can help improve marketing efforts and make online shopping more pleasant and efficient. | Purchase behaviors in offline stores have been extensively investigated as they have direct consequences on the revenue potential of retailers and advertisers. Survey-based studies attempted to isolate the factors that lead a customer to buy an item or, in other words, to understand what the strongest predictors of a purchase are. 
Although the mere amount of online activity of a customer can predict to some extent the occurrence of a future purchase @cite_1 , multifaceted predictive models have been proposed in the past. Features related to the phase of information gathering (access to search features, prior trust of the website) and to the purchase potential (monetary resources, product value) can often predict whether a given item will be purchased or not @cite_0 @cite_16 . | {
"cite_N": [
"@cite_0",
"@cite_16",
"@cite_1"
],
"mid": [
"2084288664",
"2118436669",
"2069722994"
],
"abstract": [
"This paper tests the ability of two consumer theories-the theory of reasoned action and the theory of planned behavior-in predicting consumer online grocery buying intention. In addition, a comparison of the two theories is conducted. Data were collected from two web-based surveys of Danish (n=1222) and Swedish (n=1038) consumers using self-administered questionnaires. These results suggest that the theory of planned behavior (with the inclusion of a path from subjective norm to attitude) provides the best fit to the data and explains the highest proportion of variation in online grocery buying intention.",
"This paper extends Ajzen's (1991) theory of planned behavior (TPB) to explain and predict the process of e-commerce adoption by consumers. The process is captured through two online consumer behaviors: (1) getting information and (2) purchasing a product from a Web vendor. First, we simultaneously model the association between these two contingent online behaviors and their respective intentions by appealing to consumer behavior theories and the theory of implementation intentions, respectively. Second, following TPB, we derive for each behavior its intention, attitude, subjective norm, and perceived behavioral control (PBC). Third, we elicit and test a comprehensive set of salient beliefs for each behavior. A longitudinal study with online consumers supports the proposed e-commerce adoption model, validating the predictive power of TPB and the proposed conceptualization of PBC as a higher-order factor formed by self-efficacy and controllability. Our findings stress the importance of trust and technology adoption variables (perceived usefulness and ease of use) as salient beliefs for predicting e-commerce adoption, justifying the integration of trust and technology adoption variables within the TPB framework. In addition, technological characteristics (download delay, Website navigability, and information protection), consumer skills, time and monetary resources, and product characteristics (product diagnosticity and product value) add to the explanatory and predictive power of our model. Implications for Information Systems, e-commerce, TPB, and the study of trust are discussed.",
"Consumers worldwide can shop online 24 hours a day, seven days a week, 365 days a year. Some market sectors, including insurance, financial services, computer hardware and software, travel, books, music, video, flowers, and automobiles, are experiencing rapid growth in online sales. For example, in Jan. 1999, Dell Computer Corp. was selling an average of @math 294 billion by 2002, online retailing raises many questions about how to market on the Net."
]
} |
1512.04866 | 2949122587 | An octilinear drawing of a planar graph is one in which each edge is drawn as a sequence of horizontal, vertical and diagonal at 45 degrees line-segments. For such drawings to be readable, special care is needed in order to keep the number of bends small. As the problem of finding planar octilinear drawings of minimum number of bends is NP-hard, in this paper we focus on upper and lower bounds. From a recent result of on the slope number of planar graphs, we can derive an upper bound of 4n-10 bends for 8-planar graphs with n vertices. We considerably improve this general bound and corresponding previous ones for triconnected 4-, 5- and 6-planar graphs. We also derive non-trivial lower bounds for these three classes of graphs by a technique inspired by the network flow formulation of Tamassia. | Octilinear drawings form a natural extension of the so-called orthogonal drawings, which allow for horizontal and vertical edge segments only. For such drawings, the bend minimization problem can be solved efficiently, assuming that the input is an embedded graph @cite_12 . However, the corresponding minimization problem over all embeddings of the input graph is NP-hard @cite_1 . Note that in @cite_12 the author describes how one can extend his approach so as to compute a bend-optimal octilinear representation (recall that a representation of a graph describes the angles and the bends of a drawing, neglecting its exact geometry @cite_12 ) of any given embedded @math -planar graph. However, such a representation may not be realizable by a corresponding planar octilinear drawing @cite_17 . | {
"cite_N": [
"@cite_1",
"@cite_12",
"@cite_17"
],
"mid": [
"2018091581",
"2002025203",
"1986266847"
],
"abstract": [
"A directed graph is upward planar if it can be drawn in the plane such that every edge is a monotonically increasing curve in the vertical direction and no two edges cross. An undirected graph is rectilinear planar if it can be drawn in the plane such that every edge is a horizontal or vertical segment and no two edges cross. Testing upward planarity and rectilinear planarity are fundamental problems in the effective visualization of various graph and network structures. For example, upward planarity is useful for the display of order diagrams and subroutine-call graphs, while rectilinear planarity is useful for the display of circuit schematics and entity-relationship diagrams. We show that upward planarity testing and rectilinear planarity testing are NP-complete problems. We also show that it is NP-hard to approximate the minimum number of bends in a planar orthogonal drawing of an n-vertex graph with an @math error for any @math .",
"Given a planar graph G together with a planar representation P, a region preserving grid embedding of G is a planar embedding of G in the rectilinear grid that has planar representation isomorphic to P. In this paper, an algorithm is presented that computes a region preserving grid embedding with the minimum number of bends in edges. This algorithm makes use of network flow techniques, and runs in time @math , where n is the number of vertices of the graph. Constrained versions of the problem are also considered, and most results are extended to k-gonal graphs, i.e., graphs whose edges are sequences of segments with slope multiple of @math degrees. Applications of the above results can be found in several areas: VLSI circuit layout, architectural design, communication by light or microwave, transportation problems, and automatic layout of graphlike diagrams.",
"We connect two aspects of graph drawing, namely angular resolution, and the possibility to draw with all angles an integer multiple of 2π/d. A planar graph with angular resolution at least π/2 can be drawn with all angles an integer multiple of π/2 (rectilinear). For d ≠ 4, d > 2, an angular resolution of 2π/d does not imply that the graph can be drawn with all angles an integer multiple of 2π/d. We argue that the exceptional situation for d = 4 is due to the absence of triangles in the rectangular grid."
]
} |
1512.04866 | 2949122587 | An octilinear drawing of a planar graph is one in which each edge is drawn as a sequence of horizontal, vertical and diagonal at 45 degrees line-segments. For such drawings to be readable, special care is needed in order to keep the number of bends small. As the problem of finding planar octilinear drawings of minimum number of bends is NP-hard, in this paper we focus on upper and lower bounds. From a recent result of on the slope number of planar graphs, we can derive an upper bound of 4n-10 bends for 8-planar graphs with n vertices. We considerably improve this general bound and corresponding previous ones for triconnected 4-, 5- and 6-planar graphs. We also derive non-trivial lower bounds for these three classes of graphs by a technique inspired by the network flow formulation of Tamassia. | For orthogonal drawings, several bounds on the total number of bends are known. Biedl @cite_9 presents lower bounds for graphs of maximum degree @math based on their connectivity (simply connected, biconnected or triconnected), planarity (planar or not) and simplicity (simple or non-simple with multiedges or selfloops). It is also known that any @math -planar graph (except for the octahedron graph) admits a planar orthogonal drawing with at most two bends per edge @cite_0 @cite_15 . Trivially, this yields an upper bound of @math bends, which can be improved to @math @cite_0 . Note that the best known lower bound is due to @cite_2 , who presented @math -planar graphs requiring @math bends. | {
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_2",
"@cite_15"
],
"mid": [
"2117205241",
"1568439834",
"2067795681",
"2012855423"
],
"abstract": [
"Abstract An orthogonal drawing of a graph is an embedding in the plane such that all edges are drawn as sequences of horizontal and vertical segments. Linear time and space algorithms for drawing biconnected planar graphs orthogonally with at most 2n + 4 bends on a grid of size n × n are known in the literature. In this paper we generalize this result to connected and non-planar graphs. Moreover, we show that in almost all cases each edge is bent at most twice. The algorithm handles both planar and non-planar graphs at the same time.",
"An orthogonal drawing is an embedding of a graph such that edges are drawn as sequences of horizontal and vertical segments. In this paper we explore lower bounds. We find lower bounds on the number of bends when crossings are allowed, and lower bounds on both the grid-size and the number of bends for planar and plane drawings.",
"Abstract We study planar orthogonal drawings of graphs and provide lower bounds on the number of bends along the edges. We exhibit graphs on n vertices that require Ω( n ) bends in any layout, and show that there exist optimal drawings that require Ω( n ) bends and have all of them on a single edge of length Ω( n^2 ). This work finds applications in VLSI layout, aesthetic graph drawing, and communication by light or microwave.",
"Abstract In this paper we describe a linear algorithm for embedding planar graphs in the rectilinear two-dimensional grid, where vertices are grid points and edges are noncrossing grid paths. The main feature of our algorithm is that each edge is guaranteed to have at most 2 bends (with the single exception of the octahedron for which 3 bends are needed). The total number of bends is at most 2n + 4 if the graph is biconnected and at most (7/3)n in the general case. The area is (n + 1)^2 in the worst case. This problem has several applications to VLSI circuit design, aesthetic layout of diagrams, computational geometry."
]
} |
1512.05030 | 2952753782 | Morpho-syntactic lexicons provide information about the morphological and syntactic roles of words in a language. Such lexicons are not available for all languages and even when available, their coverage can be limited. We present a graph-based semi-supervised learning method that uses the morphological, syntactic and semantic relations between words to automatically construct wide coverage lexicons from small seed sets. Our method is language-independent, and we show that we can expand a 1000 word seed lexicon to more than 100 times its size with high quality for 11 languages. In addition, the automatically created lexicons provide features that improve performance in two downstream tasks: morphological tagging and dependency parsing. | construct morpho-syntactic lexicons by incrementally merging inflectional classes with shared morphological features. Natural language lexicons have often been created from smaller seed lexicons using various methods. use patterns extracted over a large corpus to learn semantic lexicons from smaller seed lexicons using bootstrapping. use distributional similarity scores across instances to propagate attributes using random walks over a graph. learn potential semantic frames for unknown predicates by expanding a seed frame lexicon. Sentiment lexicons containing semantic polarity labels for words and phrases have been created using bootstrapping and graph-based learning @cite_2 @cite_34 @cite_15 @cite_78 @cite_47 . | {
"cite_N": [
"@cite_78",
"@cite_15",
"@cite_2",
"@cite_47",
"@cite_34"
],
"mid": [
"1513756337",
"2097162496",
"1549434858",
"2131305515",
"2160250477"
],
"abstract": [
"We propose a method for extracting semantic orientations of phrases (pairs of an adjective and a noun): positive, negative, or neutral. Given an adjective, the semantic orientation classification of phrases can be reduced to the classification of words. We construct a lexical network by connecting similar related words. In the network, each node has one of the three orientation values and the neighboring nodes tend to have the same value. We adopt the Potts model for the probability model of the lexical network. For each adjective, we estimate the states of the nodes, which indicate the semantic orientations of the adjective-noun pairs. Unlike existing methods for phrase classification, the proposed method can classify phrases consisting of unseen words. We also propose to use unlabeled data for a seed set of probability computation. Empirical evaluation shows the effectiveness of the proposed method.",
"We examine the viability of building large polarity lexicons semi-automatically from the web. We begin by describing a graph propagation framework inspired by previous work on constructing polarity lexicons from lexical graphs (Kim and Hovy, 2004; Hu and Liu, 2004; Esuli and Sabastiani, 2009; Blair-, 2008; Rao and Ravichandran, 2009). We then apply this technique to build an English lexicon that is significantly larger than those previously studied. Crucially, this web-derived lexicon does not require WordNet, part-of-speech taggers, or other language-dependent resources typical of sentiment analysis systems. As a result, the lexicon is not limited to specific word classes -- e.g., adjectives that occur in WordNet -- and in fact contains slang, misspellings, multiword expressions, etc. We evaluate a lexicon derived from English documents, both qualitatively and quantitatively, and show that it provides superior performance to previously studied lexicons, including one derived from WordNet.",
"This article discusses a bootstrapping method for building subjectivity lexicons for languages with scarce resources.",
"The explosion of Web opinion data has made essential the need for automatic tools to analyze and understand people's sentiments toward different topics. In most sentiment analysis applications, the sentiment lexicon plays a central role. However, it is well known that there is no universally optimal sentiment lexicon since the polarity of words is sensitive to the topic domain. Even worse, in the same domain the same word may indicate different polarities with respect to different aspects. For example, in a laptop review, \"large\" is negative for the battery aspect while being positive for the screen aspect. In this paper, we focus on the problem of learning a sentiment lexicon that is not only domain specific but also dependent on the aspect in context given an unlabeled opinionated text collection. We propose a novel optimization framework that provides a unified and principled way to combine different sources of information for learning such a context-dependent sentiment lexicon. Experiments on two data sets (hotel reviews and customer feedback surveys on printers) show that our approach can not only identify new sentiment words specific to the given domain but also determine the different polarities of a word depending on the aspect in context. In further quantitative evaluation, our method is proved to be effective in constructing a high quality lexicon by comparing with a human annotated gold standard. In addition, using the learned context-dependent sentiment lexicon improved the accuracy in an aspect-level sentiment classification task.",
"Sentiment analysis often relies on a semantic orientation lexicon of positive and negative words. A number of approaches have been proposed for creating such lexicons, but they tend to be computationally expensive, and usually rely on significant manual annotation and large corpora. Most of these methods use WordNet. In contrast, we propose a simple approach to generate a high-coverage semantic orientation lexicon, which includes both individual words and multi-word expressions, using only a Roget-like thesaurus and a handful of affixes. Further, the lexicon has properties that support the Polyanna Hypothesis. Using the General Inquirer as gold standard, we show that our lexicon has 14 percentage points more correct entries than the leading WordNet-based high-coverage lexicon (SentiWordNet). In an extrinsic evaluation, we obtain significantly higher performance in determining phrase polarity using our thesaurus-based lexicon than with any other. Additionally, we explore the use of visualization techniques to gain insight into the our algorithm beyond the evaluations mentioned above."
]
} |
1512.05030 | 2952753782 | Morpho-syntactic lexicons provide information about the morphological and syntactic roles of words in a language. Such lexicons are not available for all languages and even when available, their coverage can be limited. We present a graph-based semi-supervised learning method that uses the morphological, syntactic and semantic relations between words to automatically construct wide coverage lexicons from small seed sets. Our method is language-independent, and we show that we can expand a 1000 word seed lexicon to more than 100 times its size with high quality for 11 languages. In addition, the automatically created lexicons provide features that improve performance in two downstream tasks: morphological tagging and dependency parsing. | In general, graph-based semi-supervised learning is heavily used in NLP @cite_26 @cite_5 . Graph-based learning has been used for class-instance acquisition @cite_4 , text classification @cite_48 , summarization @cite_52 , structured prediction problems @cite_75 @cite_16 @cite_60 etc. Our work differs from most of these approaches in that we specifically learn how different features shared between the nodes can correspond to either the propagation of an attribute or an inversion of the attribute value (cf. equ ). In terms of the capability of inverting an attribute value, our method is close to , who present a framework to include dissimilarity between nodes and , who learn which edges can be excluded for label propagation. In terms of featurizing the edges, our work resembles previous work which measured similarity between nodes in terms of similarity between the feature types that they share @cite_80 @cite_10 . Our work is also related to graph-based metric learning, where the objective is to learn a suitable distance metric between the nodes of a graph for solving a given problem @cite_33 @cite_38 . | {
"cite_N": [
"@cite_38",
"@cite_26",
"@cite_4",
"@cite_33",
"@cite_60",
"@cite_80",
"@cite_48",
"@cite_10",
"@cite_52",
"@cite_5",
"@cite_16",
"@cite_75"
],
"mid": [
"1563100697",
"2106829288",
"2138288827",
"2106053110",
"2162302090",
"2138437702",
"2132346211",
"2252039676",
"2110693578",
"",
"2142523187",
"1709989312"
],
"abstract": [
"In many domain adaption formulations, it is assumed to have large amount of unlabeled data from the domain of interest (target domain), some portion of it may be labeled, and large amount of labeled data from other domains, also known as source domain(s). Motivated by the fact that labeled data is hard to obtain in any domain, we design algorithms for the settings in which there exists large amount of unlabeled data from all domains, small portion of which may be labeled. We build on recent advances in graph-based semi-supervised learning and supervised metric learning. Given all instances, labeled and unlabeled, from all domains, we build a large similarity graph between them, where an edge exists between two instances if they are close according to some metric. Instead of using predefined metric, as commonly performed, we feed the labeled instances into metric-learning algorithms and (re)construct a data-dependent metric, which is used to construct the graph. We employ different types of edges depending on the domain-identity of the two vertices touching it, and learn the weights of each edge. Experimental results show that our approach leads to significant reduction in classification error across domains, and performs better than two state-of-the-art models on the task of sentiment classification.",
"Graph-based Semi-supervised learning (SSL) algorithms have been successfully used in a large number of applications. These methods classify initially unlabeled nodes by propagating label information over the structure of graph starting from seed nodes. Graph-based SSL algorithms usually scale linearly with the number of distinct labels (m), and require O(m) space on each node. Unfortunately, there exist many applications of practical signicance with very large m over large graphs, demanding better space and time complexity. In this paper, we propose MAD-Sketch, a novel graph-based SSL algorithm which compactly stores label distribution on each node using Count-min Sketch, a randomized data structure. We present theoretical analysis showing that under mild conditions, MAD-Sketch can reduce space complexity at each node from O(m) to O(logm), and achieve similar savings in time complexity as well. We support our analysis through experiments on multiple real world datasets. We observe that MAD-Sketch achieves similar performance as existing state-of-the-art graph-based SSL algorithms, while requiring smaller memory footprint and at the same time achieving up to 10x speedup. We nd that MAD-Sketch is able to scale to datasets with one million labels, which is beyond the scope of existing graph-based SSL algorithms.",
"Graph-based semi-supervised learning (SSL) algorithms have been successfully used to extract class-instance pairs from large unstructured and structured text collections. However, a careful comparison of different graph-based SSL algorithms on that task has been lacking. We compare three graph-based SSL algorithms for class-instance acquisition on a variety of graphs constructed from different domains. We find that the recently proposed MAD algorithm is the most effective. We also show that class-instance extraction can be significantly improved by adding semantic information in the form of instance-attribute edges derived from an independently developed knowledge base. All of our code and data will be made publicly available to encourage reproducible research in this area.",
"The accuracy of k-nearest neighbor (kNN) classification depends significantly on the metric used to compute distances between different examples. In this paper, we show how to learn a Mahalanobis distance metric for kNN classification from labeled examples. The Mahalanobis metric can equivalently be viewed as a global linear transformation of the input space that precedes kNN classification using Euclidean distances. In our approach, the metric is trained with the goal that the k-nearest neighbors always belong to the same class while examples from different classes are separated by a large margin. As in support vector machines (SVMs), the margin criterion leads to a convex optimization based on the hinge loss. Unlike learning in SVMs, however, our approach requires no modification or extension for problems in multiway (as opposed to binary) classification. In our framework, the Mahalanobis distance metric is obtained as the solution to a semidefinite program. On several data sets of varying size and difficulty, we find that metrics trained in this way lead to significant improvements in kNN classification. Sometimes these results can be further improved by clustering the training examples and learning an individual metric within each cluster. We show how to learn and combine these local metrics in a globally integrated manner.",
"Developing natural language processing tools for low-resource languages often requires creating resources from scratch. While a variety of semi-supervised methods exist for training from incomplete data, there are open questions regarding what types of training data should be used and how much is necessary. We discuss a series of experiments designed to shed light on such questions in the context of part-of-speech tagging. We obtain timed annotations from linguists for the low-resource languages Kinyarwanda and Malagasy (as well as English) and evaluate how the amounts of various kinds of data affect performance of a trained POS-tagger. Our results show that annotation of word types is the most important, provided a sufficiently capable semi-supervised learning infrastructure is in place to project type information onto a raw corpus. We also show that finitestate morphological analyzers are effective sources of type information when few labeled examples are available.",
"A key problem in document classification and clustering is learning the similarity between documents. Traditional approaches include estimating similarity between feature vectors of documents where the vectors are computed using TF-IDF in the bag-of-words model. However, these approaches do not work well when either similar documents do not use the same vocabulary or the feature vectors are not estimated correctly. In this paper, we represent documents and keywords using multiple layers of connected graphs. We pose the problem of simultaneously learning similarity between documents and keyword weights as an edge-weight regularization problem over the different layers of graphs. Unlike most feature weight learning algorithms, we propose an unsupervised algorithm in the proposed framework to simultaneously optimize similarity and the keyword weights. We extrinsically evaluate the performance of the proposed similarity measure on two different tasks, clustering and classification. The proposed similarity measure outperforms the similarity measure proposed by (, 2010), a state-of-the-art classification algorithm (Zhou and Burges, 2007) and three different baselines on a variety of standard, large data sets.",
"We propose a new graph-based semi-supervised learning (SSL) algorithm and demonstrate its application to document categorization. Each document is represented by a vertex within a weighted undirected graph and our proposed framework minimizes the weighted Kullback-Leibler divergence between distributions that encode the class membership probabilities of each vertex. The proposed objective is convex with guaranteed convergence using an alternating minimization procedure. Further, it generalizes in a straightforward manner to multi-class problems. We present results on two standard tasks, namely Reuters-21578 and WebKB, showing that the proposed algorithm significantly outperforms the state-of-the-art.",
"In this work, we propose a graph-based approach to computing similarities between words in an unsupervised manner, and take advantage of heterogeneous feature types in the process. The approach is based on the creation of two separate graphs, one for words and one for features of different types (alignmentbased, orthographic, etc.). The graphs are connected through edges that link nodes in the feature graph to nodes in the word graph, the edge weights representing the importance of a particular feature for a particular word. High quality graphs are learned during training, and the proposed method outperforms experimental baselines.",
"We introduce a stochastic graph-based method for computing relative importance of textual units for Natural Language Processing. We test the technique on the problem of Text Summarization (TS). Extractive TS relies on the concept of sentence salience to identify the most important sentences in a document or set of documents. Salience is typically defined in terms of the presence of particular important words or in terms of similarity to a centroid pseudo-sentence. We consider a new approach, LexRank, for computing sentence importance based on the concept of eigenvector centrality in a graph representation of sentences. In this model, a connectivity matrix based on intra-sentence cosine similarity is used as the adjacency matrix of the graph representation of sentences. Our system, based on LexRank ranked in first place in more than one task in the recent DUC 2004 evaluation. In this paper we present a detailed analysis of our approach and apply it to a larger data set including data from earlier DUC evaluations. We discuss several methods to compute centrality using the similarity graph. The results show that degree-based methods (including LexRank) outperform both centroid-based methods and other systems participating in DUC in most of the cases. Furthermore, the LexRank with threshold method outperforms the other degree-based techniques including continuous LexRank. We also show that our approach is quite insensitive to the noise in the data that may result from an imperfect topical clustering of documents.",
"",
"We describe a novel approach for inducing unsupervised part-of-speech taggers for languages that have no labeled training data, but have translated text in a resource-rich language. Our method does not assume any knowledge about the target language (in particular no tagging dictionary is assumed), making it applicable to a wide array of resource-poor languages. We use graph-based label propagation for cross-lingual knowledge transfer and use the projected labels as features in an unsupervised model (Berg-, 2010). Across eight European languages, our approach results in an average absolute improvement of 10.4 over a state-of-the-art baseline, and 16.7 over vanilla hidden Markov models induced with the Expectation Maximization algorithm.",
"We describe a new scalable algorithm for semi-supervised training of conditional random fields (CRF) and its application to part-of-speech (POS) tagging. The algorithm uses a similarity graph to encourage similar n-grams to have similar POS tags. We demonstrate the efficacy of our approach on a domain adaptation task, where we assume that we have access to large amounts of unlabeled data from the target domain, but no additional labeled data. The similarity graph is used during training to smooth the state posteriors on the target domain. Standard inference can be used at test time. Our approach is able to scale to very large problems and yields significantly improved target domain accuracy."
]
} |
1512.05030 | 2952753782 | Morpho-syntactic lexicons provide information about the morphological and syntactic roles of words in a language. Such lexicons are not available for all languages and even when available, their coverage can be limited. We present a graph-based semi-supervised learning method that uses the morphological, syntactic and semantic relations between words to automatically construct wide coverage lexicons from small seed sets. Our method is language-independent, and we show that we can expand a 1000 word seed lexicon to more than 100 times its size with high quality for 11 languages. In addition, the automatically created lexicons provide features that improve performance in two downstream tasks: morphological tagging and dependency parsing. | High morphological complexity exacerbates the problem of feature sparsity in many NLP applications and leads to poor estimation of model parameters, emphasizing the need of morphological analysis. Morphological analysis encompasses fields like morphological segmentation @cite_39 @cite_11 @cite_54 @cite_66 @cite_13 , and inflection generation @cite_20 @cite_58 . Such models of segmentation and inflection generation are used to better understand the meaning and relations between words. Our task is complementary to the task of morphological paradigm generation. Paradigm generation requires generating all possible morphological forms of a given base-form according to different linguistic transformations @cite_27 @cite_24 @cite_35 @cite_73 @cite_81 @cite_0 , whereas our task requires identifying linguistic transformations between two different word forms. | {
"cite_N": [
"@cite_35",
"@cite_73",
"@cite_54",
"@cite_39",
"@cite_24",
"@cite_0",
"@cite_27",
"@cite_81",
"@cite_58",
"@cite_13",
"@cite_66",
"@cite_20",
"@cite_11"
],
"mid": [
"2251652123",
"",
"2116211107",
"2053306448",
"1642393990",
"",
"2103170111",
"",
"2007976151",
"1839584883",
"1975638594",
"2047603832",
""
],
"abstract": [
"We present a semi-supervised approach to the problem of paradigm induction from inflection tables. Our system extracts generalizations from inflection tables, representing the resulting paradigms in an abstract form. The process is intended to be language-independent, and to provide human-readable generalizations of paradigms. The tools we provide can be used by linguists for the rapid creation of lexical resources. We evaluate the system through an inflection table reconstruction task using Wiktionary data for German, Spanish, and Finnish. With no additional corpus information available, the evaluation yields per word form accuracy scores on inflecting unseen base forms in different languages ranging from 87.81 (German nouns) to 99.52 (Spanish verbs); with additional unlabeled text corpora available for training the scores range from 91.81 (German nouns) to 99.58 (Spanish verbs). We separately evaluate the system in a simulated task of Swedish lexicon creation, and show that on the basis of a small number of inflection tables, the system can accurately collect from a list of noun forms a lexicon with inflection information ranging from 100.0 correct (collect 100 words), to 96.4 correct (collect 1000 words).",
"",
"For centuries, the deep connection between languages has brought about major discoveries about human communication. In this paper we investigate how this powerful source of information can be exploited for unsupervised language learning. In particular, we study the task of morphological segmentation of multiple languages. We present a nonparametric Bayesian model that jointly induces morpheme segmentations of each language under consideration and at the same time identifies cross-lingual morpheme patterns, or abstract morphemes. We apply our model to three Semitic languages: Arabic, Hebrew, Aramaic, as well as to English. Our results demonstrate that learning morphological models in tandem reduces error by up to 24 relative to monolingual models. Furthermore, we provide evidence that our joint model achieves better performance when applied to languages from the same family.",
"We present a model family called Morfessor for the unsupervised induction of a simple morphology from raw text data. The model is formulated in a probabilistic maximum a posteriori framework. Morfessor can handle highly inflecting and compounding languages where words can consist of lengthy sequences of morphemes. A lexicon of word segments, called morphs, is induced from the data. The lexicon stores information about both the usage and form of the morphs. Several instances of the model are evaluated quantitatively in a morpheme segmentation task on different sized sets of Finnish as well as English data. Morfessor is shown to perform very well compared to a widely known benchmark algorithm, in particular on Finnish data.",
"We describe a supervised approach to predicting the set of all inflected forms of a lexical item. Our system automatically acquires the orthographic transformation rules of morphological paradigms from labeled examples, and then learns the contexts in which those transformations apply using a discriminative sequence model. Because our approach is completely data-driven and the model is trained on examples extracted from Wiktionary, our method can extend to new languages without change. Our end-to-end system is able to predict complete paradigms with 86.1 accuracy and individual inflected forms with 94.9 accuracy, averaged across three languages and two parts of speech.",
"",
"We present an inference algorithm that organizes observed words (tokens) into structured inflectional paradigms (types). It also naturally predicts the spelling of unobserved forms that are missing from these paradigms, and discovers inflectional principles (grammar) that generalize to wholly unobserved words. Our Bayesian generative model of the data explicitly represents tokens, types, inflections, paradigms, and locally conditioned string edits. It assumes that inflected word tokens are generated from an infinite mixture of inflectional paradigms (string tuples). Each paradigm is sampled all at once from a graphical model, whose potential functions are weighted finite-state transducers with language-specific parameters to be learned. These assumptions naturally lead to an elegant empirical Bayes inference procedure that exploits Monte Carlo EM, belief propagation, and dynamic programming. Given 50--100 seed paradigms, adding a 10-million-word corpus reduces prediction error for morphological inflections by up to 10 .",
"",
"This paper presents the WordFrame model, a noise-robust supervised algorithm capable of inducing morphological analyses for languages which exhibit prefixation, suffixation, and internal vowel shifts. In combination with a naive approach to suffix-based morphology, this algorithm is shown to be remarkably effective across a broad range of languages, including those exhibiting infixation and partial reduplication. Results are presented for over 30 languages with a median accuracy of 97.5 on test sets including both regular and irregular verbal inflections. Because the proposed method trains extremely well under conditions of high noise, it is an ideal candidate for use in co-training with unsupervised algorithms.",
"Most state-of-the-art systems today produce morphological analysis based only on orthographic patterns. In contrast, we propose a model for unsupervised morphological analysis that integrates orthographic and semantic views of words. We model word formation in terms of morphological chains, from base words to the observed words, breaking the chains into parent-child relations. We use log-linear models with morpheme and word-level features to predict possible parents, including their modifications, for each word. The limited set of candidate parents for each word render contrastive estimation feasible. Our model consistently matches or outperforms five state-of-the-art systems on Arabic, English and Turkish.",
"Morphological segmentation breaks words into morphemes (the basic semantic units). It is a key component for natural language processing systems. Unsupervised morphological segmentation is attractive, because in every language there are virtually unlimited supplies of text, but very few labeled resources. However, most existing model-based systems for unsupervised morphological segmentation use directed generative models, making it difficult to leverage arbitrary overlapping features that are potentially helpful to learning. In this paper, we present the first log-linear model for unsupervised morphological segmentation. Our model uses overlapping features such as morphemes and their contexts, and incorporates exponential priors inspired by the minimum description length (MDL) principle. We present efficient algorithms for learning and inference by combining contrastive estimation with sampling. Our system, based on monolingual features only, outperforms a state-of-the-art system by a large margin, even when the latter uses bilingual information such as phrasal alignment and phonetic correspondence. On the Arabic Penn Treebank, our system reduces F1 error by 11 compared to Morfessor.",
"This paper presents a corpus-based algorithm capable of inducing inflectional morphological analyses of both regular and highly irregular forms (such as brought→bring) from distributional patterns in large monolingual text with no direct supervision. The algorithm combines four original alignment models based on relative corpus frequency, contextual similarity, weighted string similarity and incrementally retrained inflectional transduction probabilities. Starting with no paired examples for training and no prior seeding of legal morphological transformations, accuracy of the induced analyses of 3888 past-tense test cases in English exceeds 99.2 for the set, with currently over 80 accuracy on the most highly irregular forms and 99.7 accuracy on forms exhibiting non-concatenative suffixation.",
""
]
} |
1512.05030 | 2952753782 | Morpho-syntactic lexicons provide information about the morphological and syntactic roles of words in a language. Such lexicons are not available for all languages and even when available, their coverage can be limited. We present a graph-based semi-supervised learning method that uses the morphological, syntactic and semantic relations between words to automatically construct wide coverage lexicons from small seed sets. Our method is language-independent, and we show that we can expand a 1000 word seed lexicon to more than 100 times its size with high quality for 11 languages. In addition, the automatically created lexicons provide features that improve performance in two downstream tasks: morphological tagging and dependency parsing. | Our algorithm can be used to generate morpho-syntactic lexicons for low-resourced languages, where the seed lexicon can be constructed, for example, using crowdsourcing @cite_30 @cite_84 . Morpho-syntactic resources have been developed for East European languages like Slovene @cite_28 @cite_7 , Bulgarian @cite_6 and highly agglutinative languages like Turkish @cite_50 . Morpho-syntactic lexicons are crucial components in acoustic modeling and automatic speech recognition, where they have been developed for low-resourced languages @cite_62 @cite_36 . | {
"cite_N": [
"@cite_30",
"@cite_62",
"@cite_7",
"@cite_28",
"@cite_36",
"@cite_84",
"@cite_6",
"@cite_50"
],
"mid": [
"2127849236",
"71758931",
"2916021403",
"150652179",
"2091746061",
"1785101966",
"",
"1518332699"
],
"abstract": [
"In this paper we give an introduction to using Amazon's Mechanical Turk crowdsourcing platform for the purpose of collecting data for human language technologies. We survey the papers published in the NAACL-2010 Workshop. 24 researchers participated in the workshop's shared task to create data for speech and language applications with $100.",
"Texts generated by automatic speech recognition (ASR) systems have some specificities, related to the idiosyncrasies of oral productions or the principles of ASR systems, that make them more difficult to exploit than more conventional natural language written texts. This paper aims at studying the interest of morphosyntactic information as a useful resource for ASR. We show the ability of automatic methods to tag outputs of ASR systems, by obtaining a tag accuracy similar for automatic transcriptions to the 95-98 usually reported for written texts, such as newspapers. We also demonstrate experimentally that tagging is useful to improve the quality of transcriptions by using morphosyntactic information in a post-processing stage of speech decoding. Indeed, we obtain a significant decrease of the word error rate with experiments done on French broadcast news from the ESTER corpus; we also notice an improvement of the sentence error rate and observe that a significant number of agreement errors are corrected.",
"The paper presents the third edition of the MULTEXT-East language resources, a multilingual dataset for language engineering research and development. This standardised and linked set of resources covers a large number of mainly Central and Eastern European languages and includes the EAGLES-based morphosyntactic specifications, defining the features that describe word-level syntactic annotations; medium scale morphosyntactic lexica; and annotated parallel, comparable, and speech corpora. The most important component is the linguistically annotated corpus consisting of Orwell’s novel “1984” in the English original and translations. The resources are the results of several EU projects: MULTEXT-East (produced linked resources for Romanian, Slovene, Czech, Bulgarian, Estonian, Hungarian and English), TELRI (added resources for Lithuanian, Croatian, Serbian, and Russian; first release), and CONCEDE (validation, re-encoding; partial re-release). This paper presents the third release of the resources, which brings together the first two, makes them available in TEI P4 XML, and introduces further extensions, e.g., the specification for Resian, a dialect of Slovene. This dataset, unique in terms of languages and the wealth of encoding, is extensively documented, and freely available for research purposes. The paper presents the component resources, reviews some research undertaken on the basis of the first two editions, and discusses future plans.",
"The paper evaluates tagging techniques on a corpus of Slovene, where we are faced with a large number of possible word-class tags and only a small (hand-tagged) dataset. We report on training and testing of four different taggers on the Slovene MULTEXT-East corpus containing about 100.000 words and 1000 different morphosyntactic tags. Results show, first of all, that training times of the Maximum Entropy Tagger and the Rule Based Tagger are unacceptably long, while they are negligible for the Memory Based Taggers and the TnT tri-gram tagger. Results on a random split show that tagging accuracy varies between 86 and 89 overall, between 92 and 95 on known words and between 54 and 55 on unknown words. Best results are obtained by TnT. The paper also investigates performance in relation to our EAGLES-based morphosyntactic tagset. Here we compare the per-feature accuracy on the full tagset, and accuracies on these features when training on a reduced tagset. Results show that PoS accuracy is quite high, while accuracy on Case is lowest. Tagset reduction helps improve accuracy, but less than might be expected.",
"Speech processing for under-resourced languages is an active field of research, which has experienced significant progress during the past decade. We propose, in this paper, a survey that focuses on automatic speech recognition (ASR) for these languages. The definition of under-resourced languages and the challenges associated to them are first defined. The main part of the paper is a literature review of the recent (last 8years) contributions made in ASR for under-resourced languages. Examples of past projects and future trends when dealing with under-resourced languages are also presented. We believe that this paper will be a good starting point for anyone interested to initiate research in (or operational development of) ASR for one or several under-resourced languages. It should be clear, however, that many of the issues and approaches presented here, apply to speech technology in general (text-to-speech synthesis for instance).",
"In this work we present results from using Amazon's Mechanical Turk (MTurk) to annotate translation lexicons between English and a large set of less commonly used languages. We generate candidate translations for 100 English words in each of 42 foreign languages using Wikipedia and a lexicon induction framework. We evaluate the MTurk annotations by using positive and negative control candidate translations. Additionally, we evaluate the annotations by adding pairs to our seed dictionaries, providing a feedback loop into the induction system. MTurk workers are more successful in annotating some languages than others and are not evenly distributed around the world or among the world's languages. However, in general, we find that MTurk is a valuable resource for gathering cheap and simple annotations for most of the languages that we explored, and these annotations provide useful feedback in building a larger, more accurate lexicon.",
"",
"In this paper, we propose a set of language resources for building Turkish language processing applications. Specifically, we present a finite-state implementation of a morphological parser, an averaged perceptron-based morphological disambiguator, and compilation of a web corpus. Turkish is an agglutinative language with a highly productive inflectional and derivational morphology. We present an implementation of a morphological parser based on two-level morphology. This parser is one of the most complete parsers for Turkish and it runs independent of any other external system such as PC-KIMMO in contrast to existing parsers. Due to complex phonology and morphology of Turkish, parsing introduces some ambiguous parses. We developed a morphological disambiguator with accuracy of about 98 using averaged perceptron algorithm. We also present our efforts to build a Turkish web corpus of about 423 million words."
]
} |
1512.05030 | 2952753782 | Morpho-syntactic lexicons provide information about the morphological and syntactic roles of words in a language. Such lexicons are not available for all languages and even when available, their coverage can be limited. We present a graph-based semi-supervised learning method that uses the morphological, syntactic and semantic relations between words to automatically construct wide coverage lexicons from small seed sets. Our method is language-independent, and we show that we can expand a 1000 word seed lexicon to more than 100 times its size with high quality for 11 languages. In addition, the automatically created lexicons provide features that improve performance in two downstream tasks: morphological tagging and dependency parsing. | One alternative method to extract morphosyntactic lexicons is via parallel data @cite_16 . However, such methods assume that both the source and target languages are isomorphic with respect to morphology. This can be the case with attributes like coarse part-of-speech or case, but is rarely true for other attributes like gender, which is very language specific. | {
"cite_N": [
"@cite_16"
],
"mid": [
"2142523187"
],
"abstract": [
"We describe a novel approach for inducing unsupervised part-of-speech taggers for languages that have no labeled training data, but have translated text in a resource-rich language. Our method does not assume any knowledge about the target language (in particular no tagging dictionary is assumed), making it applicable to a wide array of resource-poor languages. We use graph-based label propagation for cross-lingual knowledge transfer and use the projected labels as features in an unsupervised model (Berg-, 2010). Across eight European languages, our approach results in an average absolute improvement of 10.4 over a state-of-the-art baseline, and 16.7 over vanilla hidden Markov models induced with the Expectation Maximization algorithm."
]
} |
1512.04828 | 2213338267 | In the era of mobile computing, understanding human mobility patterns is crucial in order to better design protocols and applications. Many studies focus on different aspects of human mobility such as people's points of interests, routes, traffic, individual mobility patterns, among others. In this work, we propose to look at human mobility through a social perspective, i.e., analyze the impact of social groups in mobility patterns. We use the MIT Reality Mining proximity trace to detect, track and investigate group's evolution throughout time. Our results show that group meetings happen in a periodical fashion and present daily and weekly periodicity. We analyze how groups' dynamics change over day hours and find that group meetings lasting longer are those with less changes in members composition and with members having stronger social bonds with each other. Our findings can be used to propose meeting prediction algorithms, opportunistic routing and information diffusion protocols, taking advantage of those revealed properties. | To perform group characterization from the analysis of mobility traces, we apply community detection methods. Since its introduction, community detection in complex networks has attracted a lot of attention. Algorithms for community detection can be classified according to two characteristics: overlapping versus non-overlapping, and static versus dynamic graphs. Among the many proposed algorithms, the studies in @cite_6 and @cite_13 have established themselves as the most popular and effective algorithms for community detection in static graphs. Studies such as @cite_0 aim to propose adaptations and new algorithms that are suited for dynamic graphs, considering computational efficiency issues. In our study, we use these developed methodologies, specifically @cite_6 , to detect and characterize social group dynamics by looking at proximity traces. | {
"cite_N": [
"@cite_0",
"@cite_13",
"@cite_6"
],
"mid": [
"2110724311",
"2037096232",
""
],
"abstract": [
"Many practical problems on Mobile networking, such as routing strategies in MANETs, sensor reprogramming in WSNs and worm containment in online social networks (OSNs) share an ubiquitous, yet interesting feature in their organizations: community structure. Knowledge of this structure provides us not only crucial information about the network principles, but also key insights into designing more effective algorithms for practical problems enabled by Mobile networking. However, understanding this interesting feature is extremely challenging on dynamic networks where changes to their topologies are frequently introduced, and especially when network communities in reality usually overlap with each other. We focus on the following questions (1) Can we effectively detect the overlapping community structure in a dynamic network? (2) Can we quickly and adaptively update the network structure only based on its history without recomputing from scratch? (3) How does the detection of network communities help mobile applications? We propose AFOCS, a two-phase framework for not only detecting quickly but also tracing effectively the evolution of overlapped network communities in dynamic mobile networks. With the great advantages of the overlapping community structure, AFOCS significantly helps in reducing up to 7 times the infection rates in worm containment on OSNs, and up to 11 times overhead while maintaining good delivery time and ratio in forwarding strategies in MANETs.",
"We propose an algorithm for finding overlapping community structure in very large networks. The algorithm is based on the label propagation technique of Raghavan, Albert and Kumara, but is able to detect communities that overlap. Like the original algorithm, vertices have labels that propagate between neighbouring vertices so that members of a community reach a consensus on their community membership. Our main contribution is to extend the label and propagation step to include information about more than one community: each vertex can now belong to up to v communities, where v is the parameter of the algorithm. Our algorithm can also handle weighted and bipartite networks. Tests on an independently designed set of benchmarks, and on real networks, show the algorithm to be highly effective in recovering overlapping communities. It is also very fast and can process very large and dense networks in a short time.",
""
]
} |
1512.04412 | 2949295283 | Semantic segmentation research has recently witnessed rapid progress, but many leading methods are unable to identify object instances. In this paper, we present Multi-task Network Cascades for instance-aware semantic segmentation. Our model consists of three networks, respectively differentiating instances, estimating masks, and categorizing objects. These networks form a cascaded structure, and are designed to share their convolutional features. We develop an algorithm for the nontrivial end-to-end training of this causal, cascaded structure. Our solution is a clean, single-step training framework and can be generalized to cascades that have more stages. We demonstrate state-of-the-art instance-aware semantic segmentation accuracy on PASCAL VOC. Meanwhile, our method takes only 360ms testing an image using VGG-16, which is two orders of magnitude faster than previous systems for this challenging problem. As a by product, our method also achieves compelling object detection results which surpass the competitive Fast Faster R-CNN systems. The method described in this paper is the foundation of our submissions to the MS COCO 2015 segmentation competition, where we won the 1st place. | Object detection methods @cite_14 @cite_21 @cite_3 @cite_11 involve predicting object bounding boxes and categories. The work of R-CNN @cite_14 adopts region proposal methods (e.g., @cite_4 @cite_23 ) for producing multiple instance proposals, which are used for CNN-based classification. In SPPnet @cite_21 and Fast R-CNN @cite_3 , the convolutional layers of CNNs are shared on the entire image for fast computation. Faster R-CNN @cite_11 exploits the shared convolutional features to extract region proposals used by the detector. Sharing convolutional features leads to substantially faster speed for object detection systems @cite_21 @cite_3 @cite_11 . | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_21",
"@cite_3",
"@cite_23",
"@cite_11"
],
"mid": [
"2102605133",
"2088049833",
"2179352600",
"",
"7746136",
"2953106684"
],
"abstract": [
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.",
"This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99% recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http://disi.unitn.it/~uijlings/SelectiveSearch.html ).",
"Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g. 224×224) input image. This requirement is “artificial” and may hurt the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with a more principled pooling strategy, “spatial pyramid pooling”, to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size/scale. By removing the fixed-size limitation, we can improve all CNN-based image classification methods in general. Our SPP-net achieves state-of-the-art accuracy on the datasets of ImageNet 2012, Pascal VOC 2007, and Caltech101.",
"",
"The use of object proposals is an effective recent approach for increasing the computational efficiency of object detection. We propose a novel method for generating object bounding box proposals using edges. Edges provide a sparse yet informative representation of an image. Our main observation is that the number of contours that are wholly contained in a bounding box is indicative of the likelihood of the box containing an object. We propose a simple box objectness score that measures the number of edges that exist in the box minus those that are members of contours that overlap the box’s boundary. Using efficient data structures, millions of candidate boxes can be evaluated in a fraction of a second, returning a ranked set of a few thousand top-scoring proposals. Using standard metrics, we show results that are significantly more accurate than the current state-of-the-art while being faster to compute. In particular, given just 1000 proposals we achieve over 96% object recall at overlap threshold of 0.5 and over 75% recall at the more challenging overlap of 0.7. Our approach runs in 0.25 seconds and we additionally demonstrate a near real-time variant with only minor loss in accuracy.",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available."
]
} |
1512.04412 | 2949295283 | Semantic segmentation research has recently witnessed rapid progress, but many leading methods are unable to identify object instances. In this paper, we present Multi-task Network Cascades for instance-aware semantic segmentation. Our model consists of three networks, respectively differentiating instances, estimating masks, and categorizing objects. These networks form a cascaded structure, and are designed to share their convolutional features. We develop an algorithm for the nontrivial end-to-end training of this causal, cascaded structure. Our solution is a clean, single-step training framework and can be generalized to cascades that have more stages. We demonstrate state-of-the-art instance-aware semantic segmentation accuracy on PASCAL VOC. Meanwhile, our method takes only 360ms testing an image using VGG-16, which is two orders of magnitude faster than previous systems for this challenging problem. As a by product, our method also achieves compelling object detection results which surpass the competitive Fast Faster R-CNN systems. The method described in this paper is the foundation of our submissions to the MS COCO 2015 segmentation competition, where we won the 1st place. | Using mask-level region proposals, instance-aware semantic segmentation can be addressed based on the R-CNN philosophy, as in R-CNN @cite_14 , SDS @cite_27 , and Hypercolumn @cite_25 . Sharing convolutional features among mask-level proposals is enabled by using masking layers @cite_31 . All these methods @cite_14 @cite_27 @cite_25 @cite_31 rely on computationally expensive mask proposal methods. For example, the widely used MCG @cite_2 takes 30 seconds processing an image, which becomes a bottleneck at inference time. DeepMask @cite_28 was recently developed for learning segmentation candidates using convolutional networks, taking over 1 second per image. Its accuracy for instance-aware semantic segmentation is yet to be evaluated. | {
"cite_N": [
"@cite_14",
"@cite_28",
"@cite_27",
"@cite_2",
"@cite_31",
"@cite_25"
],
"mid": [
"2102605133",
"809122546",
"",
"1991367009",
"1923115158",
"1948751323"
],
"abstract": [
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.",
"Recent object detection systems rely on two critical steps: (1) a set of object proposals is predicted as efficiently as possible, and (2) this set of candidate proposals is then passed to an object classifier. Such approaches have been shown they can be fast, while achieving the state of the art in detection performance. In this paper, we propose a new way to generate object proposals, introducing an approach based on a discriminative convolutional network. Our model is trained jointly with two objectives: given an image patch, the first part of the system outputs a class-agnostic segmentation mask, while the second part of the system outputs the likelihood of the patch being centered on a full object. At test time, the model is efficiently applied on the whole test image and generates a set of segmentation masks, each of them being assigned with a corresponding object likelihood score. We show that our model yields significant improvements over state-of-the-art object proposal algorithms. In particular, compared to previous approaches, our model obtains substantially higher object recall using fewer proposals. We also show that our model is able to generalize to unseen categories it has not seen during training. Unlike all previous approaches for generating object masks, we do not rely on edges, superpixels, or any other form of low-level segmentation.",
"",
"We propose a unified approach for bottom-up hierarchical image segmentation and object candidate generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object candidates by exploring efficiently their combinatorial space. We conduct extensive experiments on both the BSDS500 and on the PASCAL 2012 segmentation datasets, showing that MCG produces state-of-the-art contours, hierarchical regions and object candidates.",
"The topic of semantic segmentation has witnessed considerable progress due to the powerful features learned by convolutional neural networks (CNNs) [13]. The current leading approaches for semantic segmentation exploit shape information by extracting CNN features from masked image regions. This strategy introduces artificial boundaries on the images and may impact the quality of the extracted features. Besides, the operations on the raw image domain require to compute thousands of networks on a single image, which is time-consuming. In this paper, we propose to exploit shape information via masking convolutional features. The proposal segments (e.g., super-pixels) are treated as masks on the convolutional feature maps. The CNN features of segments are directly masked out from these maps and used to train classifiers for recognition. We further propose a joint method to handle objects and “stuff” (e.g., grass, sky, water) in the same framework. State-of-the-art results are demonstrated on benchmarks of PASCAL VOC and new PASCAL-CONTEXT, with a compelling computational speed.",
"Recognition algorithms based on convolutional networks (CNNs) typically use the output of the last layer as a feature representation. However, the information in this layer may be too coarse spatially to allow precise localization. On the contrary, earlier layers may be precise in localization but will not capture semantics. To get the best of both worlds, we define the hypercolumn at a pixel as the vector of activations of all CNN units above that pixel. Using hypercolumns as pixel descriptors, we show results on three fine-grained localization tasks: simultaneous detection and segmentation [22], where we improve state-of-the-art from 49.7 mean APr [22] to 60.0, keypoint localization, where we get a 3.3 point boost over [20], and part labeling, where we show a 6.6 point gain over a strong baseline."
]
} |
1512.04412 | 2949295283 | Semantic segmentation research has recently witnessed rapid progress, but many leading methods are unable to identify object instances. In this paper, we present Multi-task Network Cascades for instance-aware semantic segmentation. Our model consists of three networks, respectively differentiating instances, estimating masks, and categorizing objects. These networks form a cascaded structure, and are designed to share their convolutional features. We develop an algorithm for the nontrivial end-to-end training of this causal, cascaded structure. Our solution is a clean, single-step training framework and can be generalized to cascades that have more stages. We demonstrate state-of-the-art instance-aware semantic segmentation accuracy on PASCAL VOC. Meanwhile, our method takes only 360ms testing an image using VGG-16, which is two orders of magnitude faster than previous systems for this challenging problem. As a by product, our method also achieves compelling object detection results which surpass the competitive Fast Faster R-CNN systems. The method described in this paper is the foundation of our submissions to the MS COCO 2015 segmentation competition, where we won the 1st place. | Category-wise semantic segmentation is elegantly tackled by end-to-end training FCNs @cite_5 . The output of an FCN consists of multiple score maps, each of which is for one category. This formulation enables per-pixel regression in a fully-convolutional form, but is not able to distinguish instances of the same category. The FCN framework has been further improved in many papers (e.g., @cite_9 @cite_30 ), but these methods also have the limitation of not being able to predict instances. | {
"cite_N": [
"@cite_30",
"@cite_5",
"@cite_9"
],
"mid": [
"",
"2952632681",
"1923697677"
],
"abstract": [
"",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.",
"Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \"semantic image segmentation\"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \"DeepLab\" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6% IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU."
]
} |
1512.04418 | 2271389234 | Blind image restoration is a non-convex problem which involves restoration of images from an unknown blur kernel. The factors affecting the performance of this restoration are how much prior information about an image and a blur kernel are provided and what algorithm is used to perform the restoration task. Prior information on images is often employed to restore the sharpness of the edges of an image. By contrast, no consensus is still present regarding what prior information to use in restoring from a blur kernel due to complex image blurring processes. In this paper, we propose modelling of a blur kernel as a sparse linear combinations of basic 2-D patterns. Our approach has a competitive edge over the existing blur kernel modelling methods because our method has the flexibility to customize the dictionary design, which makes it well-adaptive to a variety of applications. As a demonstration, we construct a dictionary formed by basic patterns derived from the Kronecker product of Gaussian sequences. We also compare our results with those derived by other state-of-the-art methods, in terms of peak signal to noise ratio (PSNR). | Among the various forms of image regularization, the total variation (TV) regularization function and its variants have been widely used @cite_16 @cite_17 @cite_21 @cite_39 @cite_11 @cite_34 @cite_31 @cite_27 @cite_6 @cite_35 @cite_26 @cite_30 @cite_32 . Statistical models of the gradients of natural images have been adopted as image priors @cite_36 @cite_45 . Sparsity assumptions have been used to model the representation coefficients of natural images in a transform domain @cite_38 @cite_20 or in the image domain @cite_4 . A counter-intuitive finding is reported in @cite_36 , indicating that most cost functions used as image priors prefer blurry images to sharp ones, since blurry images have lower costs. An attempt is made in @cite_1 to achieve the lowest cost for the true sharp image by introducing an @math function on images. | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_36",
"@cite_20",
"@cite_38",
"@cite_4",
"@cite_21",
"@cite_39",
"@cite_17",
"@cite_26",
"@cite_32",
"@cite_6",
"@cite_27",
"@cite_34",
"@cite_16",
"@cite_1",
"@cite_45",
"@cite_31",
"@cite_11"
],
"mid": [
"",
"",
"2154571593",
"2540180983",
"2157434051",
"2164570415",
"",
"",
"",
"",
"",
"",
"",
"",
"1976730913",
"1987075379",
"2141115311",
"",
""
],
"abstract": [
"",
"",
"A conventional camera captures blurred versions of scene information away from the plane of focus. Camera systems have been proposed that allow for recording all-focus images, or for extracting depth, but to record both simultaneously has required more extensive hardware and reduced spatial resolution. We propose a simple modification to a conventional camera that allows for the simultaneous recovery of both (a) high resolution image information and (b) depth information adequate for semi-automatic extraction of a layered depth representation of the image. Our modification is to insert a patterned occluder within the aperture of the camera lens, creating a coded aperture. We introduce a criterion for depth discriminability which we use to design the preferred aperture pattern. Using a statistical model of images, we can recover both depth information and an all-focus image from single photographs taken with the modified camera. A layered depth map is then extracted, requiring user-drawn strokes to clarify layer assignments in some cases. The resulting sharp image and layered depth map can be combined for various photographic applications, including automatic scene segmentation, post-exposure refocusing, or re-rendering of the scene from an alternate viewpoint.",
"Digital images often suffer from point spreading or blurring from both known and unknown filters or point spread functions. The sources of degradation can be lens point spreading, misfocus, motion, and scattering in case of x-ray images or atmospheric turbulence. Therefore a digital image can suffer blurring from a single or an combination of various point spread functions, for example many images suffer from lens out of focus blur because of manufacturing limitations or satellite aerial images suffer from lens focus and atmospheric turbulence etc. The obvious requirement of an imaging system is to reproduce an image that is as close to original as possible. Most existing image restoration methods uses blind deconvolution and deblurring methods that require good knowledge about both the signal and the filter and the performance depends on the amount of prior information regarding the blurring function and the signal. Often an iterative procedure is required for estimating the blurring function such as Richardson-Lucy method and is computational complex and expensive and sometime instable. This paper presents a blind image restoration method based on techniques of blind signal separation (BSS) in combination with the genetic algorithm for parameters optimization. The method is not only simple but also requires little priori knowledge regarding the signal and the blurring function.",
"We address the problem of image deconvolution under l_p norm (and other) penalties expressed in the wavelet domain. We propose an algorithm based on the bound optimization approach; this approach allows deriving EM-type algorithms without using the concept of missing hidden data. The algorithm has provable monotonicity both with orthogonal or redundant wavelet transforms. We also derive bounds on the l_p norm penalties to obtain closed form update equations for any p ∈ [0, 2]. Experimental results show that the proposed method achieves state-of-the-art performance.",
"We propose a sparse representation based blind image deblurring method. The proposed method exploits the sparsity property of natural images, by assuming that the patches from the natural images can be sparsely represented by an over-complete dictionary. By incorporating this prior into the deblurring process, we can effectively regularize the ill-posed inverse problem and alleviate the undesirable ring effect which is usually suffered by conventional deblurring methods. Experimental results compared with state-of-the-art blind deblurring method demonstrate the effectiveness of the proposed method.",
"",
"",
"",
"",
"",
"",
"",
"",
"This paper presents a fast deblurring method that produces a deblurring result from a single image of moderate size in a few seconds. We accelerate both latent image estimation and kernel estimation in an iterative deblurring process by introducing a novel prediction step and working with image derivatives rather than pixel values. In the prediction step, we use simple image processing techniques to predict strong edges from an estimated latent image, which will be solely used for kernel estimation. With this approach, a computationally efficient Gaussian prior becomes sufficient for deconvolution to estimate the latent image, as small deconvolution artifacts can be suppressed in the prediction. For kernel estimation, we formulate the optimization function using image derivatives, and accelerate the numerical process by reducing the number of Fourier transforms needed for a conjugate gradient method. We also show that the formulation results in a smaller condition number of the numerical system than the use of pixel values, which gives faster convergence. Experimental results demonstrate that our method runs an order of magnitude faster than previous work, while the deblurring quality is comparable. GPU implementation facilitates further speed-up, making our method fast enough for practical use.",
"Blind image deconvolution is an ill-posed problem that requires regularization to solve. However, many common forms of image prior used in this setting have a major drawback in that the minimum of the resulting cost function does not correspond to the true sharp solution. Accordingly, a range of additional methods are needed to yield good results (Bayesian methods, adaptive cost functions, alpha-matte extraction and edge localization). In this paper we introduce a new type of image regularization which gives lowest cost for the true sharp image. This allows a very simple cost formulation to be used for the blind deconvolution model, obviating the need for additional methods. Due to its simplicity the algorithm is fast and very robust. We demonstrate our method on real images with both spatially invariant and spatially varying blur.",
"We present a new algorithm for removing motion blur from a single image. Our method computes a deblurred image using a unified probabilistic model of both blur kernel estimation and unblurred image restoration. We present an analysis of the causes of common artifacts found in current deblurring methods, and then introduce several novel terms within this probabilistic model that are inspired by our analysis. These terms include a model of the spatial randomness of noise in the blurred image, as well a new local smoothness prior that reduces ringing artifacts by constraining contrast in the unblurred image wherever the blurred image exhibits low contrast. Finally, we describe an effficient optimization scheme that alternates between blur kernel estimation and unblurred image restoration until convergence. As a result of these steps, we are able to produce high quality deblurred results in low computation time. We are even able to produce results of comparable quality to techniques that require additional input images beyond a single blurry photograph, and to methods that require additional hardware.",
"",
""
]
} |
1512.04418 | 2271389234 | Blind image restoration is a non-convex problem which involves restoration of images from an unknown blur kernel. The factors affecting the performance of this restoration are how much prior information about an image and a blur kernel are provided and what algorithm is used to perform the restoration task. Prior information on images is often employed to restore the sharpness of the edges of an image. By contrast, no consensus is still present regarding what prior information to use in restoring from a blur kernel due to complex image blurring processes. In this paper, we propose modelling of a blur kernel as a sparse linear combinations of basic 2-D patterns. Our approach has a competitive edge over the existing blur kernel modelling methods because our method has the flexibility to customize the dictionary design, which makes it well-adaptive to a variety of applications. As a demonstration, we construct a dictionary formed by basic patterns derived from the Kronecker product of Gaussian sequences. We also compare our results with those derived by other state-of-the-art methods, in terms of peak signal to noise ratio (PSNR). | The spatially variant cases are solved by dividing an image into blocks (of size depending on the supports of blur kernels) and then processing each block independently by a spatially invariant method. Computational complexity is one of the main concerns for the spatially variant methods, because fast Fourier transformation (FFT) can be applied only to spatially invariant cases but not to spatially variant ones. Some fast algorithms based on variable splitting techniques and proximal point methods have also been proposed @cite_24 @cite_20 . For example, the variable splitting technique is used in @cite_24 for the TV approach and in @cite_8 for statistical modelling of image gradients. 
Another topic of concern is the removal of boundary effects incurred by dividing an image into blocks @cite_28 , addressed for instance by the matting approach adopted in @cite_40 . | {
"cite_N": [
"@cite_8",
"@cite_28",
"@cite_24",
"@cite_40",
"@cite_20"
],
"mid": [
"2147298660",
"2025900737",
"1978333359",
"1980257411",
"2540180983"
],
"abstract": [
"The heavy-tailed distribution of gradients in natural scenes have proven effective priors for a range of problems such as denoising, deblurring and super-resolution. These distributions are well modeled by a hyper-Laplacian (p(x) ∝ e-k|x|α ), typically with 0.5 ≤ α ≤ 0.8. However, the use of sparse distributions makes the problem non-convex and impractically slow to solve for multi-megapixel images. In this paper we describe a deconvolution approach that is several orders of magnitude faster than existing techniques that use hyper-Laplacian priors. We adopt an alternating minimization scheme where one of the two phases is a non-convex problem that is separable over pixels. This per-pixel sub-problem may be solved with a lookup table (LUT). Alternatively, for two specific values of α, 1 2 and 2 3 an analytic solution can be found, by finding the roots of a cubic and quartic polynomial, respectively. Our approach (using either LUTs or analytic formulae) is able to deconvolve a 1 megapixel image in less than 3 seconds, achieving comparable quality to existing methods such as iteratively reweighted least squares (IRLS) that take 20 minutes. Furthermore, our method is quite general and can easily be extended to related image processing problems, beyond the deconvolution application demonstrated.",
"Most existing nonblind image deblurring methods assume that the blur kernel is free of error. However, it is often unavoidable in practice that the input blur kernel is erroneous to some extent. Sometimes, the error could be severe, e.g., for images degraded by nonuniform motion blurring. When an inaccurate blur kernel is used as the input, significant distortions will appear in the image recovered by existing methods. In this paper, we present a novel convex minimization model that explicitly takes account of error in the blur kernel. The resulting minimization problem can be efficiently solved by the so-called accelerated proximal gradient method. In addition, a new boundary extension scheme is incorporated in the proposed model to further improve the results. The experiments on both synthesized and real images showed the efficiency and robustness of our algorithm to both the image noise and the model error in the blur kernel.",
"We propose, analyze, and test an alternating minimization algorithm for recovering images from blurry and noisy observations with total variation (TV) regularization. This algorithm arises from a new half-quadratic model applicable to not only the anisotropic but also the isotropic forms of TV discretizations. The per-iteration computational complexity of the algorithm is three fast Fourier transforms. We establish strong convergence properties for the algorithm including finite convergence for some variables and relatively fast exponential (or @math -linear in optimization terminology) convergence for the others. Furthermore, we propose a continuation scheme to accelerate the practical convergence of the algorithm. Extensive numerical results show that our algorithm performs favorably in comparison to several state-of-the-art algorithms. In particular, it runs orders of magnitude faster than the lagged diffusivity algorithm for TV-based deblurring. Some extensions of our algorithm are also discussed.",
"This paper addresses the problem of two-layer out-of-focus blur removal from a single image, in which either the foreground or the background is in focus while the other is out of focus. To recover details from the blurry parts, the existing blind deconvolution algorithms are insufficient as the problem is spatially variant. The proposed method exploits the invariant structure of the problem by first predicting the occluded background. Then a blind deconvolution algorithm is applied to estimate the blur kernel and a coarse estimate of the image is found as a side product. Finally, the blurred region is recovered using total variation minimization, and fused with the sharp region to produce the final deblurred image.",
"Digital images often suffer from point spreading or blurring from both known and unknown filters or point spread functions. The sources of degradation can be lens point spreading, misfocus, motion, and scattering in case of x-ray images or atmospheric turbulence. Therefore a digital image can suffer blurring from a single or an combination of various point spread functions, for example many images suffer from lens out of focus blur because of manufacturing limitations or satellite aerial images suffer from lens focus and atmospheric turbulence etc. The obvious requirement of an imaging system is to reproduce an image that is as close to original as possible. Most existing image restoration methods uses blind deconvolution and deblurring methods that require good knowledge about both the signal and the filter and the performance depends on the amount of prior information regarding the blurring function and the signal. Often an iterative procedure is required for estimating the blurring function such as Richardson-Lucy method and is computational complex and expensive and sometime instable. This paper presents a blind image restoration method based on techniques of blind signal separation (BSS) in combination with the genetic algorithm for parameters optimization. The method is not only simple but also requires little priori knowledge regarding the signal and the blurring function."
]
} |
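The point made in this row's related-work text, that the FFT applies only to spatially invariant blur, can be illustrated with a minimal numpy sketch of Tikhonov-regularized (Wiener-like) deconvolution in the Fourier domain. This is not any of the cited algorithms; the function name and the regularization weight `eps` are illustrative assumptions.

```python
import numpy as np

def fft_deconvolve(blurred, kernel, eps=1e-2):
    """Tikhonov-regularized deconvolution in the Fourier domain.

    Valid only for spatially invariant (circular) blur, which is why
    spatially variant methods fall back to block-wise processing.
    """
    H = np.fft.fft2(kernel, s=blurred.shape)   # kernel transfer function
    B = np.fft.fft2(blurred)
    # Wiener-like filter: conj(H) / (|H|^2 + eps) damps near-zero frequencies
    X = np.conj(H) * B / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(X))
```

With a small `eps`, circularly blurring an image and then applying `fft_deconvolve` recovers it almost exactly, at the cost of only a few FFTs, mirroring the per-iteration cost cited for the variable-splitting TV solver.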
1512.04605 | 1930638319 | One of the prevalent learning tasks involving images is content-based image classification. This is a difficult task especially because the low-level features used to digitally describe images usually capture little information about the semantics of the images. In this paper, we tackle this difficulty by enriching the semantic content of the image representation by using external knowledge. The underlying hypothesis of our work is that creating a more semantically rich representation for images would yield higher machine learning performances, without the need to modify the learning algorithms themselves. The external semantic information is presented under the form of non-positional image labels, therefore positioning our work in a weakly supervised context. Two approaches are proposed: the first one leverages the labels into the visual vocabulary construction algorithm, the result being dedicated visual vocabularies. The second approach adds a filtering phase as a pre-processing of the vocabulary construction. Known positive and known negative sets are constructed and features that are unlikely to be associated with the objects denoted by the labels are filtered. We apply our proposition to the task of content-based image classification and we show that semantically enriching the image representation yields higher classification performances than the baseline representation. | Over the past decades, the computer vision domain has attracted great interest from the research community. Its applications extend beyond image analysis and include augmented reality, robotic vision, and gesture recognition. In the context of Internet-originating images, however, one of the prevailing tasks is content-based image classification. Some of the initial image classification systems used color histograms @cite_15 for image representation. Such a representation does not retain any information about the shapes of objects in images and achieves only moderate results. 
Other systems @cite_7 @cite_24 @cite_44 @cite_34 rely on texture detection. Texture is characterized by the repetition of basic elements, or textons. For stochastic textures, it is the identity of the textons, not their spatial arrangement, that matters. The orderless representation has established itself as the state of the art in image representation for classification and indexing purposes. The process of constructing the representation includes sampling the image ( in Figure ), describing each feature using an appearance-based descriptor (), constructing a visual vocabulary () and describing images as histograms over the visual words (). | {
"cite_N": [
"@cite_7",
"@cite_24",
"@cite_44",
"@cite_15",
"@cite_34"
],
"mid": [
"2129739837",
"2119067550",
"1497592982",
"",
"2129976136"
],
"abstract": [
"A procedure is developed to extract numerical features which characterize the pore structure of reservoir rocks. The procedure is based on a set of descriptors which give a statistical description of porous media. These features are evaluated from digitized photomicrographs of reservoir rocks and they characterize the rock grain structure in term of (1) the linear dependency of grey tones in the photomicrograph image, (2) the degree of \"homogeneity\" of the image and (3) the angular variations of the image grey tone dependencies. On the basis of these textural features, a simple identification rule using piecewise linear discriminant functions is developed for categorizing the photomicrograph images. The procedure was applied to a set of 243 distinct images comprising 6 distinct rock categories. The coefficients of the discriminant functions were obtained using 143 training samples. The remaining (100) samples were then processed, each sample being assigned to one of 6 possible sandstone categories. Eighty-nine per cent of the test samples were correctly identified.",
"This paper introduces a texture representation suitable for recognizing images of textured surfaces under a wide range of transformations, including viewpoint changes and nonrigid deformations. At the feature extraction stage, a sparse set of affine-invariant local patches is extracted from the image. This spatial selection process permits the computation of characteristic scale and neighborhood shape for every texture element. The proposed texture representation is evaluated in retrieval and classification tasks using the entire Brodatz database and a collection of photographs of textured surfaces taken from different viewpoints.",
"A simple method for texture-based segmentation of colored images combining the Hotelling transform and an unsupervised neural network is presented. The method separates the image into n regions representing n possible existing textures. The Hotelling transform is used to improve the color contrast and achieve uncorrelated color components. Auto-contrast is then applied to get a better color spreading. Finally, a set of features extracted from the transformed image is submitted to a competitive neural network for training. Once finished, the neural network parameters are used to label the image regions according to their textures.",
"",
"We question the role that large scale filter banks have traditionally played in texture classification. It is demonstrated that textures can be classified using the joint distribution of intensity values over extremely compact neighborhoods (starting from as small as 3 spl times 3 pixels square), and that this outperforms classification using filter banks with large support. We develop a novel texton based representation, which is suited to modeling this joint neighborhood distribution for MRFs. The representation is learnt from training images, and then used to classify novel images (with unknown viewpoint and lighting) into texture classes. The power of the method is demonstrated by classifying over 2800 images of all 61 textures present in the Columbia-Utrecht database. The classification performance surpasses that of recent state-of-the-art filter bank based classifiers such as Leung & Malik, Cula & Dana, and Varma & Zisserman."
]
} |
1512.04605 | 1930638319 | One of the prevalent learning tasks involving images is content-based image classification. This is a difficult task especially because the low-level features used to digitally describe images usually capture little information about the semantics of the images. In this paper, we tackle this difficulty by enriching the semantic content of the image representation by using external knowledge. The underlying hypothesis of our work is that creating a more semantically rich representation for images would yield higher machine learning performances, without the need to modify the learning algorithms themselves. The external semantic information is presented under the form of non-positional image labels, therefore positioning our work in a weakly supervised context. Two approaches are proposed: the first one leverages the labels into the visual vocabulary construction algorithm, the result being dedicated visual vocabularies. The second approach adds a filtering phase as a pre-processing of the vocabulary construction. Known positive and known negative sets are constructed and features that are unlikely to be associated with the objects denoted by the labels are filtered. We apply our proposition to the task of content-based image classification and we show that semantically enriching the image representation yields higher classification performances than the baseline representation. | Image sampling for the representation is the process of deciding which regions of a given image should be numerically described. In Figure , it corresponds to of the construction of a numerical representation. The output of feature detection is a set of patches, identified by their locations in the image and their corresponding scales and orientations. Multiple sampling methods exist @cite_57 , including , and random or dense grid sampling. | {
"cite_N": [
"@cite_57"
],
"mid": [
"1787683252"
],
"abstract": [
"The past decade has seen the growing popularity of Bag of Features (BoF) approaches to many computer vision tasks, including image classification, video search, robot localization, and texture recognition. Part of the appeal is simplicity. BoF meth- ods are based on orderless collections of quantized local image descriptors; they discard spatial information and are therefore conceptually and computationally simpler than many alternative methods. Despite this, or perhaps because of this, BoF-based systems have set new performance standards on popular image classification benchmarks and have achieved scalability breakthroughs in image retrieval. This paper presents an introduction to BoF image representations, describes critical design choices, and surveys the BoF literature. Emphasis is placed on recent techniques that mitigate quantization errors, improve fea- ture detection, and speed up image retrieval. At the same time, unresolved issues and fundamental challenges are raised. Among the unresolved issues are determining the best techniques for sampling images, describing local image features, and evaluating system performance. Among the more fundamental challenges are how and whether BoF meth- ods can contribute to localizing objects in complex images, or to associating high-level semantics with natural images. This survey should be useful both for introducing new in- vestigators to the field and for providing existing researchers with a consolidated reference to related work."
]
} |
1512.04605 | 1930638319 | One of the prevalent learning tasks involving images is content-based image classification. This is a difficult task especially because the low-level features used to digitally describe images usually capture little information about the semantics of the images. In this paper, we tackle this difficulty by enriching the semantic content of the image representation by using external knowledge. The underlying hypothesis of our work is that creating a more semantically rich representation for images would yield higher machine learning performances, without the need to modify the learning algorithms themselves. The external semantic information is presented under the form of non-positional image labels, therefore positioning our work in a weakly supervised context. Two approaches are proposed: the first one leverages the labels into the visual vocabulary construction algorithm, the result being dedicated visual vocabularies. The second approach adds a filtering phase as a pre-processing of the vocabulary construction. Known positive and known negative sets are constructed and features that are unlikely to be associated with the objects denoted by the labels are filtered. We apply our proposition to the task of content-based image classification and we show that semantically enriching the image representation yields higher classification performances than the baseline representation. | With the image sampled and a set of patches extracted, the next question is how to numerically represent the neighborhood of pixels near a localized region. In Figure , this corresponds to of the construction of a numerical representation. Initial feature descriptors simply used the pixel intensity values, scaled for the size of the region. These have been shown @cite_54 to be outperformed by more sophisticated feature descriptors, such as the SIFT descriptor. The SIFT (Scale Invariant Feature Transform) @cite_5 descriptor is today's most widely used descriptor. 
The responses to 8 gradient orientations at each of 16 cells of a 4x4 grid generate the 128 components of the description vector. Alternatives have been proposed, such as the SURF (Speeded Up Robust Features) @cite_51 descriptor. The SURF algorithm contains both feature detection and description. It is designed to speed up the process of creating features similar to those produced by a SIFT descriptor on Hessian-Laplace interest points by using efficient approximations. | {
"cite_N": [
"@cite_5",
"@cite_54",
"@cite_51"
],
"mid": [
"2151103935",
"2107034620",
"1677409904"
],
"abstract": [
"This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.",
"We propose a novel approach to learn and recognize natural scene categories. Unlike previous work, it does not require experts to annotate the training set. We represent the image of a scene by a collection of local regions, denoted as codewords obtained by unsupervised learning. Each region is represented as part of a \"theme\". In previous work, such themes were learnt from hand-annotations of experts, while our method learns the theme distributions as well as the codewords distribution over the themes without supervision. We report satisfactory categorization performances on a large set of 13 categories of complex scenes.",
"In this paper, we present a novel scale- and rotation-invariant interest point detector and descriptor, coined SURF (Speeded Up Robust Features). It approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster. This is achieved by relying on integral images for image convolutions; by building on the strengths of the leading existing detectors and descriptors (in casu, using a Hessian matrix-based measure for the detector, and a distribution-based descriptor); and by simplifying these methods to the essential. This leads to a combination of novel detection, description, and matching steps. The paper presents experimental results on a standard evaluation set, as well as on imagery obtained in the context of a real-life object recognition application. Both show SURF's strong performance."
]
} |
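The 4x4 grid of 8-bin gradient-orientation histograms described in this row can be sketched in a few lines of numpy. This is a simplified illustration of the histogram layout only, not the full SIFT algorithm (real SIFT adds Gaussian weighting, trilinear interpolation, orientation normalization, and clipping); the function name is an assumption.

```python
import numpy as np

def sift_like_descriptor(patch):
    """128-d descriptor in the spirit of SIFT: 8-bin gradient-orientation
    histograms over a 4x4 grid of cells (no scale/rotation handling)."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)                              # gradient magnitude
    ori = np.mod(np.arctan2(gy, gx), 2 * np.pi)         # orientation in [0, 2pi)
    bins = np.minimum((ori / (2 * np.pi) * 8).astype(int), 7)
    h, w = patch.shape
    desc = np.zeros((4, 4, 8))                          # 4x4 cells, 8 bins each
    for i in range(h):
        for j in range(w):
            desc[i * 4 // h, j * 4 // w, bins[i, j]] += mag[i, j]
    desc = desc.ravel()                                 # 4*4*8 = 128 components
    n = np.linalg.norm(desc)
    return desc / n if n > 0 else desc
```

Each pixel votes, weighted by its gradient magnitude, into the orientation bin of the cell it falls in; flattening the 4x4x8 array yields the 128 components mentioned above.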
1512.04605 | 1930638319 | One of the prevalent learning tasks involving images is content-based image classification. This is a difficult task especially because the low-level features used to digitally describe images usually capture little information about the semantics of the images. In this paper, we tackle this difficulty by enriching the semantic content of the image representation by using external knowledge. The underlying hypothesis of our work is that creating a more semantically rich representation for images would yield higher machine learning performances, without the need to modify the learning algorithms themselves. The external semantic information is presented under the form of non-positional image labels, therefore positioning our work in a weakly supervised context. Two approaches are proposed: the first one leverages the labels into the visual vocabulary construction algorithm, the result being dedicated visual vocabularies. The second approach adds a filtering phase as a pre-processing of the vocabulary construction. Known positive and known negative sets are constructed and features that are unlikely to be associated with the objects denoted by the labels are filtered. We apply our proposition to the task of content-based image classification and we show that semantically enriching the image representation yields higher classification performances than the baseline representation. | The visual vocabulary is used to reduce dimensionality and to create a fixed-length numerical representation for all images, since the number of extracted features can vary greatly depending on the image and the method used for sampling. Most approaches use clustering to create the visual vocabulary, usually the k-means @cite_12 @cite_30 @cite_6 algorithm. K-means is used because it produces centroids, which are prototypes of the similar features in the same cluster. Its linear execution time is a plus considering the high volume of individuals to be processed @cite_32 . 
Some authors @cite_13 argue that in k-means, centroids are attracted by dense regions and under-represent less dense, but equally informative, regions. Therefore, methods were proposed for allocating centers more uniformly, inspired by mean shift @cite_9 and on-line facility location @cite_4 . Other visual vocabulary construction techniques do not rely on k-means. For example, @cite_47 use an Extremely Randomized Clustering Forest, an ensemble of randomly created clustering trees. This technique provides good resistance to background clutter, but its main advantage over k-means is the faster training time. | {
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_9",
"@cite_32",
"@cite_6",
"@cite_47",
"@cite_13",
"@cite_12"
],
"mid": [
"2162915993",
"2151242668",
"2067191022",
"2119329316",
"2131846894",
"2104170135",
"2151259137",
"1986482242"
],
"abstract": [
"This paper presents a method for recognizing scene categories based on approximate global geometric correspondence. This technique works by partitioning the image into increasingly fine sub-regions and computing histograms of local features found inside each sub-region. The resulting \"spatial pyramid\" is a simple and computationally efficient extension of an orderless bag-of-features image representation, and it shows significantly improved performance on challenging scene categorization tasks. Specifically, our proposed method exceeds the state of the art on the Caltech-101 database and achieves high accuracy on a large database of fifteen natural scene categories. The spatial pyramid framework also offers insights into the success of several recently proposed image descriptions, including Torralbas \"gist\" and Lowes SIFT descriptors.",
"We consider the online variant of facility location, in which demand points arrive one at a time and we must maintain a set of facilities to service these points. We provide a randomized online O(1)-competitive algorithm in the case where points arrive in random order. If points are ordered adversarially, we show that no algorithm can be constant-competitive, and provide an O(log n)-competitive algorithm. Our algorithms are randomized and the analysis depends heavily on the concept of expected waiting time. We also combine our techniques with those of M. Charikar and S. Guha (1999) to provide a linear-time constant approximation for the offline facility location problem.",
"A general non-parametric technique is proposed for the analysis of a complex multimodal feature space and to delineate arbitrarily shaped clusters in it. The basic computational module of the technique is an old pattern recognition procedure: the mean shift. For discrete data, we prove the convergence of a recursive mean shift procedure to the nearest stationary point of the underlying density function and, thus, its utility in detecting the modes of the density. The relation of the mean shift procedure to the Nadaraya-Watson estimator from kernel regression and the robust M-estimators; of location is also established. Algorithms for two low-level vision tasks discontinuity-preserving smoothing and image segmentation - are described as applications. In these algorithms, the only user-set parameter is the resolution of the analysis, and either gray-level or color images are accepted as input. Extensive experimental results illustrate their excellent performance.",
"We are concerned by the use of factorial correspondence analysis (FCA) for image retrieval. FCA is designed for analyzing contingency tables. In textual data analysis (TDA), FCA analyzes a contingency table crossing terms words and documents. To adapt FCA on images, we first define \"visual words\" computed from scalable invariant feature transform (SIFT) descriptors in images and use them for image quantization. At this step, we can build a contingency table crossing \"visual words\" as terms words and images as documents. The method was tested on the Caltech4 and Stewenius and Nister datasets on which it provides better results (quality of results and execution time) than classical methods as tf * idf and probabilistic latent semantic analysis (PLSA). To scale up and improve the retrieval quality, we propose a new retrieval schema using inverted files based on the relevant indicators of correspondence analysis (representation quality of images on axes and contribution of images to the inertia of the axes). The numerical experiments show that our algorithm performs faster than the exhaustive method without losing precision.",
"We describe an approach to object and scene retrieval which searches for and localizes all the occurrences of a user outlined object in a video. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. The temporal continuity of the video within a shot is used to track the regions in order to reject unstable regions and reduce the effects of noise in the descriptors. The analogy with text retrieval is in the implementation where matches on descriptors are pre-computed (using vector quantization), and inverted file systems and document rankings are used. The result is that retrieved is immediate, returning a ranked list of key frames shots in the manner of Google. The method is illustrated for matching in two full length feature films.",
"Some of the most effective recent methods for content-based image classification work by extracting dense or sparse local image descriptors, quantizing them according to a coding rule such as k-means vector quantization, accumulating histograms of the resulting \"visual word\" codes over the image, and classifying these with a conventional classifier such as an SVM. Large numbers of descriptors and large codebooks are needed for good results and this becomes slow using k-means. We introduce Extremely Randomized Clustering Forests - ensembles of randomly created clustering trees - and show that these provide more accurate results, much faster training and testing and good resistance to background clutter in several state-of-the-art image classification tasks.",
"Visual codebook based quantization of robust appearance descriptors extracted from local image patches is an effective means of capturing image statistics for texture analysis and scene classification. Codebooks are usually constructed by using a method such as k-means to cluster the descriptor vectors of patches sampled either densely ('textons') or sparsely ('bags of features' based on key-points or salience measures) from a set of training images. This works well for texture analysis in homogeneous images, but the images that arise in natural object recognition tasks have far less uniform statistics. We show that for dense sampling, k-means over-adapts to this, clustering centres almost exclusively around the densest few regions in descriptor space and thus failing to code other informative regions. This gives suboptimal codes that are no better than using randomly selected centres. We describe a scalable acceptance-radius based clusterer that generates better codebooks and study its performance on several image classification tasks. We also show that dense representations outperform equivalent keypoint based ones on these tasks and that SVM or mutual information based feature selection starting from a dense codebook further improves the performance.",
"Bag-of-features (BoF) deriving from local keypoints has recently appeared promising for object and scene classification. Whether BoF can naturally survive the challenges such as reliability and scalability of visual classification, nevertheless, remains uncertain due to various implementation choices. In this paper, we evaluate various factors which govern the performance of BoF. The factors include the choices of detector, kernel, vocabulary size and weighting scheme. We offer some practical insights in how to optimize the performance by choosing good keypoint detector and kernel. For the weighting scheme, we propose a novel soft-weighting method to assess the significance of a visual word to an image. We experimentally show that the proposed soft-weighting scheme can consistently offer better performance than other popular weighting methods. On both PASCAL-2005 and TRECVID-2006 datasets, our BoF setting generates competitive performance compared to the state-of-the-art techniques. We also show that the BoF is highly complementary to global features. By incorporating the BoF with color and texture features, an improvement of 50 is reported on TRECVID-2006 dataset."
]
} |
1512.04605 | 1930638319 | One of the prevalent learning tasks involving images is content-based image classification. This is a difficult task especially because the low-level features used to digitally describe images usually capture little information about the semantics of the images. In this paper, we tackle this difficulty by enriching the semantic content of the image representation by using external knowledge. The underlying hypothesis of our work is that creating a more semantically rich representation for images would yield higher machine learning performances, without the need to modify the learning algorithms themselves. The external semantic information is presented under the form of non-positional image labels, therefore positioning our work in a weakly supervised context. Two approaches are proposed: the first one leverages the labels into the visual vocabulary construction algorithm, the result being dedicated visual vocabularies. The second approach adds a filtering phase as a pre-processing of the vocabulary construction. Known positive and known negative sets are constructed and features that are unlikely to be associated with the objects denoted by the labels are filtered. We apply our proposition to the task of content-based image classification and we show that semantically enriching the image representation yields higher classification performances than the baseline representation. | One of the most important parameters in the construction of the visual vocabulary is its dimension, which has a powerful impact on both performance and computational complexity @cite_53 @cite_13 . It has been shown @cite_12 @cite_1 @cite_8 that a large vocabulary may lead to overfitting for construction techniques based on interest points detection. As our experiments show (in ), even a random vocabulary (in a random vocabulary, a number of features are randomly chosen to serve as visual words) can lead to overfitting if its dimension is high enough. | {
"cite_N": [
"@cite_8",
"@cite_53",
"@cite_1",
"@cite_13",
"@cite_12"
],
"mid": [
"2171896402",
"1625255723",
"1969681764",
"2151259137",
"1986482242"
],
"abstract": [
"Bag-of-features representations have recently become popular for content based image classification owing to their simplicity and good performance. They evolved from texton methods in texture analysis. The basic idea is to treat images as loose collections of independent patches, sampling a representative set of patches from the image, evaluating a visual descriptor vector for each patch independently, and using the resulting distribution of samples in descriptor space as a characterization of the image. The four main implementation choices are thus how to sample patches, how to describe them, how to characterize the resulting distributions and how to classify images based on the result. We concentrate on the first issue, showing experimentally that for a representative selection of commonly used test databases and for moderate to large numbers of samples, random sampling gives equal or better classifiers than the sophisticated multiscale interest operators that are in common use. Although interest operators work well for small numbers of samples, the single most important factor governing performance is the number of patches sampled from the test image and ultimately interest operators can not provide enough patches to compete. We also study the influence of other factors including codebook size and creation method, histogram normalization method and minimum scale for feature extraction.",
"We present a novel method for generic visual categorization: the problem of identifying the object content of natural images while generalizing across variations inherent to the object class. This bag of keypoints method is based on vector quantization of affine invariant descriptors of image patches. We propose and compare two alternative implementations using different classifiers: Naive Bayes and SVM. The main advantages of the method are that it is simple, computationally efficient and intrinsically invariant. We present results for simultaneously classifying seven semantic visual categories. These results clearly demonstrate that the method is robust to background clutter and produces good categorization accuracy even without exploiting geometric information.",
"We present a novel method for constructing a visual vocabulary that takes into account the class labels of images, thus resulting in better recognition performance and more efficient learning. Our method consists of two stages: Cluster Precision Maximisation (CPM) and Adaptive Refinement. In the first stage, a Reciprocal Nearest Neighbours (RNN) clustering algorithm is guided towards class representative visual words by maximising a new cluster precision criterion. As we are able to optimise the vocabulary without the need for expensive cross-validation, the overall training time is significantly reduced without a negative impact on the results. Next, an adaptive threshold refinement scheme is proposed with the aim of increasing vocabulary compactness while at the same time improving the recognition rate and further increasing the representativeness of the visual words for category-level object recognition. This is a correlation clustering based approach, which works as a meta-clustering and optimises the cut-off threshold for each cluster separately. In the experiments we analyse the recognition rate of different vocabularies for a subset of the Caltech 101 dataset, showing how RNN in combination with CPM selects the optimal codebooks, and how the clustering refinement step succeeds in further increasing the recognition rate.",
"Visual codebook based quantization of robust appearance descriptors extracted from local image patches is an effective means of capturing image statistics for texture analysis and scene classification. Codebooks are usually constructed by using a method such as k-means to cluster the descriptor vectors of patches sampled either densely ('textons') or sparsely ('bags of features' based on key-points or salience measures) from a set of training images. This works well for texture analysis in homogeneous images, but the images that arise in natural object recognition tasks have far less uniform statistics. We show that for dense sampling, k-means over-adapts to this, clustering centres almost exclusively around the densest few regions in descriptor space and thus failing to code other informative regions. This gives suboptimal codes that are no better than using randomly selected centres. We describe a scalable acceptance-radius based clusterer that generates better codebooks and study its performance on several image classification tasks. We also show that dense representations outperform equivalent keypoint based ones on these tasks and that SVM or mutual information based feature selection starting from a dense codebook further improves the performance.",
"Bag-of-features (BoF) deriving from local keypoints has recently appeared promising for object and scene classification. Whether BoF can naturally survive the challenges such as reliability and scalability of visual classification, nevertheless, remains uncertain due to various implementation choices. In this paper, we evaluate various factors which govern the performance of BoF. The factors include the choices of detector, kernel, vocabulary size and weighting scheme. We offer some practical insights in how to optimize the performance by choosing good keypoint detector and kernel. For the weighting scheme, we propose a novel soft-weighting method to assess the significance of a visual word to an image. We experimentally show that the proposed soft-weighting scheme can consistently offer better performance than other popular weighting methods. On both PASCAL-2005 and TRECVID-2006 datasets, our BoF setting generates competitive performance compared to the state-of-the-art techniques. We also show that the BoF is highly complementary to global features. By incorporating the BoF with color and texture features, an improvement of 50 is reported on TRECVID-2006 dataset."
]
} |
1512.04605 | 1930638319 | One of the prevalent learning tasks involving images is content-based image classification. This is a difficult task especially because the low-level features used to digitally describe images usually capture little information about the semantics of the images. In this paper, we tackle this difficulty by enriching the semantic content of the image representation by using external knowledge. The underlying hypothesis of our work is that creating a more semantically rich representation for images would yield higher machine learning performances, without the need to modify the learning algorithms themselves. The external semantic information is presented under the form of non-positional image labels, therefore positioning our work in a weakly supervised context. Two approaches are proposed: the first one leverages the labels into the visual vocabulary construction algorithm, the result being dedicated visual vocabularies. The second approach adds a filtering phase as a pre-processing of the vocabulary construction. Known positive and known negative sets are constructed and features that are unlikely to be associated with the objects denoted by the labels are filtered. We apply our proposition to the task of content-based image classification and we show that semantically enriching the image representation yields higher classification performances than the baseline representation. | Other solutions rely on external expert knowledge in order to guide the visual vocabulary construction. This knowledge is most often expressed under the form of class category annotations or labels ( signaling the presence of an object inside an image), or semantic resources, such as WordNet @cite_16 . An iterative boosting-like approach is used in @cite_52 . Each iteration of boosting begins by learning a visual vocabulary according to the weights assigned by the previous boosting iteration. 
The resulting visual vocabulary is then applied to encode the training examples, a new classifier is learned and new weights are computed. The visual vocabulary is learned by clustering using a "learning" subset of image features. Features from images with high weights have more chances of being part of the learning subset. To classify a new example, the AdaBoost @cite_55 weighted voting scheme is used. | {
"cite_N": [
"@cite_55",
"@cite_16",
"@cite_52"
],
"mid": [
"1988790447",
"2081580037",
"1986560547"
],
"abstract": [
"In the first part of the paper we consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weight-update Littlestone?Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems. We show how the resulting learning algorithm can be applied to a variety of problems, including gambling, multiple-outcome prediction, repeated games, and prediction of points in Rn. In the second part of the paper we apply the multiplicative weight-update technique to derive a new boosting algorithm. This boosting algorithm does not require any prior knowledge about the performance of the weak learning algorithm. We also study generalizations of the new boosting algorithm to the problem of learning functions whose range, rather than being binary, is an arbitrary finite set or a bounded segment of the real line.",
"Because meaningful sentences are composed of meaningful words, any system that hopes to process natural languages as people do must have information about words and their meanings. This information is traditionally provided through dictionaries, and machine-readable dictionaries are now widely available. But dictionary entries evolved for the convenience of human readers, not for machines. WordNet 1 provides a more effective combination of traditional lexicographic information and modern computing. WordNet is an online lexical database designed for use under program control. English nouns, verbs, adjectives, and adverbs are organized into sets of synonyms, each representing a lexicalized concept. Semantic relations link the synonym sets [4].",
"Codebook-based representations are widely employed in the classification of complex objects such as images and documents. Most previous codebook-based methods construct a single codebook via clustering that maps a bag of low-level features into a fixed-length histogram that describes the distribution of these features. This paper describes a simple yet effective framework for learning multiple non-redundant codebooks that produces surprisingly good results. In this framework, each codebook is learned in sequence to extract discriminative information that was not captured by preceding codebooks and their corresponding classifiers. We apply this framework to two application domains: visual object categorization and document classification. Experiments on large classification tasks show substantial improvements in performance compared to a single codebook or codebooks learned in a bagging style."
]
} |
1512.04605 | 1930638319 | One of the prevalent learning tasks involving images is content-based image classification. This is a difficult task especially because the low-level features used to digitally describe images usually capture little information about the semantics of the images. In this paper, we tackle this difficulty by enriching the semantic content of the image representation by using external knowledge. The underlying hypothesis of our work is that creating a more semantically rich representation for images would yield higher machine learning performances, without the need to modify the learning algorithms themselves. The external semantic information is presented under the form of non-positional image labels, therefore positioning our work in a weakly supervised context. Two approaches are proposed: the first one leverages the labels into the visual vocabulary construction algorithm, the result being dedicated visual vocabularies. The second approach adds a filtering phase as a pre-processing of the vocabulary construction. Known positive and known negative sets are constructed and features that are unlikely to be associated with the objects denoted by the labels are filtered. We apply our proposition to the task of content-based image classification and we show that semantically enriching the image representation yields higher classification performances than the baseline representation. | @cite_14 construct both a generic vocabulary and a specific one for each class. The generic vocabulary describes the content of all the considered classes of images, while the specific vocabularies are obtained through the adaptation of the universal vocabulary using class-specific data. Any given image can, afterwards, be described using the generic vocabulary or one of the class-specific vocabularies. 
A semi-supervised technique @cite_40 , based on Hidden Random Markov Fields, uses local features as Observed Fields and Semantic labels as Hidden Fields and employs WordNet to make correlations. Some works @cite_56 @cite_11 @cite_17 @cite_31 use mutual information between features and class labels in order to learn class-specific vocabularies, by merging or splitting initial visual words quantized by . Another work @cite_39 presents an algorithm used for learning a generic visual vocabulary, while trying to preserve and use the semantic information in the form of a point-wise mutual information vector. It uses the diffusion distance to measure intrinsic geometric relations between features. Other approaches @cite_19 make use of label positioning in the images to distinguish between foreground and background features. They use weights for features, higher for the ones corresponding to objects and lower for the background. | {
"cite_N": [
"@cite_14",
"@cite_11",
"@cite_56",
"@cite_39",
"@cite_19",
"@cite_40",
"@cite_31",
"@cite_17"
],
"mid": [
"1592774159",
"1536716526",
"2151768982",
"2165846633",
"2140169211",
"2164860686",
"2141303268",
"2164996087"
],
"abstract": [
"Several state-of-the-art Generic Visual Categorization (GVC) systems are built around a vocabulary of visual terms and characterize images with one histogram of visual word counts. We propose a novel and practical approach to GVC based on a universal vocabulary, which describes the content of all the considered classes of images, and class vocabularies obtained through the adaptation of the universal vocabulary using class-specific data. An image is characterized by a set of histograms – one per class – where each histogram describes whether the image content is best modeled by the universal vocabulary or the corresponding class vocabulary. It is shown experimentally on three very different databases that this novel representation outperforms those approaches which characterize an image with a single histogram.",
"Recent research in video analysis has shown a promising direction, in which mid-level features (e.g., people, anchor, indoor) are abstracted from low-level features (e.g., color, texture, motion, etc.) and used for discriminative classification of semantic labels. However, in most systems, such mid-level features are selected manually. In this paper, we propose an information-theoretic framework, visual cue cluster construction (VC3), to automatically discover adequate mid-level features. The problem is posed as mutual information maximization, through which optimal cue clusters are discovered to preserve the highest information about the semantic labels. We extend the Information Bottleneck framework to high-dimensional continuous features and further propose a projection method to map each video into probabilistic memberships over all the cue clusters. The biggest advantage of the proposed approach is to remove the dependence on the manual process in choosing the mid-level features and the huge labor cost involved in annotating the training corpus for training the detector of each mid-level feature. The proposed VC3 framework is general and effective, leading to exciting potential in solving other problems of semantic video analysis. When tested in news video story segmentation, the proposed approach achieves promising performance gain over representations derived from conventional clustering techniques and even the mid-level features selected manually.",
"We present an approach to determine the category and location of objects in images. It performs very fast categorization of each pixel in an image, a brute-force approach made feasible by three key developments: First, our method reduces the size of a large generic dictionary (on the order of ten thousand words) to the low hundreds while increasing classification performance compared to k-means. This is achieved by creating a discriminative dictionary tailored to the task by following the information bottleneck principle. Second, we perform feature-based categorization efficiently on a dense grid by extending the concept of integral images to the computation of local histograms. Third, we compute SIFT descriptors densely in linear time. We compare our method to the state of the art and find that it excels in accuracy and simplicity, performing better while assuming less.",
"In this paper, we propose a novel approach for learning generic visual vocabulary. We use diffusion maps to automatically learn a semantic visual vocabulary from abundant quantized midlevel features. Each midlevel feature is represented by the vector of pointwise mutual information (PMI). In this midlevel feature space, we believe the features produced by similar sources must lie on a certain manifold. To capture the intrinsic geometric relations between features, we measure their dissimilarity using diffusion distance. The underlying idea is to embed the midlevel features into a semantic lower-dimensional space. Our goal is to construct a compact yet discriminative semantic visual vocabulary. Although the conventional approach using k-means is good for vocabulary construction, its performance is sensitive to the size of the visual vocabulary. In addition, the learnt visual words are not semantically meaningful since the clustering criterion is based on appearance similarity only. Our proposed approach can effectively overcome these problems by capturing the semantic and geometric relations of the feature space using diffusion maps. Unlike some of the supervised vocabulary construction approaches, and the unsupervised methods such as pLSA and LDA, diffusion maps can capture the local intrinsic geometric relations between the midlevel feature points on the manifold. We have tested our approach on the KTH action dataset, our own YouTube action dataset and the fifteen scene dataset, and have obtained very promising results.",
"This paper presents an extension to category classification with bag-of-features, which represents an image as an orderless distribution of features. We propose a method to exploit spatial relations between features by utilizing object boundaries provided during supervised training. We boost the weights of features that agree on the position and shape of the object and suppress the weights of background features, hence the name of our method - \"spatial weighting\". The proposed representation is thus richer and more robust to background clutter. Experimental results show that our approach improves the results of one of the best current image classification techniques. Furthermore, we propose to apply the spatial model to object localization. Initial results are promising.",
"Visual vocabulary serves as a fundamental component in many computer vision tasks, such as object recognition, visual search, and scene modeling. While state-of-the-art approaches build visual vocabulary based solely on visual statistics of local image patches, the correlative image labels are left unexploited in generating visual words. In this work, we present a semantic embedding framework to integrate semantic information from Flickr labels for supervised vocabulary construction. Our main contribution is a Hidden Markov Random Field modeling to supervise feature space quantization, with specialized considerations to label correlations: Local visual features are modeled as an Observed Field, which follows visual metrics to partition feature space. Semantic labels are modeled as a Hidden Field, which imposes generative supervision to the Observed Field with WordNet-based correlation constraints as Gibbs distribution. By simplifying the Markov property in the Hidden Field, both unsupervised and supervised (label independent) vocabularies can be derived from our framework. We validate our performances in two challenging computer vision tasks with comparisons to state-of-the-arts: (1) Large-scale image search on a Flickr 60,000 database; (2) Object recognition on the PASCAL VOC database.",
"This paper presents a new algorithm for the automatic recognition of object classes from images (categorization). Compact and yet discriminative appearance-based object class models are automatically learned from a set of training images. The method is simple and extremely fast, making it suitable for many applications such as semantic image retrieval, Web search, and interactive image editing. It classifies a region according to the proportions of different visual words (clusters in feature space). The specific visual words and the typical proportions in each object are learned from a segmented training set. The main contribution of this paper is twofold: i) an optimally compact visual dictionary is learned by pair-wise merging of visual words from an initially large dictionary. The final visual words are described by GMMs. ii) A novel statistical measure of discrimination is proposed which is optimized by each merge operation. High classification accuracy is demonstrated for nine object classes on photographs of real objects viewed under general lighting conditions, poses and viewpoints. The set of test images used for validation comprise: i) photographs acquired by us, ii) images from the Web and iii) images from the recently released Pascal dataset. The proposed algorithm performs well on both texture-rich objects (e.g. grass, sky, trees) and structure-rich ones (e.g. cars, bikes, planes)",
"This paper proposes a technique for jointly quantizing continuous features and the posterior distributions of their class labels based on minimizing empirical information loss such that the quantizer index of a given feature vector approximates a sufficient statistic for its class label. Informally, the quantized representation retains as much information as possible for classifying the feature vector correctly. We derive an alternating minimization procedure for simultaneously learning codebooks in the Euclidean feature space and in the simplex of posterior class distributions. The resulting quantizer can be used to encode unlabeled points outside the training set and to predict their posterior class distributions, and has an elegant interpretation in terms of lossless source coding. The proposed method is validated on synthetic and real data sets and is applied to two diverse problems: learning discriminative visual vocabularies for bag-of-features image classification and image segmentation."
]
} |
1512.04650 | 2952360713 | The attentional mechanism has proven to be effective in improving end-to-end neural machine translation. However, due to the intricate structural divergence between natural languages, unidirectional attention-based models might only capture partial aspects of attentional regularities. We propose agreement-based joint training for bidirectional attention-based end-to-end neural machine translation. Instead of training source-to-target and target-to-source translation models independently, our approach encourages the two complementary models to agree on word alignment matrices on the same training data. Experiments on Chinese-English and English-French translation tasks show that agreement-based joint training significantly improves both alignment and translation quality over independent training. | After analyzing the alignment matrices generated by RNNsearch @cite_7 , we find that modeling the structural divergence of natural languages is so challenging that unidirectional models can only capture part of alignment regularities. This finding inspires us to improve attention-based NMT by combining two unidirectional models. In this work, we only apply agreement-based joint learning to RNNsearch . As our approach does not assume specific network architectures, it is possible to apply it to the models proposed by . | {
"cite_N": [
"@cite_7"
],
"mid": [
"2949335953"
],
"abstract": [
"An attentional mechanism has lately been used to improve neural machine translation (NMT) by selectively focusing on parts of the source sentence during translation. However, there has been little work exploring useful architectures for attention-based NMT. This paper examines two simple and effective classes of attentional mechanism: a global approach which always attends to all source words and a local one that only looks at a subset of source words at a time. We demonstrate the effectiveness of both approaches over the WMT translation tasks between English and German in both directions. With local attention, we achieve a significant gain of 5.0 BLEU points over non-attentional systems which already incorporate known techniques such as dropout. Our ensemble model using different attention architectures has established a new state-of-the-art result in the WMT'15 English to German translation task with 25.9 BLEU points, an improvement of 1.0 BLEU points over the existing best system backed by NMT and an n-gram reranker."
]
} |
1512.04483 | 2202083088 | Matrix factorization (MF) and Autoencoder (AE) are among the most successful approaches of unsupervised learning. While MF based models have been extensively exploited in the graph modeling and link prediction literature, the AE family has not gained much attention. In this paper we investigate both MF and AE's application to the link prediction problem in sparse graphs. We show the connection between AE and MF from the perspective of multiview learning, and further propose MF+AE: a model training MF and AE jointly with shared parameters. We apply dropout to training both the MF and AE parts, and show that it can significantly prevent overfitting by acting as an adaptive regularization. We conduct experiments on six real world sparse graph datasets, and show that MF+AE consistently outperforms the competing methods, especially on datasets that demonstrate strong non-cohesive structures. | The utilization of dropout training as an implicit regularization also contrasts with Bayesian models @cite_7 @cite_28 . While both dropout and Bayesian Inference are designed to reduce overfitting, their approaches are essentially orthogonal to each other. It would be an interesting future work to investigate whether they can be combined to further increase the generalization ability. Dropout has also been applied to training generalized linear models @cite_1 , log linear models with structured output @cite_25 , and distance metric learning @cite_23 . | {
"cite_N": [
"@cite_7",
"@cite_28",
"@cite_1",
"@cite_23",
"@cite_25"
],
"mid": [
"2107107106",
"",
"2952825952",
"2093549852",
"2250968750"
],
"abstract": [
"Consider data consisting of pairwise measurements, such as presence or absence of links between pairs of objects. These data arise, for instance, in the analysis of protein interactions and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing pairwise measurements with probabilistic models requires special assumptions, since the usual independence or exchangeability assumptions no longer hold. Here we introduce a class of variance allocation models for pairwise measurements: mixed membership stochastic blockmodels. These models combine global parameters that instantiate dense patches of connectivity (blockmodel) with local parameters that instantiate node-specific variability in the connections (mixed membership). We develop a general variational inference algorithm for fast approximate posterior inference. We demonstrate the advantages of mixed membership stochastic blockmodels with applications to social networks and protein interaction networks.",
"",
"Dropout and other feature noising schemes control overfitting by artificially corrupting the training data. For generalized linear models, dropout performs a form of adaptive regularization. Using this viewpoint, we show that the dropout regularizer is first-order equivalent to an L2 regularizer applied after scaling the features by an estimate of the inverse diagonal Fisher information matrix. We also establish a connection to AdaGrad, an online learning algorithm, and find that a close relative of AdaGrad operates by repeatedly solving linear dropout-regularized problems. By casting dropout as regularization, we develop a natural semi-supervised algorithm that uses unlabeled data to create a better adaptive regularizer. We apply this idea to document classification tasks, and show that it consistently boosts the performance of dropout training, improving on state-of-the-art results on the IMDB reviews dataset.",
"Distance metric learning (DML) aims to learn a distance metric better than Euclidean distance. It has been successfully applied to various tasks, e.g., classification, clustering and information retrieval. Many DML algorithms suffer from the over-fitting problem because of a large number of parameters to be determined in DML. In this paper, we exploit the dropout technique, which has been successfully applied in deep learning to alleviate the over-fitting problem, for DML. Different from the previous studies that only apply dropout to training data, we apply dropout to both the learned metrics and the training data. We illustrate that application of dropout to DML is essentially equivalent to matrix norm based regularization. Compared with the standard regularization scheme in DML, dropout is advantageous in simulating the structured regularizers which have shown consistently better performance than non structured regularizers. We verify, both empirically and theoretically, that dropout is effective in regulating the learned metric to avoid the over-fitting problem. Last, we examine the idea of wrapping the dropout technique in the state-of-art DML methods and observe that the dropout technique can significantly improve the performance of the original DML methods.",
"NLP models have many and sparse features, and regularization is key for balancing model overfitting versus underfitting. A recently repopularized form of regularization is to generate fake training data by repeatedly adding noise to real data. We reinterpret this noising as an explicit regularizer, and approximate it with a second-order formula that can be used during training without actually generating fake data. We show how to apply this method to structured prediction using multinomial logistic regression and linear-chain CRFs. We tackle the key challenge of developing a dynamic program to compute the gradient of the regularizer efficiently. The regularizer is a sum over inputs, so we can estimate it more accurately via a semi-supervised or transductive extension. Applied to text classification and NER, our method provides a >1% absolute performance gain over use of standard L2 regularization."
]
} |
1512.04483 | 2202083088 | Matrix factorization (MF) and Autoencoder (AE) are among the most successful approaches of unsupervised learning. While MF based models have been extensively exploited in the graph modeling and link prediction literature, the AE family has not gained much attention. In this paper we investigate both MF and AE's application to the link prediction problem in sparse graphs. We show the connection between AE and MF from the perspective of multiview learning, and further propose MF+AE: a model training MF and AE jointly with shared parameters. We apply dropout to training both the MF and AE parts, and show that it can significantly prevent overfitting by acting as an adaptive regularization. We conduct experiments on six real world sparse graph datasets, and show that MF+AE consistently outperforms the competing methods, especially on datasets that demonstrate strong non-cohesive structures. | This work is also related to graph representation learning. Recently, @cite_22 propose to learn node embeddings by predicting the path of a random walk, and they show that the learned representation can boost the performance of the classification task on graph data. It would also be interesting to evaluate the effectiveness of MF+AE in the same setting. | {
"cite_N": [
"@cite_22"
],
"mid": [
"2154851992"
],
"abstract": [
"We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10% higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60% less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection."
]
} |
1512.04389 | 2211558711 | This paper is an extended version of our proceedings paper announced at LICS'16; in order to complement it, this version is written from a different viewpoint including topos-theoretic aspect on our work. Technically, this paper introduces and studies the class of semi-galois categories, which extend galois categories and are dual to profinite monoids in the same way as galois categories are dual to profinite groups; the study on this class of categories is aimed at providing an axiomatic reformulation of Eilenberg's theory of varieties of regular languages--- a branch in formal language theory that has been developed since the mid 1960's and particularly concerns systematic classification of regular languages, finite monoids, and deterministic finite automata. In this paper, detailed proofs of our central results announced at LICS'16 are presented, together with topos-theoretic considerations. The main results include (I) a proof of the duality theorem between profinite monoids and semi-galois categories, extending the duality theorem between profinite groups and galois categories; based on this results on semi-galois categories, we then discuss (II) a reinterpretation of Eilenberg's theory from a viewpoint of duality theorem; in relation with this reinterpretation of the theory, (III) we also give a purely topos-theoretic characterization of classifying topoi BM of profinite monoids M among general coherent topoi, which is a topos-theoretic application of (I). This characterization states that a topos E is equivalent to the classifying topos BM of some profinite monoid M if and only if E is (i) coherent, (ii) noetherian, and (iii) has a surjective coherent point. This topos-theoretic consideration is related to the logical and geometric problems concerning Eilenberg's theory that we addressed at LICS'16, which remain open in this paper. | Provided here is a proof of the duality theorem between profinite monoids and semi-galois categories in the full form of a contravariant equivalence between suitable categories. Our proof is given intentionally as a natural extension of an elementary proof of the duality between profinite groups and galois categories. This elementary proof will be valuable in its own right mostly for those who are particularly concerned with profinite monoids and with comparison with the case of profinite groups: The structure of profinite monoids is still mysterious in general, while their analysis plays a fundamental role in Eilenberg's theory @cite_17 . We shall not proceed to generalize this duality theorem itself in this paper, since we instead proceed to another direction in : We consider another dual (coherent topoi) of semi-galois categories and see that they are axiomatized exactly as coherent noetherian topoi with coherent surjective points. This consideration serves yet another (topos-theoretic) viewpoint on the classical Eilenberg variety theory. | {
"cite_N": [
"@cite_17"
],
"mid": [
"2000856728"
],
"abstract": [
"Let J be the pseudovariety of all finite J-trivial semigroups and let ΩnJ denote the topological semigroup of all n-ary implicit operations on J. The semigroup ΩnJ is generated by the n component projections together with the 2n−1 idempotents. Furthermore, ΩnJ is described as a free algebra of type (1,2) in a certain variety and the word problem is solved in this algebra. As a consequence, ΩnJ is countable, which settles a conjecture proposed by I. Simon."
]
} |
1512.04466 | 2951097597 | In this paper, we investigate the usage of autoencoders in modeling textual data. Traditional autoencoders suffer from at least two aspects: scalability with the high dimensionality of vocabulary size and dealing with task-irrelevant words. We address this problem by introducing supervision via the loss function of autoencoders. In particular, we first train a linear classifier on the labeled data, then define a loss for the autoencoder with the weights learned from the linear classifier. To reduce the bias brought by one single classifier, we define a posterior probability distribution on the weights of the classifier, and derive the marginalized loss of the autoencoder with Laplace approximation. We show that our choice of loss function can be rationalized from the perspective of Bregman Divergence, which justifies the soundness of our model. We evaluate the effectiveness of our model on six sentiment analysis datasets, and show that our model significantly outperforms all the competing methods with respect to classification accuracy. We also show that our model is able to take advantage of unlabeled dataset and get improved performance. We further show that our model successfully learns highly discriminative feature maps, which explains its superior performance. | From the perspective of machine learning methodology, our approach resembles the idea of layer-wise pretraining in deep Neural Networks @cite_5 . Our model differs from the traditional training procedure of autoencoders in that we effectively utilize the label information to guide the representation learning. Related idea has been proposed in @cite_9 , where they train Recursive autoencoders on sentences jointly with prediction of sentiment. Due to the delicate recursive architecture, their model only works on sentences with given parsing trees, and could not generalize to documents. MTC @cite_10 is another work that models the interaction of autoencoders and classifiers. However, their training of autoencoders is purely unsupervised, the interaction comes into play by requiring the classifier to be invariant along the tangents of the learned data manifold. It is not difficult to see that the assumption of MTC would not hold when the class labels did not align well with the data manifold, which is a situation our model does not suffer from. | {
"cite_N": [
"@cite_5",
"@cite_9",
"@cite_10"
],
"mid": [
"2072128103",
"71795751",
""
],
"abstract": [
"Can machine learning deliver AI? Theoretical results, inspiration from the brain and cognition, as well as machine learning experiments suggest that in order to learn the kind of complicated functions that can represent high-level abstractions (e.g. in vision, language, and other AI-level tasks), one would need deep architectures. Deep architectures are composed of multiple levels of non-linear operations, such as in neural nets with many hidden layers, graphical models with many levels of latent variables, or in complicated propositional formulae re-using many sub-formulae. Each level of the architecture represents features at a different level of abstraction, defined as a composition of lower-level features. Searching the parameter space of deep architectures is a difficult task, but new algorithms have been discovered and a new sub-area has emerged in the machine learning community since 2006, following these discoveries. Learning algorithms such as those for Deep Belief Networks and other related unsupervised learning algorithms have recently been proposed to train deep architectures, yielding exciting results and beating the state-of-the-art in certain areas. Learning Deep Architectures for AI discusses the motivations for and principles of learning algorithms for deep architectures. By analyzing and comparing recent results with different learning algorithms for deep architectures, explanations for their success are proposed and discussed, highlighting challenges and suggesting avenues for future explorations in this area.",
"We introduce a novel machine learning framework based on recursive autoencoders for sentence-level prediction of sentiment label distributions. Our method learns vector space representations for multi-word phrases. In sentiment prediction tasks these representations outperform other state-of-the-art approaches on commonly used datasets, such as movie reviews, without using any pre-defined sentiment lexica or polarity shifting rules. We also evaluate the model's ability to predict sentiment distributions on a new dataset based on confessions from the experience project. The dataset consists of personal user stories annotated with multiple labels which, when aggregated, form a multinomial distribution that captures emotional reactions. Our algorithm can more accurately predict distributions over such labels compared to several competitive baselines.",
""
]
} |
1512.04476 | 2950919774 | Several projects have shown the feasibility to use textual social media data to track public health concerns, such as temporal influenza patterns or geographical obesity patterns. In this paper, we look at whether geo-tagged images from Instagram also provide a viable data source. Especially for "lifestyle" diseases, such as obesity, drinking or smoking, images of social gatherings could provide information that is not necessarily shared in, say, tweets. In this study, we explore whether (i) tags provided by the users and (ii) annotations obtained via automatic image tagging are indeed valuable for studying public health. We find that both user-provided and machine-generated tags provide information that can be used to infer a county's health statistics. Whereas for most statistics user-provided tags are better features, for predicting excessive drinking machine-generated tags such as "liquid" and "glass" yield better models. This hints at the potential of using machine-generated tags to study substance abuse. | Recent studies have shown that a large scale, real time, non-intrusive monitoring can be done using social media to get aggregate statistics about the health and well being of a population @cite_4 @cite_15 @cite_26 . Twitter in particular has been widely used in studies on public health @cite_25 @cite_12 @cite_13 @cite_14 , due to its vast amount of data and the ease of availability of data. | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_14",
"@cite_15",
"@cite_13",
"@cite_25",
"@cite_12"
],
"mid": [
"1615870545",
"1969894105",
"",
"2102742655",
"2164912194",
"201361503",
""
],
"abstract": [
"The exponentially increasing stream of real time big data produced by Web 2.0 Internet and mobile networks created radically new interdisciplinary challenges for public health and computer science. Traditional public health disease surveillance systems have to utilize the potential created by new situation-aware realtime signals from social media, mobile sensor networks and citizens' participatory surveillance systems providing invaluable free realtime event-based signals for epidemic intelligence. However, rather than improving existing isolated systems, an integrated solution bringing together existing epidemic intelligence systems scanning news media (e.g., GPHIN, MedISys) with real-time social media intelligence (e.g., Twitter, participatory systems) is required to substantially improve and automate early warning, outbreak detection and preparedness operations. However, automatic monitoring and novel verification methods for these multichannel event-based real time signals has to be integrated with traditional case-based surveillance systems from microbiological laboratories and clinical reporting. Finally, the system needs effectively support coordination of epidemiological teams, risk communication with citizens and implementation of prevention measures. However, from computational perspective, signal detection, analysis and verification of very high noise realtime big data provide a number of interdisciplinary challenges for computer science. Novel approaches integrating current systems into a digital public health dashboard can enhance signal verification methods and automate the processes assisting public health experts in providing better informed and more timely response. In this paper, we describe the roadmap to such a system, components of an integrated public health surveillance services and computing challenges to be resolved to create an integrated real world solution.",
"Recent work in machine learning and natural language processing has studied the health content of tweets and demonstrated the potential for extracting useful public health information from their aggregation. This article examines the types of health topics discussed on Twitter, and how tweets can both augment existing public health capabilities and enable new ones. The author also discusses key challenges that researchers must address to deliver high-quality tools to the public health community.",
"",
"We present a review of pharmacovigilance techniques from social media (SM) data. Our review discusses twenty-two studies, comparing them across various axes. We present a possible pathway for automated pharmacovigilance research from SM. Objective: Automatic monitoring of Adverse Drug Reactions (ADRs), defined as adverse patient outcomes caused by medications, is a challenging research problem that is currently receiving significant attention from the medical informatics community. In recent years, user-posted data on social media, primarily due to its sheer volume, has become a useful resource for ADR monitoring. Research using social media data has progressed using various data sources and techniques, making it difficult to compare distinct systems and their performances. In this paper, we perform a methodical review to characterize the different approaches to ADR detection/extraction from social media, and their applicability to pharmacovigilance. In addition, we present a potential systematic pathway to ADR monitoring from social media. Methods: We identified studies describing approaches for ADR detection from social media from the Medline, Embase, Scopus and Web of Science databases, and the Google Scholar search engine. Studies that met our inclusion criteria were those that attempted to extract ADR information posted by users on any publicly available social media platform. We categorized the studies according to different characteristics such as primary ADR detection approach, size of corpus, data source(s), availability, and evaluation criteria. Results: Twenty-two studies met our inclusion criteria, with fifteen (68%) published within the last two years. However, publicly available annotated data is still scarce, and we found only six studies that made the annotations used publicly available, making system performance comparisons difficult. In terms of algorithms, supervised classification techniques to detect posts containing ADR mentions, and lexicon-based approaches for extraction of ADR mentions from texts have been the most popular. Conclusion: Our review suggests that interest in the utilization of the vast amounts of available social media data for ADR monitoring is increasing. In terms of sources, both health-related and general social media data have been used for ADR detection; while health-related sources tend to contain higher proportions of relevant data, the volume of data from general social media websites is significantly higher. There is still very limited amount of annotated data publicly available, and, as indicated by the promising results obtained by recent supervised learning approaches, there is a strong need to make such data available to the research community.",
"Traditional public health surveillance requires regular clinical reports and considerable effort by health professionals to analyze data. Therefore, a low cost alternative is of great practical use. As a platform used by over 500 million users worldwide to publish their ideas about many topics, including health conditions, Twitter provides researchers the freshest source of public health conditions on a global scale. We propose a framework for tracking public health condition trends via Twitter. The basic idea is to use frequent term sets from highly purified health-related tweets as queries into a Wikipedia article index -- treating the retrieval of medically-related articles as an indicator of a health-related condition. By observing fluctuations in frequent term sets and in turn medically-related articles over a series of time slices of tweets, we detect shifts in public health conditions and concerns over time. Compared to existing approaches, our framework provides a general a priori identification of emerging public health conditions rather than a specific illness (e.g., influenza) as is commonly done.",
"Analyzing user messages in social media can measure different population characteristics, including public health measures. For example, recent work has correlated Twitter messages with influenza rates in the United States; but this has largely been the extent of mining Twitter for public health. In this work, we consider a broader range of public health applications for Twitter. We apply the recently introduced Ailment Topic Aspect Model to over one and a half million health related tweets and discover mentions of over a dozen ailments, including allergies, obesity and insomnia. We introduce extensions to incorporate prior knowledge into this model and apply it to several tasks: tracking illnesses over times (syndromic surveillance), measuring behavioral risk factors, localizing illnesses by geographic region, and analyzing symptoms and medication usage. We show quantitative correlations with public health data and qualitative evaluations of model output. Our results suggest that Twitter has broad applicability for public health research.",
""
]
} |
1512.04476 | 2950919774 | Several projects have shown the feasibility to use textual social media data to track public health concerns, such as temporal influenza patterns or geographical obesity patterns. In this paper, we look at whether geo-tagged images from Instagram also provide a viable data source. Especially for "lifestyle" diseases, such as obesity, drinking or smoking, images of social gatherings could provide information that is not necessarily shared in, say, tweets. In this study, we explore whether (i) tags provided by the users and (ii) annotations obtained via automatic image tagging are indeed valuable for studying public health. We find that both user-provided and machine-generated tags provide information that can be used to infer a county's health statistics. Whereas for most statistics user-provided tags are better features, for predicting excessive drinking machine-generated tags such as "liquid" and "glass" yield better models. This hints at the potential of using machine-generated tags to study substance abuse. | Culotta @cite_21 and @cite_9 used Twitter in conjunction with psychometric lexicons such as LIWC and PERMA to predict county-level health statistics such as obesity, teen pregnancy and diabetes. @cite_6 make use of Twitter data to identify health related topics and use these to characterize the discussion of health online. @cite_11 use Foursquare and Instagram images to study food consumption patterns in the US, and find a correlation between obesity and fast food restaurants. | {
"cite_N": [
"@cite_9",
"@cite_21",
"@cite_6",
"@cite_11"
],
"mid": [
"2001488574",
"2104925568",
"2079591709",
"1974289028"
],
"abstract": [
"Food is an integral part of our lives, cultures, and well-being, and is of major interest to public health. The collection of daily nutritional data involves keeping detailed diaries or periodic surveys and is limited in scope and reach. Alternatively, social media is infamous for allowing its users to update the world on the minutiae of their daily lives, including their eating habits. In this work we examine the potential of Twitter to provide insight into US-wide dietary choices by linking the tweeted dining experiences of 210K users to their interests, demographics, and social networks. We validate our approach by relating the caloric values of the foods mentioned in the tweets to the state-wide obesity rates, achieving a Pearson correlation of 0.77 across the 50 US states and the District of Columbia. We then build a model to predict county-wide obesity and diabetes statistics based on a combination of demographic variables and food names mentioned on Twitter. Our results show significant improvement over previous CHI research (Culotta 2014). We further link this data to societal and economic factors, such as education and income, illustrating that areas with higher education levels tweet about food that is significantly less caloric. Finally, we address the somewhat controversial issue of the social nature of obesity (Christakis & Fowler 2007) by inducing two social networks using mentions and reciprocal following relationships.",
"Understanding the relationships among environment, behavior, and health is a core concern of public health researchers. While a number of recent studies have investigated the use of social media to track infectious diseases such as influenza, little work has been done to determine if other health concerns can be inferred. In this paper, we present a large-scale study of 27 health-related statistics, including obesity, health insurance coverage, access to healthy foods, and teen birth rates. We perform a linguistic analysis of the Twitter activity in the top 100 most populous counties in the U.S., and find a significant correlation with 6 of the 27 health statistics. When compared to traditional models based on demographic variables alone, we find that augmenting models with Twitter-derived information improves predictive accuracy for 20 of 27 statistics, suggesting that this new methodology can complement existing approaches.",
"By aggregating self-reported health statuses across millions of users, we seek to characterize the variety of health information discussed in Twitter. We describe a topic modeling framework for discovering health topics in Twitter, a social media website. This is an exploratory approach with the goal of understanding what health topics are commonly discussed in social media. This paper describes in detail a statistical topic model created for this purpose, the Ailment Topic Aspect Model (ATAM), as well as our system for filtering general Twitter data based on health keywords and supervised classification. We show how ATAM and other topic models can automatically infer health topics in 144 million Twitter messages from 2011 to 2013. ATAM discovered 13 coherent clusters of Twitter messages, some of which correlate with seasonal influenza (r = 0.689) and allergies (r = 0.810) temporal surveillance data, as well as exercise (r = .534) and obesity (r = −.631) related geographic survey data in the United States. These results demonstrate that it is possible to automatically discover topics that attain statistically significant correlations with ground truth data, despite using minimal human supervision and no historical data to train the model, in contrast to prior work. Additionally, these results demonstrate that a single general-purpose model can identify many different health topics in social media.",
"We present a large-scale analysis of Instagram pictures taken at 164,753 restaurants by millions of users. Motivated by the obesity epidemic in the United States, our aim is three-fold: (i) to assess the relationship between fast food and chain restaurants and obesity, (ii) to better understand people's thoughts on and perceptions of their daily dining experiences, and (iii) to reveal the nature of social reinforcement and approval in the context of dietary health on social media. When we correlate the prominence of fast food restaurants in US counties with obesity, we find the Foursquare data to show a greater correlation at 0.424 than official survey data from the County Health Rankings would show. Our analysis further reveals a relationship between small businesses and local foods with better dietary health, with such restaurants getting more attention in areas of lower obesity. However, even in such areas, social approval favors the unhealthy foods high in sugar, with donut shops producing the most liked photos. Thus, the dietary landscape our study reveals is a complex ecosystem, with fast food playing a role alongside social interactions and personal perceptions, which often may be at odds."
]
} |
1512.04476 | 2950919774 | Several projects have shown the feasibility to use textual social media data to track public health concerns, such as temporal influenza patterns or geographical obesity patterns. In this paper, we look at whether geo-tagged images from Instagram also provide a viable data source. Especially for "lifestyle" diseases, such as obesity, drinking or smoking, images of social gatherings could provide information that is not necessarily shared in, say, tweets. In this study, we explore whether (i) tags provided by the users and (ii) annotations obtained via automatic image tagging are indeed valuable for studying public health. We find that both user-provided and machine-generated tags provide information that can be used to infer a county's health statistics. Whereas for most statistics user-provided tags are better features, for predicting excessive drinking machine-generated tags such as "liquid" and "glass" yield better models. This hints at the potential of using machine-generated tags to study substance abuse. | Social media has also enabled large-scale studies linking lifestyle and health data at an @math level. For example, @cite_5 build a classifier to identify the health of a user based on their Twitter usage. Many more "hidden", yet important conditions such as depression @cite_7 @cite_0 @cite_23 , sleep problems @cite_17 , eating disorders @cite_2 , and substance use @cite_18 have been studied using social media data. Our study is different from the ones discussed above in that we propose the use of image data to study public health. Abdullah et al. @cite_3 use smile recognition from images posted on social media to study and quantify the overall societal happiness. @cite_23 study depression related images on Instagram and establish[ed] "the importance of visual imagery as a vehicle for expressing aspects of depression". In our work, we study if the use of image recognition techniques helps in understanding a broader range of health-related issues. | {
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_3",
"@cite_0",
"@cite_23",
"@cite_2",
"@cite_5",
"@cite_17"
],
"mid": [
"2091816755",
"2402700",
"2041003138",
"1998970095",
"1998830562",
"1567208000",
"2012523161",
"1596538443"
],
"abstract": [
"Adolescents are developmentally sensitive to pathways that influence alcohol and other drug (AOD) use. In the absence of guidance, their routine engagement with social media may add a further layer of risk. There are several potential mechanisms for social media use to influence AOD risk, including exposure to peer portrayals of AOD use, socially amplified advertising, misinformation, and predatory marketing against a backdrop of lax regulatory systems and privacy controls. Here the authors summarize the influences of the social media world and suggest how pediatricians in everyday practice can alert youth and their parents to these risks to foster conversation, awareness, and harm reduction.",
"Major depression constitutes a serious challenge in personal and public health. Tens of millions of people each year suffer from depression and only a fraction receives adequate treatment. We explore the potential to use social media to detect and diagnose major depressive disorder in individuals. We first employ crowdsourcing to compile a set of Twitter users who report being diagnosed with clinical depression, based on a standard psychometric instrument. Through their social media postings over a year preceding the onset of depression, we measure behavioral attributes relating to social engagement, emotion, language and linguistic styles, ego network, and mentions of antidepressant medications. We leverage these behavioral cues, to build a statistical classifier that provides estimates of the risk of depression, before the reported onset. We find that social media contains useful signals for characterizing the onset of depression in individuals, as measured through decrease in social activity, raised negative affect, highly clustered egonetworks, heightened relational and medicinal concerns, and greater expression of religious involvement. We believe our findings and methods may be useful in developing tools for identifying the onset of major depression, for use by healthcare agencies; or on behalf of individuals, enabling those suffering from depression to be more proactive about their mental health.",
"The increasing adoption of social media provides unprecedented opportunities to gain insight into human nature at vastly broader scales. Regarding the study of population-wide sentiment, prior research commonly focuses on text-based analyses and ignores a treasure trove of sentiment-laden content: images. In this paper, we make methodological and computational contributions by introducing the Smile Index as a formalized measure of societal happiness. Detecting smiles in 9 million geo-located tweets over 16 months, we validate our Smile Index against both text-based techniques and self-reported happiness. We further make observational contributions by applying our metric to explore temporal trends in sentiment, relate public mood to societal events, and predict economic indicators. Reflecting upon the innate, language-independent aspects of facial expressions, we recommend future improvements and applications to enable robust, global-level analyses. We conclude with implications for researchers studying and facilitating the expression of collective emotion through socio-technical systems.",
"Self-disclosure is an important element facilitating improved psychological wellbeing in individuals with mental illness. As social media is increasingly adopted in health related discourse, we examine how these new platforms might be allowing honest and candid expression of thoughts, experiences and beliefs. Specifically, we seek to detect levels of self-disclosure manifested in posts shared on different mental health forums on Reddit. We develop a classifier for the purpose based on content features. The classifier is able to characterize a Reddit post to be of high, low, or no self-disclosure with 78% accuracy. Applying this classifier to general mental health discourse on Reddit, we find that the bulk of such discourse is characterized by high self-disclosure, and that the community responds distinctively to posts that disclose less or more. We conclude with the potential of harnessing our proposed self-disclosure detection algorithm in psychological therapy via social media. We also discuss design considerations for improved community moderation and support in these vulnerable self-disclosing communities.",
"Despite the well-established finding that people share negative emotions less openly than positive ones, a hashtag search for depression-related terms in Instagram yields millions of images. In this study, we examined depression-related images on Instagram along with their accompanying captions. We want to better understand the role of photo sharing in the lives of people who suffer from depression or who frame their experience as such; specifically, whether this practice engages support networks and how social computing systems can be designed to support such interactions. To lay the groundwork for further investigation, we report here on content analysis of depression-related posts.",
"Abstract Purpose Disordered eating behavior—dieting, laxative use, fasting, binge eating—is common in college-aged women (11%–20%). A documented increase in the number of young women experiencing eating psychopathology has been blamed on the rise of engagement with social media sites such as Facebook. We predicted that college-aged women's Facebook intensity (e.g., the amount of time spent on Facebook, number of Facebook friends, and integration of Facebook into daily life), online physical appearance comparison (i.e., comparing one's appearance to others' on social media), and online "fat talk" (i.e., talking negatively about one's body) would be positively associated with their disordered eating behavior. Methods In an online survey, 128 college-aged women (81.3% Caucasian, 6.7% Asian, 9.0% African-American, and 3.0% Other) completed items, which measured their disordered eating, Facebook intensity, online physical appearance comparison, online fat talk, body mass index, depression, anxiety, perfectionism, impulsivity, and self-efficacy. Results In regression analyses, Facebook intensity, online physical appearance comparison, and online fat talk were significantly and uniquely associated with disordered eating and explained a large percentage of the variance in disordered eating (60%) in conjunction with covariates. However, greater Facebook intensity was associated with decreased disordered eating behavior, whereas both online physical appearance comparison and online fat talk were associated with greater disordered eating. Conclusions College-aged women who endorsed greater Facebook intensity were less likely to struggle with disordered eating when online physical appearance comparison was accounted for statistically. Facebook intensity may carry both risks and benefits for disordered eating.",
"Research in computational epidemiology to date has concentrated on estimating summary statistics of populations and simulated scenarios of disease outbreaks. Detailed studies have been limited to small domains, as scaling the methods involved poses considerable challenges. By contrast, we model the associations of a large collection of social and environmental factors with the health of particular individuals. Instead of relying on surveys, we apply scalable machine learning techniques to noisy data mined from online social media and infer the health state of any given person in an automated way. We show that the learned patterns can be subsequently leveraged in descriptive as well as predictive fine-grained models of human health. Using a unified statistical model, we quantify the impact of social status, exposure to pollution, interpersonal interactions, and other important lifestyle factors on one's health. Our model explains more than 54% of the variance in people's health (as estimated from their online communication), and predicts the future health status of individuals with 91% accuracy. Our methods complement traditional studies in life sciences, as they enable us to perform large-scale and timely measurement, inference, and prediction of previously elusive factors that affect our everyday lives.",
"Background: Sleep issues such as insomnia affect over 50 million Americans and can lead to serious health problems, including depression and obesity, and can increase risk of injury. Social media platforms such as Twitter offer exciting potential for their use in studying and identifying both diseases and social phenomenon. Objective: Our aim was to determine whether social media can be used as a method to conduct research focusing on sleep issues. Methods: Twitter posts were collected and curated to determine whether a user exhibited signs of sleep issues based on the presence of several keywords in tweets such as insomnia, “can’t sleep”, Ambien, and others. Users whose tweets contain any of the keywords were designated as having self-identified sleep issues (sleep group). Users who did not have self-identified sleep issues (non-sleep group) were selected from tweets that did not contain pre-defined words or phrases used as a proxy for sleep issues. Results: User data such as number of tweets, friends, followers, and location were collected, as well as the time and date of tweets. Additionally, the sentiment of each tweet and average sentiment of each user were determined to investigate differences between non-sleep and sleep groups. It was found that sleep group users were significantly less active on Twitter (P = .04), had fewer friends (P < .001), and fewer followers (P < .001) compared to others, after adjusting for the length of time each user's account has been active. Sleep group users were more active during typical sleeping hours than others, which may suggest they were having difficulty sleeping. Sleep group users also had significantly lower sentiment in their tweets (P < .001), indicating a possible relationship between sleep and psychosocial issues. Conclusions: We have demonstrated a novel method for studying sleep issues that allows for fast, cost-effective, and customizable data to be gathered. [J Med Internet Res 2015;17(6):e140]"
]
} |
1512.04476 | 2950919774 | Several projects have shown the feasibility to use textual social media data to track public health concerns, such as temporal influenza patterns or geographical obesity patterns. In this paper, we look at whether geo-tagged images from Instagram also provide a viable data source. Especially for "lifestyle" diseases, such as obesity, drinking or smoking, images of social gatherings could provide information that is not necessarily shared in, say, tweets. In this study, we explore whether (i) tags provided by the users and (ii) annotations obtained via automatic image tagging are indeed valuable for studying public health. We find that both user-provided and machine-generated tags provide information that can be used to infer a county's health statistics. Whereas for most statistics user-provided tags are better features, for predicting excessive drinking machine-generated tags such as "liquid" and "glass" yield better models. This hints at the potential of using machine-generated tags to study substance abuse. | Almost all the methods above rely on textual content, though images and other rich multimedia form a major chunk of content being generated and shared in social media. Automatic image annotation has greatly improved over the last couple of years, owing to developments in deep learning @cite_20 . Object recognition @cite_24 and image tagging @cite_8 have become possible because of these new developments, e.g. @cite_8 use deep learning to produce descriptions of images, which compete with (and sometimes beat) human-generated labels. A few studies already make use of these advances to identify @cite_10 and study @cite_22 food consumption from pictures. | {
"cite_N": [
"@cite_22",
"@cite_8",
"@cite_24",
"@cite_10",
"@cite_20"
],
"mid": [
"2168154353",
"2951805548",
"1563686443",
"2087006489",
"2136922672"
],
"abstract": [
"Estimating the nutritional value of food based on image recognition is important to health support services employing mobile devices. The estimation accuracy can be improved by recognizing regions of food objects and ingredients contained in those regions. In this paper, we propose a method that estimates nutritional information based on segmentation and labeling of food regions of an image by adopting a semantic segmentation method, in which we consider recipes as corresponding sets of food images and ingredient labels. Any food object or ingredient in a test food image can be annotated as long as the ingredient is contained in a training food image, even if the menu containing the food image appears for the first time. Experimental results show that better estimation is achieved through regression analysis using ingredient labels associated with the segmented regions than when using the local feature of pixels as the predictor variable.",
"We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.",
"We present a state-of-the-art image recognition system, Deep Image, developed using end-to-end deep learning. The key components are a custom-built supercomputer dedicated to deep learning, a highly optimized parallel algorithm using new strategies for data partitioning and communication, larger deep neural network models, novel data augmentation approaches, and usage of multi-scale high-resolution images. Our method achieves excellent results on multiple challenging computer vision benchmarks.",
"We propose a mobile food recognition system, FoodCam, the purposes of which are estimating calorie and nutrition of foods and recording a user's eating habits. In this paper, we propose image recognition methods which are suitable for mobile devices. The proposed method enables real-time food image recognition on a consumer smartphone. This characteristic is completely different from the existing systems which require to send images to an image recognition server. To recognize food items, a user draws bounding boxes by touching the screen first, and then the system starts food item recognition within the indicated bounding boxes. To recognize them more accurately, we segment each food item region by GrabCut, extract image features and finally classify it into one of the one hundred food categories with a linear SVM. As image features, we adopt two kinds of features: one is the combination of the standard bag-of-features and color histograms with χ2 kernel feature maps, and the other is a HOG patch descriptor and a color patch descriptor with the state-of-the-art Fisher Vector representation. In addition, the system estimates the direction of food regions where the higher SVM output score is expected to be obtained, and it shows the estimated direction in an arrow on the screen in order to ask a user to move a smartphone camera. This recognition process is performed repeatedly and continuously. We implemented this system as a standalone mobile application for Android smartphones so as to use multiple CPU cores effectively for real-time recognition. In the experiments, we have achieved a 79.2% classification rate for the top 5 category candidates for a 100-category food dataset with the ground-truth bounding boxes when we used HOG and color patches with the Fisher Vector coding as image features. In addition, we obtained positive evaluation by a user study compared to the food recording system without object recognition.",
"We show how to use "complementary priors" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind."
]
} |
1512.04138 | 2289648688 | We show the first dimension-preserving search-to-decision reductions for approximate SVP and CVP. In particular, for any @math , we obtain an efficient dimension-preserving reduction from @math -SVP to @math -GapSVP and an efficient dimension-preserving reduction from @math -CVP to @math -GapCVP. These results generalize the known equivalences of the search and decision versions of these problems in the exact case when @math . For SVP, we actually obtain something slightly stronger than a search-to-decision reduction---we reduce @math -SVP to @math -unique SVP, a potentially easier problem than @math -GapSVP. | Some efficient dimension-preserving search-to-decision reductions were known for other lattice problems prior to this work. For example, Regev showed such a reduction for Learning with Errors, an important average-case lattice problem with widespread applications in cryptography @cite_44 . (Both the search and decision versions of LWE are average-case problems.) And, Liu, Lyubashevsky, and Micciancio implicitly use a search-to-decision reduction for Bounded Distance Decoding in their work @cite_42 . Finally, Aggarwal and Dubey showed how to use some of the ideas from @cite_37 to obtain a search-to-decision reduction for unique SVP @cite_17 . While all of these works are quite interesting, they are concerned with promise problems, and not the two most important and natural lattice problems, SVP and CVP. | {
"cite_N": [
"@cite_44",
"@cite_37",
"@cite_42",
"@cite_17"
],
"mid": [
"2007466965",
"1490468194",
"",
"2949988970"
],
"abstract": [
"Our main result is a reduction from worst-case lattice problems such as GapSVP and SIVP to a certain learning problem. This learning problem is a natural extension of the “learning from parity with error” problem to higher moduli. It can also be viewed as the problem of decoding from a random linear code. This, we believe, gives a strong indication that these problems are hard. Our reduction, however, is quantum. Hence, an efficient solution to the learning problem implies a quantum algorithm for GapSVP and SIVP. A main open question is whether this reduction can be made classical (i.e., nonquantum). We also present a (classical) public-key cryptosystem whose security is based on the hardness of the learning problem. By the main result, its security is also based on the worst-case quantum hardness of GapSVP and SIVP. The new cryptosystem is much more efficient than previous lattice-based cryptosystems: the public key is of size O(n2) and encrypting a message increases its size by a factor of O(n) (in previous cryptosystems these values are O(n4) and O(n2), respectively). In fact, under the assumption that all parties share a random bit string of length O(n2), the size of the public key can be reduced to O(n).",
"We prove the equivalence, up to a small polynomial approximation factor @math , of the lattice problems uSVP (unique Shortest Vector Problem), BDD (Bounded Distance Decoding) and GapSVP (the decision version of the Shortest Vector Problem). This resolves a long-standing open problem about the relationship between uSVP and the more standard GapSVP, as well as the BDD problem commonly used in coding theory. The main cryptographic application of our work is the proof that the Ajtai-Dwork ([2]) and the Regev ([33]) cryptosystems, which were previously only known to be based on the hardness of uSVP, can be equivalently based on the hardness of worst-case GapSVP @math and GapSVP @math , respectively. Also, in the case of uSVP and BDD, our connection is very tight, establishing the equivalence (within a small constant approximation factor) between the two most central problems used in lattice based public key cryptography and coding theory.",
"",
"We give several improvements on the known hardness of the unique shortest vector problem. - We give a deterministic reduction from the shortest vector problem to the unique shortest vector problem. As a byproduct, we get deterministic NP-hardness for unique shortest vector problem in the @math norm. - We give a randomized reduction from SAT to uSVP_{1+1/poly(n)}. This shows that uSVP_{1+1/poly(n)} is NP-hard under randomized reductions. - We show that if GapSVP_β ∈ coNP (or coAM) then uSVP_β ∈ coNP (coAM, respectively). This simplifies the previously known uSVP_{n^{1/4}} ∈ coAM proof by Cai [Cai98] to uSVP_{(n/log n)^{1/4}} ∈ coAM, and additionally generalizes it to uSVP_{n^{1/4}} ∈ coNP. - We give a deterministic reduction from search-uSVP_γ to decision-uSVP_{γ/2}. We also show that the decision-uSVP is NP-hard for randomized reductions, which does not follow from Kumar-Sivakumar [KS01]."
]
} |
1512.04138 | 2289648688 | We show the first dimension-preserving search-to-decision reductions for approximate SVP and CVP. In particular, for any @math , we obtain an efficient dimension-preserving reduction from @math -SVP to @math -GapSVP and an efficient dimension-preserving reduction from @math -CVP to @math -GapCVP. These results generalize the known equivalences of the search and decision versions of these problems in the exact case when @math . For SVP, we actually obtain something slightly stronger than a search-to-decision reduction---we reduce @math -SVP to @math -unique SVP, a potentially easier problem than @math -GapSVP. | We rely heavily on Lyubashevsky and Micciancio's dimension-preserving reduction from @math -unique SVP to @math -GapSVP @cite_37 . Their result is necessary to prove Theorem , and our deterministic bit-by-bit SVP reduction is very similar to Lyubashevsky and Micciancio's reduction. The main difference between our deterministic SVP reduction and that of Lyubashevsky and Micciancio is that @cite_37 work only with lattices that satisfy the promise of @math -unique SVP. They show that this promise is enough to guarantee that the @math -GapSVP oracle essentially behaves as a GapSVP oracle. In contrast, our reduction works over general lattices, so we have to worry about accumulating error. (We also use a different method to reduce the dimension of the lattice.) | {
"cite_N": [
"@cite_37"
],
"mid": [
"1490468194"
],
"abstract": [
"We prove the equivalence, up to a small polynomial approximation factor @math , of the lattice problems uSVP (unique Shortest Vector Problem), BDD (Bounded Distance Decoding) and GapSVP (the decision version of the Shortest Vector Problem). This resolves a long-standing open problem about the relationship between uSVP and the more standard GapSVP, as well as the BDD problem commonly used in coding theory. The main cryptographic application of our work is the proof that the Ajtai-Dwork ([2]) and the Regev ([33]) cryptosystems, which were previously only known to be based on the hardness of uSVP, can be equivalently based on the hardness of worst-case GapSVP @math and GapSVP @math , respectively. Also, in the case of uSVP and BDD, our connection is very tight, establishing the equivalence (within a small constant approximation factor) between the two most central problems used in lattice based public key cryptography and coding theory."
]
} |
1512.03880 | 2205115547 | Recent years have witnessed amazing outcomes from "Big Models" trained by "Big Data". Most popular algorithms for model training are iterative. Due to the surging volumes of data, we can usually afford to process only a fraction of the training data in each iteration. Typically, the data are either uniformly sampled or sequentially accessed. In this paper, we study how the data access pattern can affect model training. We propose an Active Sampler algorithm, where training data with more "learning value" to the model are sampled more frequently. The goal is to focus training effort on valuable instances near the classification boundaries, rather than evident cases, noisy data or outliers. We show the correctness and optimality of Active Sampler in theory, and then develop a light-weight vectorized implementation. Active Sampler is orthogonal to most approaches optimizing the efficiency of large-scale data analytics, and can be applied to most analytics models trained by stochastic gradient descent (SGD) algorithm. Extensive experimental evaluations demonstrate that Active Sampler can speed up the training procedure of SVM, feature selection and deep learning, for comparable training quality by 1.6-2.2x. | Complex machine learning models, such as large-scale linear methods @cite_2 , feature selection @cite_1 or deep learning @cite_12 , are widely adopted in Big Data analytics. Due to the huge size of both model and data, how to train these models efficiently is a challenging topic, and the solution requires efforts from the learning, database, and systems communities. Many optimizations have been proposed from a systems perspective for specific classes of models @cite_30 @cite_28 @cite_0 @cite_5 @cite_33 @cite_16 . Most of these algorithms (and many others) can fit into an Empirical Risk Minimization @cite_36 (ERM) framework, for which we aim to develop a more general accelerator. | {
"cite_N": [
"@cite_30",
"@cite_33",
"@cite_28",
"@cite_36",
"@cite_1",
"@cite_0",
"@cite_2",
"@cite_5",
"@cite_16",
"@cite_12"
],
"mid": [
"2099102906",
"",
"2081210343",
"",
"2117699623",
"2032775418",
"2142623206",
"",
"",
"2168231600"
],
"abstract": [
"There is an arms race in the data management industry to support analytics, in which one critical step is feature selection, the process of selecting a feature set that will be used to build a statistical model. Analytics is one of the biggest topics in data management, and feature selection is widely regarded as the most critical step of analytics; thus, we argue that managing the feature selection process is a pressing data management challenge. We study this challenge by describing a feature-selection language and a supporting prototype system that builds on top of current industrial, R-integration layers. From our interactions with analysts, we learned that feature selection is an interactive, human-in-the-loop process, which means that feature selection workloads are rife with reuse opportunities. Thus, we study how to materialize portions of this computation using not only classical database materialization optimizations but also methods that have not previously been used in database optimization, including structural decomposition methods (like QR factorization) and warmstart. These new methods have no analog in traditional SQL systems, but they may be interesting for array and scientific database applications. On a diverse set of data sets and programs, we find that traditional database-style approaches that ignore these new opportunities are more than two orders of magnitude slower than an optimal plan in this new tradeoff space across multiple R-backends. Furthermore, we show that it is possible to build a simple cost-based optimizer to automatically select a near-optimal execution plan for feature selection.",
"",
"Factor graphs and Gibbs sampling are a popular combination for Bayesian statistical methods that are used to solve diverse problems including insurance risk models, pricing models, and information extraction. Given a fixed sampling method and a fixed amount of time, an implementation of a sampler that achieves a higher throughput of samples will achieve a higher quality than a lower-throughput sampler. We study how (and whether) traditional data processing choices about materialization, page layout, and buffer-replacement policy need to be changed to achieve high-throughput Gibbs sampling for factor graphs that are larger than main memory. We find that both new theoretical and new algorithmic techniques are required to understand the tradeoff space for each choice. On both real and synthetic data, we demonstrate that traditional baseline approaches may achieve two orders of magnitude lower throughput than an optimal approach. For a handful of popular tasks across several storage backends, including HBase and traditional unix files, we show that our simple prototype achieves competitive (and sometimes better) throughput compared to specialized state-of-the-art approaches on factor graphs that are larger than main memory.",
"",
"Logistic Regression is a well-known classification method that has been used widely in many applications of data mining, machine learning, computer vision, and bioinformatics. Sparse logistic regression embeds feature selection in the classification framework using the l1-norm regularization, and is attractive in many applications involving high-dimensional data. In this paper, we propose Lassplore for solving large-scale sparse logistic regression. Specifically, we formulate the problem as the l1-ball constrained smooth convex optimization, and propose to solve the problem using Nesterov's method, an optimal first-order black-box method for smooth convex optimization. One of the critical issues in the use of Nesterov's method is the estimation of the step size at each of the optimization iterations. Previous approaches either apply the constant step size, which assumes that the Lipschitz gradient is known in advance, or require a sequence of decreasing step sizes, which leads to slow convergence in practice. In this paper, we propose an adaptive line search scheme which allows to tune the step size adaptively and meanwhile guarantees the optimal convergence rate. Empirical comparisons with several state-of-the-art algorithms demonstrate the efficiency of the proposed Lassplore algorithm for large-scale problems.",
"Enterprise data analytics is a booming area in the data management industry. Many companies are racing to develop toolkits that closely integrate statistical and machine learning techniques with data management systems. Almost all such toolkits assume that the input to a learning algorithm is a single table. However, most relational datasets are not stored as single tables due to normalization. Thus, analysts often perform key-foreign key joins before learning on the join output. This strategy of learning after joins introduces redundancy avoided by normalization, which could lead to poorer end-to-end performance and maintenance overheads due to data duplication. In this work, we take a step towards enabling and optimizing learning over joins for a common class of machine learning techniques called generalized linear models that are solved using gradient descent algorithms in an RDBMS setting. We present alternative approaches to learn over a join that are easy to implement over existing RDBMSs. We introduce a new approach named factorized learning that pushes ML computations through joins and avoids redundancy in both I O and computations. We study the tradeoff space for all our approaches both analytically and empirically. Our results show that factorized learning is often substantially faster than the alternatives, but is not always the fastest, necessitating a cost-based approach. We also discuss extensions of all our approaches to multi-table joins as well as to Hive.",
"We describe and analyze a simple and effective iterative algorithm for solving the optimization problem cast by Support Vector Machines (SVM). Our method alternates between stochastic gradient descent steps and projection steps. We prove that the number of iterations required to obtain a solution of accuracy e is O(1/e). In contrast, previous analyses of stochastic gradient descent methods require Ω(1/e^2) iterations. As in previously devised SVM solvers, the number of iterations also scales linearly with 1/λ, where λ is the regularization parameter of SVM. For a linear kernel, the total run-time of our method is O(d/(λe)), where d is a bound on the number of non-zero features in each example. Since the run-time does not depend directly on the size of the training set, the resulting algorithm is especially suited for learning from large datasets. Our approach can seamlessly be adapted to employ non-linear kernels while working solely on the primal objective function. We demonstrate the efficiency and applicability of our approach by conducting experiments on large text classification problems, comparing our solver to existing state-of-the-art SVM solvers. For example, it takes less than 5 seconds for our solver to converge when solving a text classification problem from Reuters Corpus Volume 1 (RCV1) with 800,000 training examples.",
"",
"",
"Recent work in unsupervised feature learning and deep learning has shown that being able to train large models can dramatically improve performance. In this paper, we consider the problem of training a deep network with billions of parameters using tens of thousands of CPU cores. We have developed a software framework called DistBelief that can utilize computing clusters with thousands of machines to train large models. Within this framework, we have developed two algorithms for large-scale distributed training: (i) Downpour SGD, an asynchronous stochastic gradient descent procedure supporting a large number of model replicas, and (ii) Sandblaster, a framework that supports a variety of distributed batch optimization procedures, including a distributed implementation of L-BFGS. Downpour SGD and Sandblaster L-BFGS both increase the scale and speed of deep network training. We have successfully used our system to train a deep network 30x larger than previously reported in the literature, and achieves state-of-the-art performance on ImageNet, a visual object recognition task with 16 million images and 21k categories. We show that these same techniques dramatically accelerate the training of a more modestly- sized deep network for a commercial speech recognition service. Although we focus on and report performance of these methods as applied to training large neural networks, the underlying algorithms are applicable to any gradient-based machine learning algorithm."
]
} |
1512.03880 | 2205115547 | Recent years have witnessed amazing outcomes from "Big Models" trained by "Big Data". Most popular algorithms for model training are iterative. Due to the surging volumes of data, we can usually afford to process only a fraction of the training data in each iteration. Typically, the data are either uniformly sampled or sequentially accessed. In this paper, we study how the data access pattern can affect model training. We propose an Active Sampler algorithm, where training data with more "learning value" to the model are sampled more frequently. The goal is to focus training effort on valuable instances near the classification boundaries, rather than evident cases, noisy data or outliers. We show the correctness and optimality of Active Sampler in theory, and then develop a light-weight vectorized implementation. Active Sampler is orthogonal to most approaches optimizing the efficiency of large-scale data analytics, and can be applied to most analytics models trained by stochastic gradient descent (SGD) algorithm. Extensive experimental evaluations demonstrate that Active Sampler can speed up the training procedure of SVM, feature selection and deep learning, for comparable training quality by 1.6-2.2x. | Stochastic Gradient Descent @cite_23 (SGD) is one of the most popular stochastic optimization methods. Theoretical results are well studied in @cite_8 . However, @cite_26 has shown that the variance in stochastic gradient is the key factor limiting the convergence rate of SGD. Consequently, many SGD variants such as SAG @cite_9 , SVRG @cite_21 , S3GD @cite_3 , Catalyst @cite_17 have been developed to reduce the variance. The convergence rate of these variants has been greatly improved in both theory and practice in terms of the number of iterations required to reach a certain accuracy. However, the optimization costs of these methods are not negligible, causing the training cost per iteration to increase substantially. | {
"cite_N": [
"@cite_26",
"@cite_8",
"@cite_9",
"@cite_21",
"@cite_3",
"@cite_23",
"@cite_17"
],
"mid": [
"2145832734",
"1992208280",
"1791038712",
"2107438106",
"793269399",
"1994616650",
""
],
"abstract": [
"Stochastic gradient optimization is a class of widely used algorithms for training machine learning models. To optimize an objective, it uses the noisy gradient computed from the random data samples instead of the true gradient computed from the entire dataset. However, when the variance of the noisy gradient is large, the algorithm might spend much time bouncing around, leading to slower convergence and worse performance. In this paper, we develop a general approach of using control variate for variance reduction in stochastic gradient. Data statistics such as low-order moments (pre-computed or estimated online) is used to form the control variate. We demonstrate how to construct the control variate for two practical problems using stochastic gradient optimization. One is convex—the MAP estimation for logistic regression, and the other is non-convex—stochastic variational inference for latent Dirichlet allocation. On both problems, our approach shows faster convergence and better performance than the classical approach.",
"In this paper we consider optimization problems where the objective function is given in a form of the expectation. A basic difficulty of solving such stochastic optimization problems is that the involved multidimensional integrals (expectations) cannot be computed with high accuracy. The aim of this paper is to compare two computational approaches based on Monte Carlo sampling techniques, namely, the stochastic approximation (SA) and the sample average approximation (SAA) methods. Both approaches, the SA and SAA methods, have a long history. Current opinion is that the SAA method can efficiently use a specific (say, linear) structure of the considered problem, while the SA approach is a crude subgradient method, which often performs poorly in practice. We intend to demonstrate that a properly modified SA approach can be competitive and even significantly outperform the SAA method for a certain class of convex stochastic problems. We extend the analysis to the case of convex-concave stochastic saddle point problems and present (in our opinion highly encouraging) results of numerical experiments.",
"We propose the stochastic average gradient (SAG) method for optimizing the sum of a finite number of smooth convex functions. Like stochastic gradient (SG) methods, the SAG method's iteration cost is independent of the number of terms in the sum. However, by incorporating a memory of previous gradient values the SAG method achieves a faster convergence rate than black-box SG methods. The convergence rate is improved from O(1/k^(1/2)) to O(1/k) in general, and when the sum is strongly-convex the convergence rate is improved from the sub-linear O(1/k) to a linear convergence rate of the form O(p^k) for p < 1. Further, in many cases the convergence rate of the new method is also faster than black-box deterministic gradient methods, in terms of the number of gradient evaluations. Numerical experiments indicate that the new algorithm often dramatically outperforms existing SG and deterministic gradient methods, and that the performance may be further improved through the use of non-uniform sampling strategies.",
"Stochastic gradient descent is popular for large scale optimization but has slow convergence asymptotically due to the inherent variance. To remedy this problem, we introduce an explicit variance reduction method for stochastic gradient descent which we call stochastic variance reduced gradient (SVRG). For smooth and strongly convex functions, we prove that this method enjoys the same fast convergence rate as those of stochastic dual coordinate ascent (SDCA) and Stochastic Average Gradient (SAG). However, our analysis is significantly simpler and more intuitive. Moreover, unlike SDCA or SAG, our method does not require the storage of gradients, and thus is more easily applicable to complex problems such as some structured prediction problems and neural network learning.",
"Stochastic gradient descent (SGD) holds as a classical method to build large scale machine learning models over big data. A stochastic gradient is typically calculated from a limited number of samples (known as mini-batch), so it potentially incurs a high variance and causes the estimated parameters bounce around the optimal solution. To improve the stability of stochastic gradient, recent years have witnessed the proposal of several semi-stochastic gradient descent algorithms, which distinguish themselves from standard SGD by incorporating global information into gradient computation. In this paper we contribute a novel stratified semi-stochastic gradient descent (S3GD) algorithm to this nascent research area, accelerating the optimization of a large family of composite convex functions. Though theoretically converging faster, prior semi-stochastic algorithms are found to suffer from high iteration complexity, which makes them even slower than SGD in practice on many datasets. In our proposed S3GD, the semi-stochastic gradient is calculated based on efficient manifold propagation, which can be numerically accomplished by sparse matrix multiplications. This way S3GD is able to generate a highly-accurate estimate of the exact gradient from each mini-batch with largely-reduced computational complexity. Theoretic analysis reveals that the proposed S3GD elegantly balances the geometric algorithmic convergence rate against the space and time complexities during the optimization. The efficacy of S3GD is also experimentally corroborated on several large-scale benchmark datasets.",
"Let M(x) denote the expected value at level x of the response to a certain experiment. M(x) is assumed to be a monotone function of x but is unknown to the experimenter, and it is desired to find the solution x = θ of the equation M(x) = α, where α is a given constant. We give a method for making successive experiments at levels x1, x2, ··· in such a way that xn will tend to θ in probability.",
""
]
} |
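The SVRG variance-reduction scheme summarized in the row above (compute a full gradient at a snapshot, then correct each stochastic gradient against it) can be sketched on a toy least-squares objective. This is an illustrative sketch only, not code from any of the cited papers; the function name, hyperparameters, and problem setup are all invented for the example.

```python
import numpy as np

def svrg_least_squares(A, b, lr=0.02, epochs=20, inner=None, seed=0):
    """Toy SVRG sketch for min_x 0.5 * mean((A @ x - b)**2).

    Per epoch: compute the full gradient at a snapshot x_snap, then take
    inner steps with the variance-reduced gradient
        g_i(x) - g_i(x_snap) + full_grad(x_snap),
    where g_i is the gradient on a single uniformly sampled row i.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    inner = inner or n
    x = np.zeros(d)
    for _ in range(epochs):
        x_snap = x.copy()
        full_grad = A.T @ (A @ x_snap - b) / n
        for _ in range(inner):
            i = rng.integers(n)
            gi_x = A[i] * (A[i] @ x - b[i])          # per-sample gradient at x
            gi_snap = A[i] * (A[i] @ x_snap - b[i])  # same sample at the snapshot
            x = x - lr * (gi_x - gi_snap + full_grad)
    return x
```

Near the snapshot the correction term cancels most of the per-sample noise, which is the variance reduction the SAG/SVRG abstracts above refer to. Requires NumPy ≥ 1.17 for `default_rng`.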
1512.03880 | 2205115547 | Recent years have witnessed amazing outcomes from "Big Models" trained by "Big Data". Most popular algorithms for model training are iterative. Due to the surging volumes of data, we can usually afford to process only a fraction of the training data in each iteration. Typically, the data are either uniformly sampled or sequentially accessed. In this paper, we study how the data access pattern can affect model training. We propose an Active Sampler algorithm, where training data with more "learning value" to the model are sampled more frequently. The goal is to focus training effort on valuable instances near the classification boundaries, rather than evident cases, noisy data or outliers. We show the correctness and optimality of Active Sampler in theory, and then develop a light-weight vectorized implementation. Active Sampler is orthogonal to most approaches optimizing the efficiency of large-scale data analytics, and can be applied to most analytics models trained by stochastic gradient descent (SGD) algorithm. Extensive experimental evaluations demonstrate that Active Sampler can speed up the training procedure of SVM, feature selection and deep learning, for comparable training quality by 1.6-2.2x. | There are also studies @cite_11 on the effect of the learning rate on the convergence rate of SGD. Naturally, reducing the multiplier of the gradient in updates will reduce the variance in each update. This idea motivates us to study whether we can scale down those stochastic gradients with larger variance by using a smaller learning rate, while compensating for the effect of those gradients by increasing their sampling frequency. Based on this intuition, we propose to accelerate SGD training based on the idea of active learning @cite_6 @cite_25 . Active learning was originally proposed to select a set of labeled training data to maximize the accuracy of the model. 
@cite_10 uses the idea of weighted sampling to maximize the information gain of active learning. However, in our Active Sampler, all training data are already labeled, and the active selection is to maximize the learning speed of a passive learning model. | {
"cite_N": [
"@cite_10",
"@cite_25",
"@cite_6",
"@cite_11"
],
"mid": [
"2130492741",
"2134512579",
"2903158431",
"2146502635"
],
"abstract": [
"Learning to rank arises in many information retrieval applications, ranging from Web search engine, online advertising to recommendation system. In learning to rank, the performance of a ranking model is strongly affected by the number of labeled examples in the training set; on the other hand, obtaining labeled examples for training data is very expensive and time-consuming. This presents a great need for the active learning approaches to select most informative examples for ranking learning; however, in the literature there is still very limited work to address active learning for ranking. In this paper, we propose a general active learning framework, Expected Loss Optimization (ELO), for ranking. The ELO framework is applicable to a wide range of ranking functions. Under this framework, we derive a novel algorithm, Expected DCG Loss Optimization (ELO-DCG), to select most informative examples. Furthermore, we investigate both query and document level active learning for ranking and propose a two-stage ELO-DCG algorithm which incorporates both query and document selection into active learning. Extensive experiments on real-world Web search data sets have demonstrated great potential and effectiveness of the proposed framework and algorithms.",
"This study examines the evidence for the effectiveness of active learning. It defines the common forms of active learning most relevant for engineering faculty and critically examines the core element of each method. It is found that there is broad but uneven support for the core elements of active, collaborative, cooperative and problem-based learning.",
"",
"We present a new family of subgradient methods that dynamically incorporate knowledge of the geometry of the data observed in earlier iterations to perform more informative gradient-based learning. Metaphorically, the adaptation allows us to find needles in haystacks in the form of very predictive but rarely seen features. Our paradigm stems from recent advances in stochastic optimization and online learning which employ proximal functions to control the gradient steps of the algorithm. We describe and analyze an apparatus for adaptively modifying the proximal function, which significantly simplifies setting a learning rate and results in regret guarantees that are provably as good as the best proximal function that can be chosen in hindsight. We give several efficient algorithms for empirical risk minimization problems with common and important regularization functions and domain constraints. We experimentally study our theoretical analysis and show that adaptive subgradient methods outperform state-of-the-art, yet non-adaptive, subgradient algorithms."
]
} |
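The intuition in the row above — sample informative examples more often, and reweight their updates so the gradient estimate stays unbiased — can be illustrated with a small importance-sampling SGD sketch. This is not the paper's Active Sampler; the sampling distribution, smoothing trick, and all names here are illustrative assumptions.

```python
import numpy as np

def loss_weighted_sgd(A, b, lr=0.02, rounds=40, inner=200, seed=0):
    """Toy importance-sampling SGD for min_x 0.5 * mean((A @ x - b)**2).

    Each round: recompute per-example gradient norms, mix them with the
    uniform distribution (so no probability drops below 1/(2n), which
    bounds the reweighting factor by 2), sample examples proportionally,
    and scale each sampled gradient by 1/(n * p[i]) so the update stays
    an approximately unbiased estimate of the full gradient -- only
    approximately, because p is held fixed within a round while x moves.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    row_norms = np.linalg.norm(A, axis=1)
    x = np.zeros(d)
    for _ in range(rounds):
        g_norms = np.abs(A @ x - b) * row_norms   # per-example gradient norm
        p = g_norms + g_norms.mean() + 1e-12      # additive smoothing toward uniform
        p /= p.sum()
        for i in rng.choice(n, size=inner, p=p):
            g = A[i] * (A[i] @ x - b[i]) / (n * p[i])  # importance-weight correction
            x = x - lr * g
    return x
```

Sampling proportionally to gradient norm is the classical variance-minimizing choice for the importance distribution; the uniform mixing keeps the correction weights bounded, which is what makes the constant step size safe.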
1512.03880 | 2205115547 | Recent years have witnessed amazing outcomes from "Big Models" trained by "Big Data". Most popular algorithms for model training are iterative. Due to the surging volumes of data, we can usually afford to process only a fraction of the training data in each iteration. Typically, the data are either uniformly sampled or sequentially accessed. In this paper, we study how the data access pattern can affect model training. We propose an Active Sampler algorithm, where training data with more "learning value" to the model are sampled more frequently. The goal is to focus training effort on valuable instances near the classification boundaries, rather than evident cases, noisy data or outliers. We show the correctness and optimality of Active Sampler in theory, and then develop a light-weight vectorized implementation. Active Sampler is orthogonal to most approaches optimizing the efficiency of large-scale data analytics, and can be applied to most analytics models trained by stochastic gradient descent (SGD) algorithm. Extensive experimental evaluations demonstrate that Active Sampler can speed up the training procedure of SVM, feature selection and deep learning, for comparable training quality by 1.6-2.2x. | Active Sampler is also related to feature selection methods @cite_30 . Both of them assume that not all the training data are informative for model construction. The difference is that feature selection methods find the most informative columns in the training data, whereas Active Sampler finds the most informative rows. | {
"cite_N": [
"@cite_30"
],
"mid": [
"2099102906"
],
"abstract": [
"There is an arms race in the data management industry to support analytics, in which one critical step is feature selection, the process of selecting a feature set that will be used to build a statistical model. Analytics is one of the biggest topics in data management, and feature selection is widely regarded as the most critical step of analytics; thus, we argue that managing the feature selection process is a pressing data management challenge. We study this challenge by describing a feature-selection language and a supporting prototype system that builds on top of current industrial, R-integration layers. From our interactions with analysts, we learned that feature selection is an interactive, human-in-the-loop process, which means that feature selection workloads are rife with reuse opportunities. Thus, we study how to materialize portions of this computation using not only classical database materialization optimizations but also methods that have not previously been used in database optimization, including structural decomposition methods (like QR factorization) and warmstart. These new methods have no analog in traditional SQL systems, but they may be interesting for array and scientific database applications. On a diverse set of data sets and programs, we find that traditional database-style approaches that ignore these new opportunities are more than two orders of magnitude slower than an optimal plan in this new tradeoff space across multiple R-backends. Furthermore, we show that it is possible to build a simple cost-based optimizer to automatically select a near-optimal execution plan for feature selection."
]
} |
1512.04170 | 2269698647 | Goemans showed that any @math points @math in @math -dimensions satisfying @math triangle inequalities can be embedded into @math , with worst-case distortion at most @math . We extend this to the case when the points are approximately low-dimensional, albeit with average distortion guarantees. More precisely, we give an @math -to- @math embedding with average distortion at most the stable rank, @math , of the matrix @math consisting of columns @math . Average distortion embedding suffices for applications such as the Sparsest Cut problem. Our embedding gives an approximation algorithm for the problem on low threshold-rank graphs, where earlier work was inspired by Lasserre SDP hierarchy, and improves on a previous result of the first and third author [Deshpande and Venkat, In Proc. 17th APPROX, 2014]. Our ideas give a new perspective on @math metric, an alternate proof of Goemans' theorem, and a simpler proof for average distortion @math . Furthermore, while the seminal result of Arora, Rao and Vazirani giving a @math guarantee for Uniform Sparsest Cut can be seen to imply Goemans' theorem with average distortion, our work opens up the possibility of proving such a result directly via a Goemans'-like theorem. | We recall that the best known upper bound for the worst-case distortion of embedding @math is @math @cite_11 @cite_5 , while the best known lower bound is @math for worst-case distortion @cite_18 , and @math for average distortion @cite_7 . Guarantees for Sparsest Cut on low threshold-rank graphs were obtained using higher levels of the Lasserre hierarchy for SDPs @cite_12 @cite_13 . In contrast, a previous work of the first and third author @cite_17 showed weaker guarantees, but using just the basic SDP relaxation. 
Oveis Gharan and Trevisan @cite_6 also give a rounding algorithm for the basic SDP relaxation on low threshold-rank graphs, but require a stricter pre-condition on the eigenvalues ( @math ), and leverage it to give stronger @math -approximation guarantees. Their improvement comes from a new structure theorem on the SDP solutions of low threshold-rank graphs being clustered, and using the techniques in ARV for analysis. | {
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_17",
"@cite_6",
"@cite_5",
"@cite_13",
"@cite_12",
"@cite_11"
],
"mid": [
"2147668104",
"",
"1571696081",
"1818659651",
"2154876245",
"2952056253",
"2949393806",
""
],
"abstract": [
"We show that the Goemans-Linial semidefinite relaxation of the Sparsest Cut problem with general demands has integrality gap @math . This is achieved by exhibiting @math -point metric spaces of negative type whose @math distortion is @math . Our result is based on quantitative bounds on the rate of degeneration of Lipschitz maps from the Heisenberg group to @math when restricted to cosets of the center.",
"",
"Guruswami and Sinop give a @math approximation guarantee for the non-uniform Sparsest Cut problem by solving @math -level Lasserre semidefinite constraints, provided that the generalized eigenvalues of the Laplacians of the cost and demand graphs satisfy a certain spectral condition, namely, @math . Their key idea is a rounding technique that first maps a vector-valued solution to @math using appropriately scaled projections onto Lasserre vectors. In this paper, we show that similar projections and analysis can be obtained using only @math triangle inequality constraints. This results in a @math approximation guarantee for the non-uniform Sparsest Cut problem by adding only @math triangle inequality constraints to the usual semidefinite program, provided that the same spectral condition, @math , holds.",
"We prove a structure theorem for the feasible solutions of the Arora-Rao-Vazirani SDP relaxation on low threshold rank graphs and on small-set expanders. We show that if G is a graph of bounded threshold rank or a small-set expander, then an optimal solution of the Arora-Rao-Vazirani relaxation (or of any stronger version of it) can be almost entirely covered by a small number of balls of bounded radius. Then, we show that, if k is the number of balls, a solution of this form can be rounded with an approximation factor of O(sqrt log k ) in the case of the Arora-Rao-Vazirani relaxation, and with a constant-factor approximation in the case of the k-th round of the Sherali-Adams hierarchy starting at the Arora-Rao-Vazirani relaxation. The structure theorem and the rounding scheme combine to prove the following result, where G=(V,E) is a graph of expansion @math , @math is the k-th smallest eigenvalue of the normalized Laplacian of G, and @math is defined over collections of disjoint sets @math : if @math or @math , then the Arora-Rao-Vazirani relaxation can be rounded in polynomial time with an approximation ratio O(sqrt log k ). Stronger approximation guarantees are achievable in time exponential in k via relaxations in the Lasserre hierarchy. Guruswami and Sinop [GS13] and Arora, Ge and Sinop [AGS13] prove that a (1+eps)-approximation is achievable in time 2^O(k) poly(n) if either @math , or if @math , where SSE_s is the minimal expansion of sets of size at most s.",
"",
"We give an approximation algorithm for non-uniform sparsest cut with the following guarantee: For any @math , given cost and demand graphs with edge weights @math respectively, we can find a set @math with @math at most @math times the optimal non-uniform sparsest cut value, in time @math provided @math . Here @math is the @math 'th smallest generalized eigenvalue of the Laplacian matrices of cost and demand graphs; @math (resp. @math ) is the weight of edges crossing the @math cut in cost (resp. demand) graph and @math is the sparsity of the optimal cut. In words, we show that the non-uniform sparsest cut problem is easy when the generalized spectrum grows moderately fast. To the best of our knowledge, there were no results based on higher order spectra for non-uniform sparsest cut prior to this work. Even for uniform sparsest cut, the quantitative aspects of our result are somewhat stronger than previous methods. Similar results hold for other expansion measures like edge expansion, normalized cut, and conductance, with the @math 'th smallest eigenvalue of the normalized Laplacian playing the role of @math in the latter two cases. Our proof is based on an l1-embedding of vectors from a semi-definite program from the Lasserre hierarchy. The embedded vectors are then rounded to a cut using standard threshold rounding. We hope that the ideas connecting @math -embeddings to Lasserre SDPs will find other applications. Another aspect of the analysis is the adaptation of the column selection paradigm from our earlier work on rounding Lasserre SDPs [GS11] to pick a set of edges rather than vertices. This feature is important in order to extend the algorithms to non-uniform sparsest cut.",
"We show a new way to round vector solutions of semidefinite programming (SDP) hierarchies into integral solutions, based on a connection between these hierarchies and the spectrum of the input graph. We demonstrate the utility of our method by providing a new SDP-hierarchy based algorithm for constraint satisfaction problems with 2-variable constraints (2-CSP's). More concretely, we show for every 2-CSP instance I a rounding algorithm for r rounds of the Lasserre SDP hierarchy for I that obtains an integral solution that is at most @math worse than the relaxation's value (normalized to lie in [0,1]), as long as @math , where k is the alphabet size of I, @math , and @math denotes the number of eigenvalues larger than @math in the normalized adjacency matrix of the constraint graph of @math . In the case that @math is a Unique Games instance, the threshold @math is only a polynomial in @math , and is independent of the alphabet size. Also in this case, we can give a non-trivial bound on the number of rounds for such instances. In particular, our result yields an SDP-hierarchy based algorithm that matches the performance of the recent subexponential algorithm of Arora, Barak and Steurer (FOCS 2010) in the worst case, but runs faster on a natural family of instances, thus further restricting the set of possible hard instances for Khot's Unique Games Conjecture. Our algorithm actually requires less than the @math constraints specified by the @math level of the Lasserre hierarchy, and in some cases @math rounds of our program can be evaluated in time @math .",
""
]
} |
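The stable rank appearing in the row above has a one-line definition, sr(X) = ||X||_F^2 / ||X||_2^2, which is easy to compute from the singular values. A minimal sketch (illustrative only; the function name is invented):

```python
import numpy as np

def stable_rank(X):
    """Stable rank sr(X) = ||X||_F^2 / ||X||_2^2 (Frobenius over spectral).

    Always at most rank(X); it is small when the spectrum is dominated
    by a few large singular values, i.e., when the columns of X are
    approximately low-dimensional.
    """
    s = np.linalg.svd(X, compute_uv=False)   # singular values, descending
    return (s ** 2).sum() / s[0] ** 2
```

For an orthogonal matrix the stable rank equals the dimension, while any rank-1 matrix has stable rank exactly 1, matching the intuition that it measures "effective" dimensionality.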
1512.04170 | 2269698647 | Goemans showed that any @math points @math in @math -dimensions satisfying @math triangle inequalities can be embedded into @math , with worst-case distortion at most @math . We extend this to the case when the points are approximately low-dimensional, albeit with average distortion guarantees. More precisely, we give an @math -to- @math embedding with average distortion at most the stable rank, @math , of the matrix @math consisting of columns @math . Average distortion embedding suffices for applications such as the Sparsest Cut problem. Our embedding gives an approximation algorithm for the problem on low threshold-rank graphs, where earlier work was inspired by Lasserre SDP hierarchy, and improves on a previous result of the first and third author [Deshpande and Venkat, In Proc. 17th APPROX, 2014]. Our ideas give a new perspective on @math metric, an alternate proof of Goemans' theorem, and a simpler proof for average distortion @math . Furthermore, while the seminal result of Arora, Rao and Vazirani giving a @math guarantee for Uniform Sparsest Cut can be seen to imply Goemans' theorem with average distortion, our work opens up the possibility of proving such a result directly via a Goemans'-like theorem. | Kwok et al @cite_4 showed that a better analysis of Cheeger's inequality gives a @math approximation to the sparsest cut in @math -regular graphs. In particular, when @math , this gives a @math approximation for the problem. In this regime, our result gives a slightly better approximation: Assuming @math , if @math then @math yielding an @math approximation by thm:SparsestCut . Otherwise, if @math , then running a Cheeger rounding on the SDP solution would itself give a cut of sparsity @math . Thus, the better of our rounding algorithm and a Cheeger rounding on the SDP solution gives a @math -approximation to the problem. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2950405280"
],
"abstract": [
"Let @math be the minimum conductance of an undirected graph G, and let @math be the eigenvalues of the normalized Laplacian matrix of G. We prove that for any graph G and any @math , @math , and this performance guarantee is achieved by the spectral partitioning algorithm. This improves Cheeger's inequality, and the bound is optimal up to a constant factor for any k. Our result shows that the spectral partitioning algorithm is a constant factor approximation algorithm for finding a sparse cut if @math is a constant for some constant k. This provides some theoretical justification to its empirical performance in image segmentation and clustering problems. We extend the analysis to other graph partitioning problems, including multi-way partition, balanced separator, and maximum cut."
]
} |
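The Cheeger rounding referenced in the row above — take the second eigenvector of the normalized Laplacian and sweep a threshold, keeping the best prefix cut — can be sketched for small dense graphs as follows. This is an illustrative toy version, not code from the cited works.

```python
import numpy as np

def cheeger_sweep(W):
    """Sweep cut from the 2nd eigenvector of the normalized Laplacian.

    W: symmetric nonnegative adjacency (weight) matrix with no isolated
    vertices. Returns (best_set, conductance), where conductance(S) =
    cut(S, complement) / min(vol(S), vol(complement)).
    """
    deg = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(deg)
    L = np.eye(len(W)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L)               # eigenvalues in ascending order
    order = np.argsort(d_inv_sqrt * vecs[:, 1])  # embed by D^{-1/2} v_2 and sort
    total_vol = deg.sum()
    best, best_phi = None, np.inf
    vol, in_set = 0.0, np.zeros(len(W), dtype=bool)
    for k in order[:-1]:                         # try each proper prefix
        in_set[k] = True
        vol += deg[k]
        cut = W[in_set][:, ~in_set].sum()
        phi = cut / min(vol, total_vol - vol)
        if phi < best_phi:
            best_phi, best = phi, in_set.copy()
    return best, best_phi
```

On two triangles joined by a single edge, the sweep recovers the natural cut between the triangles, whose conductance is 1/7. The recomputation of the cut at every prefix is O(n^3) overall; a production version would maintain the cut incrementally.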
1512.04103 | 2292644546 | Visual attributes are great means of describing images or scenes, in a way both humans and computers understand. In order to establish a correspondence between images and to be able to compare the strength of each property between images, relative attributes were introduced. However, since their introduction, hand-crafted and engineered features were used to learn increasingly complex models for the problem of relative attributes. This limits the applicability of those methods for more realistic cases. We introduce a deep neural network architecture for the task of relative attribute prediction. A convolutional neural network (ConvNet) is adopted to learn the features by including an additional layer (ranking layer) that learns to rank the images based on these features. We adopt an appropriate ranking loss to train the whole network in an end-to-end fashion. Our proposed method outperforms the baseline and state-of-the-art methods in relative attribute prediction on various coarse and fine-grained datasets. Our qualitative results along with the visualization of the saliency maps show that the network is able to learn effective features for each specific attribute. Source code of the proposed method is available at this https URL | Neural networks have also been extended for learning-to-rank applications. One of the earliest networks for ranking was proposed by Burges et al. @cite_42 , known as RankNet. The underlying model in RankNet maps an input feature vector to a real number. The model is trained by presenting the network pairs of input training feature vectors with differing labels. Then, based on how they should be ranked, the underlying model parameters are updated. This model is used in different fields for ranking and retrieval applications, e.g., for personalized search @cite_22 or content-based image retrieval @cite_23 . 
In another work, Yao et al. @cite_29 proposed a ranking framework for first-person video summarization through recognizing video highlights. They incorporated both spatial and temporal streams through 2D and 3D CNNs to detect the video highlights. | {
"cite_N": [
"@cite_29",
"@cite_42",
"@cite_22",
"@cite_23"
],
"mid": [
"2467794422",
"2143331230",
"2105059961",
"2123229215"
],
"abstract": [
"The emergence of wearable devices such as portable cameras and smart glasses makes it possible to record life logging first-person videos. Browsing such long unstructured videos is time-consuming and tedious. This paper studies the discovery of moments of user's major or special interest (i.e., highlights) in a video, for generating the summarization of first-person videos. Specifically, we propose a novel pairwise deep ranking model that employs deep learning techniques to learn the relationship between highlight and non-highlight video segments. A two-stream network structure by representing video segments from complementary information on appearance of video frames and temporal dynamics across frames is developed for video highlight detection. Given a long personal video, equipped with the highlight detection model, a highlight score is assigned to each segment. The obtained highlight segments are applied for summarization in two ways: video time-lapse and video skimming. The former plays the highlight (non-highlight) segments at low (high) speed rates, while the latter assembles the sequence of segments with the highest scores. On 100 hours of first-person videos for 15 unique sports categories, our highlight detection achieves the improvement over the state-of-the-art RankSVM method by 10.5% in terms of accuracy. Moreover, our approaches produce video summary with better quality by a user study from 35 human subjects.",
"We investigate using gradient descent methods for learning ranking functions; we propose a simple probabilistic cost function, and we introduce RankNet, an implementation of these ideas using a neural network to model the underlying ranking function. We present test results on toy data and on data from a commercial internet search engine.",
"RankNet is one of the widely adopted ranking models for web search tasks. However, adapting a generic RankNet for personalized search is little studied. In this paper, we first continue-trained a variety of RankNets with different number of hidden layers and network structures over a previously trained global RankNet model, and observed that a deep neural network with five hidden layers gives the best performance. To further improve the performance of adaptation, we propose a set of novel methods categorized into two groups. In the first group, three methods are proposed to properly assess the usefulness of each adaptation instance and only leverage the most informative instances to adapt a user-specific RankNet model. These assessments are based on KL-divergence, click entropy or a heuristic to ignore top clicks in adaptation queries. In the second group, two methods are proposed to regularize the training of the neural network in RankNet: one of these methods regularize the error back-propagation via a truncated gradient approach, while the other method limits the depth of the back propagation when adapting the neural network. We empirically evaluate our approaches using a large-scale real-world data set. Experimental results exhibit that our methods all give significant improvements over a strong baseline ranking system, and the truncated gradient approach gives the best performance, significantly better than all others.",
"Learning effective feature representations and similarity measures are crucial to the retrieval performance of a content-based image retrieval (CBIR) system. Despite extensive research efforts for decades, it remains one of the most challenging open problems that considerably hinders the successes of real-world CBIR systems. The key challenge has been attributed to the well-known 'semantic gap' issue that exists between low-level image pixels captured by machines and high-level semantic concepts perceived by human. Among various techniques, machine learning has been actively investigated as a possible direction to bridge the semantic gap in the long term. Inspired by recent successes of deep learning techniques for computer vision and other applications, in this paper, we attempt to address an open problem: if deep learning is a hope for bridging the semantic gap in CBIR and how much improvements in CBIR tasks can be achieved by exploring the state-of-the-art deep learning techniques for learning feature representations and similarity measures. Specifically, we investigate a framework of deep learning with application to CBIR tasks with an extensive set of empirical studies by examining a state-of-the-art deep learning method (Convolutional Neural Networks) for CBIR tasks under varied settings. From our empirical studies, we find some encouraging results and summarize some important insights for future research."
]
} |
1512.04150 | 2950328304 | In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network to have remarkable localization ability despite being trained on image-level labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that can be applied to a variety of tasks. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1% top-5 error for object localization on ILSVRC 2014, which is remarkably close to the 34.2% top-5 error achieved by a fully supervised CNN approach. We demonstrate that our network is able to localize the discriminative image regions on a variety of tasks despite not being trained for them | Convolutional Neural Networks (CNNs) have led to impressive performance on a variety of visual recognition tasks @cite_2 @cite_9 @cite_23 . Recent work has shown that despite being trained on image-level labels, CNNs have the remarkable ability to localize objects @cite_5 @cite_28 @cite_33 @cite_4 . In this work, we show that, using the right architecture, we can generalize this ability beyond just localizing objects, to start identifying exactly which regions of an image are being used for discrimination. Here, we discuss the two lines of work most related to this paper: weakly-supervised object localization and visualizing the internal representation of CNNs. | {
"cite_N": [
"@cite_4",
"@cite_33",
"@cite_28",
"@cite_9",
"@cite_23",
"@cite_2",
"@cite_5"
],
"mid": [
"2161381512",
"2133324800",
"1994488211",
"2134670479",
"2102605133",
"",
"2951505120"
],
"abstract": [
"Convolutional neural networks (CNN) have recently shown outstanding image classification performance in the large-scale visual recognition challenge (ILSVRC2012). The success of CNNs is attributed to their ability to learn rich mid-level image representations as opposed to hand-designed low-level features used in other image classification methods. Learning CNNs, however, amounts to estimating millions of parameters and requires a very large number of annotated image samples. This property currently prevents application of CNNs to problems with limited training data. In this work we show how image representations learned with CNNs on large-scale annotated datasets can be efficiently transferred to other visual recognition tasks with limited amount of training data. We design a method to reuse layers trained on the ImageNet dataset to compute mid-level image representation for images in the PASCAL VOC dataset. We show that despite differences in image statistics and tasks in the two datasets, the transferred representation leads to significantly improved results for object and action classification, outperforming the current state of the art on Pascal VOC 2007 and 2012 datasets. We also show promising results for object and action localization.",
"Object category localization is a challenging problem in computer vision. Standard supervised training requires bounding box annotations of object instances. This time-consuming annotation process is sidestepped in weakly supervised learning. In this case, the supervised information is restricted to binary labels that indicate the absence presence of object instances in the image, without their locations. We follow a multiple-instance learning approach that iteratively trains the detector and infers the object locations in the positive training images. Our main contribution is a multi-fold multiple instance learning procedure, which prevents training from prematurely locking onto erroneous object locations. This procedure is particularly important when using high-dimensional representations, such as Fisher vectors and convolutional neural network features. We also propose a window refinement method, which improves the localization accuracy by incorporating an objectness prior. We present a detailed experimental evaluation using the PASCAL VOC 2007 dataset, which verifies the effectiveness of our approach.",
"Successful methods for visual object recognition typically rely on training datasets containing lots of richly annotated images. Detailed image annotation, e.g. by object bounding boxes, however, is both expensive and often subjective. We describe a weakly supervised convolutional neural network (CNN) for object classification that relies only on image-level labels, yet can learn from cluttered scenes containing multiple objects. We quantify its object classification and object location prediction performance on the Pascal VOC 2012 (20 object classes) and the much larger Microsoft COCO (80 object classes) datasets. We find that the network (i) outputs accurate image-level labels, (ii) predicts approximate locations (but not extents) of objects, and (iii) performs comparably to its fully-supervised counterparts using object bounding box annotation for training.",
"Scene recognition is one of the hallmark tasks of computer vision, allowing definition of a context for object recognition. Whereas the tremendous recent progress in object recognition tasks is due to the availability of large datasets like ImageNet and the rise of Convolutional Neural Networks (CNNs) for learning high-level features, performance at scene recognition has not attained the same level of success. This may be because current deep features trained from ImageNet are not competitive enough for such tasks. Here, we introduce a new scene-centric database called Places with over 7 million labeled pictures of scenes. We propose new methods to compare the density and diversity of image datasets and show that Places is as dense as other scene datasets and has more diversity. Using CNN, we learn deep features for scene recognition tasks, and establish new state-of-the-art results on several scene-centric datasets. A visualization of the CNN layers' responses allows us to show differences in the internal representations of object-centric and scene-centric networks.",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.",
"",
"This paper introduces self-taught object localization, a novel approach that leverages deep convolutional networks trained for whole-image recognition to localize objects in images without additional human supervision, i.e., without using any ground-truth bounding boxes for training. The key idea is to analyze the change in the recognition scores when artificially masking out different regions of the image. The masking out of a region that includes the object typically causes a significant drop in recognition score. This idea is embedded into an agglomerative clustering technique that generates self-taught localization hypotheses. Our object localization scheme outperforms existing proposal methods in both precision and recall for small number of subwindow proposals (e.g., on ILSVRC-2012 it produces a relative gain of 23.4% over the state-of-the-art for top-1 hypothesis). Furthermore, our experiments show that the annotations automatically-generated by our method can be used to train object detectors yielding recognition results remarkably close to those obtained by training on manually-annotated bounding boxes."
]
} |
1512.04150 | 2950328304 | In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network to have remarkable localization ability despite being trained on image-level labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that can be applied to a variety of tasks. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1% top-5 error for object localization on ILSVRC 2014, which is remarkably close to the 34.2% top-5 error achieved by a fully supervised CNN approach. We demonstrate that our network is able to localize the discriminative image regions on a variety of tasks despite not being trained for them | There have been a number of recent works exploring weakly-supervised object localization using CNNs @cite_5 @cite_28 @cite_33 @cite_4 . Bergamo et al. @cite_5 propose a technique for self-taught object localization involving masking out image regions to identify the regions causing the maximal activations in order to localize objects. Cinbis et al. @cite_33 combine multiple-instance learning with CNN features to localize objects. Oquab et al. @cite_4 propose a method for transferring mid-level image representations and show that some object localization can be achieved by evaluating the output of CNNs on multiple overlapping patches. However, the authors do not actually evaluate the localization ability. While these approaches yield promising results, they are not trained end-to-end and require multiple forward passes of a network to localize objects, making them difficult to scale to real-world datasets. Our approach is trained end-to-end and can localize objects in a single forward pass. | {
"cite_N": [
"@cite_28",
"@cite_5",
"@cite_4",
"@cite_33"
],
"mid": [
"1994488211",
"2951505120",
"2161381512",
"2133324800"
],
"abstract": [
"Successful methods for visual object recognition typically rely on training datasets containing lots of richly annotated images. Detailed image annotation, e.g. by object bounding boxes, however, is both expensive and often subjective. We describe a weakly supervised convolutional neural network (CNN) for object classification that relies only on image-level labels, yet can learn from cluttered scenes containing multiple objects. We quantify its object classification and object location prediction performance on the Pascal VOC 2012 (20 object classes) and the much larger Microsoft COCO (80 object classes) datasets. We find that the network (i) outputs accurate image-level labels, (ii) predicts approximate locations (but not extents) of objects, and (iii) performs comparably to its fully-supervised counterparts using object bounding box annotation for training.",
"This paper introduces self-taught object localization, a novel approach that leverages deep convolutional networks trained for whole-image recognition to localize objects in images without additional human supervision, i.e., without using any ground-truth bounding boxes for training. The key idea is to analyze the change in the recognition scores when artificially masking out different regions of the image. The masking out of a region that includes the object typically causes a significant drop in recognition score. This idea is embedded into an agglomerative clustering technique that generates self-taught localization hypotheses. Our object localization scheme outperforms existing proposal methods in both precision and recall for small number of subwindow proposals (e.g., on ILSVRC-2012 it produces a relative gain of 23.4% over the state-of-the-art for top-1 hypothesis). Furthermore, our experiments show that the annotations automatically-generated by our method can be used to train object detectors yielding recognition results remarkably close to those obtained by training on manually-annotated bounding boxes.",
"Convolutional neural networks (CNN) have recently shown outstanding image classification performance in the large-scale visual recognition challenge (ILSVRC2012). The success of CNNs is attributed to their ability to learn rich mid-level image representations as opposed to hand-designed low-level features used in other image classification methods. Learning CNNs, however, amounts to estimating millions of parameters and requires a very large number of annotated image samples. This property currently prevents application of CNNs to problems with limited training data. In this work we show how image representations learned with CNNs on large-scale annotated datasets can be efficiently transferred to other visual recognition tasks with limited amount of training data. We design a method to reuse layers trained on the ImageNet dataset to compute mid-level image representation for images in the PASCAL VOC dataset. We show that despite differences in image statistics and tasks in the two datasets, the transferred representation leads to significantly improved results for object and action classification, outperforming the current state of the art on Pascal VOC 2007 and 2012 datasets. We also show promising results for object and action localization.",
"Object category localization is a challenging problem in computer vision. Standard supervised training requires bounding box annotations of object instances. This time-consuming annotation process is sidestepped in weakly supervised learning. In this case, the supervised information is restricted to binary labels that indicate the absence presence of object instances in the image, without their locations. We follow a multiple-instance learning approach that iteratively trains the detector and infers the object locations in the positive training images. Our main contribution is a multi-fold multiple instance learning procedure, which prevents training from prematurely locking onto erroneous object locations. This procedure is particularly important when using high-dimensional representations, such as Fisher vectors and convolutional neural network features. We also propose a window refinement method, which improves the localization accuracy by incorporating an objectness prior. We present a detailed experimental evaluation using the PASCAL VOC 2007 dataset, which verifies the effectiveness of our approach."
]
} |
1512.04150 | 2950328304 | In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network to have remarkable localization ability despite being trained on image-level labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that can be applied to a variety of tasks. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1% top-5 error for object localization on ILSVRC 2014, which is remarkably close to the 34.2% top-5 error achieved by a fully supervised CNN approach. We demonstrate that our network is able to localize the discriminative image regions on a variety of tasks despite not being trained for them | The most similar approach to ours is the work based on global max pooling by Oquab et al. @cite_28 . Instead of global average pooling, they apply global max pooling to localize a point on objects. However, their localization is limited to a point lying in the boundary of the object rather than determining the full extent of the object. We believe that while the max and average functions are rather similar, the use of average pooling encourages the network to identify the complete extent of the object. The basic intuition behind this is that the loss for average pooling benefits when the network identifies discriminative regions of an object as compared to max pooling. This is explained in greater detail and verified experimentally in Sec. . Furthermore, unlike @cite_28 , we demonstrate that this localization ability is generic and can be observed even for problems that the network was not trained on. | {
"cite_N": [
"@cite_28"
],
"mid": [
"1994488211"
],
"abstract": [
"Successful methods for visual object recognition typically rely on training datasets containing lots of richly annotated images. Detailed image annotation, e.g. by object bounding boxes, however, is both expensive and often subjective. We describe a weakly supervised convolutional neural network (CNN) for object classification that relies only on image-level labels, yet can learn from cluttered scenes containing multiple objects. We quantify its object classification and object location prediction performance on the Pascal VOC 2012 (20 object classes) and the much larger Microsoft COCO (80 object classes) datasets. We find that the network (i) outputs accurate image-level labels, (ii) predicts approximate locations (but not extents) of objects, and (iii) performs comparably to its fully-supervised counterparts using object bounding box annotation for training."
]
} |
1512.04150 | 2950328304 | In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network to have remarkable localization ability despite being trained on image-level labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that can be applied to a variety of tasks. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1% top-5 error for object localization on ILSVRC 2014, which is remarkably close to the 34.2% top-5 error achieved by a fully supervised CNN approach. We demonstrate that our network is able to localize the discriminative image regions on a variety of tasks despite not being trained for them | There have been a number of recent works @cite_10 @cite_27 @cite_31 @cite_3 that visualize the internal representation learned by CNNs in an attempt to better understand their properties. Zeiler et al. @cite_10 use deconvolutional networks to visualize what patterns activate each unit. Zhou et al. @cite_3 show that CNNs learn object detectors while being trained to recognize scenes, and demonstrate that the same network can perform both scene recognition and object localization in a single forward-pass. Both of these works only analyze the convolutional layers, ignoring the fully-connected layers, thereby painting an incomplete picture of the full story. By removing the fully-connected layers and retaining most of the performance, we are able to understand our network from the beginning to the end. | {
"cite_N": [
"@cite_31",
"@cite_27",
"@cite_10",
"@cite_3"
],
"mid": [
"2273348943",
"2949987032",
"2952186574",
"1899185266"
],
"abstract": [
"Feature representations, both hand-designed and learned ones, are often hard to analyze and interpret, even when they are extracted from visual data. We propose a new approach to study image representations by inverting them with an up-convolutional neural network. We apply the method to shallow representations (HOG, SIFT, LBP), as well as to deep networks. For shallow representations our approach provides significantly better reconstructions than existing methods, revealing that there is surprisingly rich information contained in these features. Inverting a deep network trained on ImageNet provides several insights into the properties of the feature representation learned by the network. Most strikingly, the colors and the rough contours of an image can be reconstructed from activations in higher network layers and even from the predicted class probabilities.",
"Image representations, from SIFT and Bag of Visual Words to Convolutional Neural Networks (CNNs), are a crucial component of almost any image understanding system. Nevertheless, our understanding of them remains limited. In this paper we conduct a direct analysis of the visual information contained in representations by asking the following question: given an encoding of an image, to which extent is it possible to reconstruct the image itself? To answer this question we contribute a general framework to invert representations. We show that this method can invert representations such as HOG and SIFT more accurately than recent alternatives while being applicable to CNNs too. We then use this technique to study the inverse of recent state-of-the-art CNN image representations for the first time. Among our findings, we show that several layers in CNNs retain photographically accurate information about the image, with different degrees of geometric and photometric invariance.",
"Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also perform an ablation study to discover the performance contribution from different model layers. This enables us to find model architectures that outperform Krizhevsky al on the ImageNet classification benchmark. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets.",
"With the success of new computational architectures for visual processing, such as convolutional neural networks (CNN) and access to image databases with millions of labeled examples (e.g., ImageNet, Places), the state of the art in computer vision is advancing rapidly. One important factor for continued progress is to understand the representations that are learned by the inner layers of these deep architectures. Here we show that object detectors emerge from training CNNs to perform scene classification. As scenes are composed of objects, the CNN for scene classification automatically discovers meaningful objects detectors, representative of the learned scene categories. With object detectors emerging as a result of learning to recognize scenes, our work demonstrates that the same network can perform both scene recognition and object localization in a single forward-pass, without ever having been explicitly taught the notion of objects."
]
} |
1512.04150 | 2950328304 | In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network to have remarkable localization ability despite being trained on image-level labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that can be applied to a variety of tasks. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1% top-5 error for object localization on ILSVRC 2014, which is remarkably close to the 34.2% top-5 error achieved by a fully supervised CNN approach. We demonstrate that our network is able to localize the discriminative image regions on a variety of tasks despite not being trained for them | Mahendran et al. @cite_27 and Dosovitskiy et al. @cite_31 analyze the visual encoding of CNNs by inverting deep features at different layers. While these approaches can invert the fully-connected layers, they only show what information is being preserved in the deep features without highlighting the relative importance of this information. Unlike @cite_27 and @cite_31 , our approach can highlight exactly which regions of an image are important for discrimination. Overall, our approach provides another glimpse into the soul of CNNs. | {
"cite_N": [
"@cite_27",
"@cite_31"
],
"mid": [
"2949987032",
"2273348943"
],
"abstract": [
"Image representations, from SIFT and Bag of Visual Words to Convolutional Neural Networks (CNNs), are a crucial component of almost any image understanding system. Nevertheless, our understanding of them remains limited. In this paper we conduct a direct analysis of the visual information contained in representations by asking the following question: given an encoding of an image, to which extent is it possible to reconstruct the image itself? To answer this question we contribute a general framework to invert representations. We show that this method can invert representations such as HOG and SIFT more accurately than recent alternatives while being applicable to CNNs too. We then use this technique to study the inverse of recent state-of-the-art CNN image representations for the first time. Among our findings, we show that several layers in CNNs retain photographically accurate information about the image, with different degrees of geometric and photometric invariance.",
"Feature representations, both hand-designed and learned ones, are often hard to analyze and interpret, even when they are extracted from visual data. We propose a new approach to study image representations by inverting them with an up-convolutional neural network. We apply the method to shallow representations (HOG, SIFT, LBP), as well as to deep networks. For shallow representations our approach provides significantly better reconstructions than existing methods, revealing that there is surprisingly rich information contained in these features. Inverting a deep network trained on ImageNet provides several insights into the properties of the feature representation learned by the network. Most strikingly, the colors and the rough contours of an image can be reconstructed from activations in higher network layers and even from the predicted class probabilities."
]
} |
1512.04089 | 2377389173 | Full duplex communication promises a paradigm shift in wireless networks by allowing simultaneous packet transmission and reception within the same channel. While recent prototypes indicate the feasibility of this concept, there is a lack of rigorous theoretical development on how full duplex impacts medium access control (MAC) protocols in practical wireless networks. In this paper, we formulate the first analytical model of a CSMA/CA based full duplex MAC protocol for a wireless LAN network composed of an access point serving mobile clients. There are two major contributions of our work: First, our Markov chain-based approach results in closed form expressions of throughput for both the access point and the clients for this new class of networks. Second, our study provides quantitative insights on how much of the classical hidden terminal problem can be mitigated through full duplex. We specifically demonstrate that the improvement in the network throughput is up to 35-40 percent over the half duplex case. Our analytical models are verified through packet level simulations in ns-2. Our results also reveal the benefit of full duplex under varying network configuration parameters, such as number of hidden terminals, client density, and contention window size. | There has been some recent effort in characterizing FD's performance from a theoretical standpoint. In @cite_11 , the achievable throughput of full-duplex is characterized and compared against other channel access schemes such as MIMO and MU-MIMO. In @cite_24 , theoretical bounds for the full-duplex gain over half-duplex have been derived for various topologies as a function of the difference between the transmission and interference ranges. It has been shown that when these two ranges are equal for a randomly deployed ad hoc network, the asymptotic bound for the full-duplex gain is only 28%. However, none of these works considers a mathematical modeling of a real-world FD protocol.
Our work serves to bridge this gap: we use the CSMA/CA based MAC protocol of @cite_17 , implemented on physical hardware, as the base protocol, with a few additional modifications to its busy-tone broadcasting scenarios. In this work, we extend the model in @cite_22 , which is an accurate analytical model of a saturated IEEE 802.11 DCF network with no hidden terminals, to a full-duplex medium access network in the presence of hidden terminals. | {
"cite_N": [
"@cite_24",
"@cite_22",
"@cite_17",
"@cite_11"
],
"mid": [
"2014318378",
"2011228372",
"2128938148",
"2068889607"
],
"abstract": [
"Full-duplex has emerged as a new communication paradigm and is anticipated to double wireless capacity. Existing studies of full-duplex mainly focused on its PHY layer design, which enables bidirectional transmission between a single pair of nodes. In this paper, we establish an analytical framework to quantify the network-level capacity gain of full-duplex over half-duplex. Our analysis reveals that inter-link interference and spatial reuse substantially reduce full-duplex gain, rendering it well below 2 in common cases. More remarkably, the asymptotic gain approaches 1 when interference range approaches transmission range. Through a comparison between optimal half- and full-duplex MAC algorithms, we find that full-duplex’s gain is further reduced when it is applied to CSMA based wireless networks. Our analysis provides important guidelines for designing full-duplex networks. In particular, network-level mechanisms such as spatial reuse and asynchronous contention must be carefully addressed in full-duplex based protocols, in order to translate full-duplex’s PHY layer capacity gain into network throughput improvement.",
"In this paper, a unified analytical framework is established to study the stability, throughput, and delay performance of homogeneous buffered IEEE 802.11 networks with Distributed Coordination Function (DCF). Two steady-state operating points are characterized using the limiting probability of successful transmission of Head-of-Line (HOL) packets p given that the network is in unsaturated or saturated conditions. The analysis shows that a buffered IEEE 802.11 DCF network operates at the desired stable point p=pL if it is unsaturated. pL does not vary with backoff parameters, and a stable throughput can always be achieved at pL. If the network becomes saturated, in contrast, it operates at the undesired stable point p=pA, and a stable throughput can be achieved at pA if and only if the backoff parameters are properly selected. The stable regions of the backoff factor q and the initial backoff window size W are derived, and illustrated in cases of the basic access mechanism and the request-to-send/clear-to-send (RTS/CTS) mechanism. It is shown that the stable regions are significantly enlarged with the RTS/CTS mechanism, indicating that networks in the RTS/CTS mode are much more robust. Nevertheless, the delay analysis further reveals that lower access delay is incurred in the basic access mode for unsaturated networks. If the network becomes saturated, the delay performance deteriorates regardless of which mode is chosen. Both the first and the second moments of access delay at pA are sensitive to the backoff parameters, and are shown to be effectively reduced by enlarging the initial backoff window size W.",
"This paper presents a full duplex radio design using signal inversion and adaptive cancellation. Signal inversion uses a simple design based on a balanced unbalanced (Balun) transformer. This new design, unlike prior work, supports wideband and high power systems. In theory, this new design has no limitation on bandwidth or power. In practice, we find that the signal inversion technique alone can cancel at least 45dB across a 40MHz bandwidth. Further, combining signal inversion cancellation with cancellation in the digital domain can reduce self-interference by up to 73dB for a 10MHz OFDM signal. This paper also presents a full duplex medium access control (MAC) design and evaluates it using a testbed of 5 prototype full duplex nodes. Full duplex reduces packet losses due to hidden terminals by up to 88%. Full duplex also mitigates unfair channel allocation in AP-based networks, increasing fairness from 0.85 to 0.98 while improving downlink throughput by 110% and uplink throughput by 15%. These experimental results show that a redesign of the wireless network stack to exploit full duplex capability can result in significant improvements in network performance.",
"Recent breakthroughs in wireless communication show that by using new signal processing techniques, a wireless node is capable of transmitting and receiving simultaneously on the same frequency band by activating both of its RF chains, thus achieving full-duplex communication and potentially doubling the link throughput. However, with two sets of RF chains, one can build a half-duplex multi-input and multi-output (MIMO) system that achieves the same gain. While this gain is the same between a pair of nodes, the gains are unclear when multiple nodes are involved, as in a general network. The key reason is that MIMO and full-duplex have different interference patterns. A MIMO transmission blocks transmissions around its receiver and receptions around its transmitter. A full-duplex bi-directional transmission blocks any transmission around the two communicating nodes, but allows a reception on one RF chain. Thus, in a general network, the requirements for the two technologies could result in potentially different achievable throughput regions. This work investigates the achievable throughput performance of MIMO, full-duplex and their variants that allow simultaneous activation of two RF chains. It is the first work of its kind to precisely characterize the conditions under which these technologies outperform each other for a general network topology under a binary interference model. The analytical results in this paper are validated using software-defined radios."
]
} |
1512.04097 | 2284989774 | It is widely acknowledged that function symbols are an important feature in answer set programming, as they make modeling easier, increase the expressive power, and allow us to deal with infinite domains. The main issue with their introduction is that the evaluation of a program might not terminate and checking whether it terminates or not is undecidable. To cope with this problem, several classes of logic programs have been proposed where the use of function symbols is restricted but the program evaluation termination is guaranteed. Despite the significant body of work in this area, current approaches do not include many simple practical programs whose evaluation terminates. In this paper, we present the novel classes of rule-bounded and cycle-bounded programs, which overcome different limitations of current approaches by performing a more global analysis of how terms are propagated from the body to the head of rules. Results on the correctness, the complexity, and the expressivity of the proposed approach are provided. | In this paper, we consider logic programs with function symbols @cite_51 @cite_31 (recall that, as discussed in , our approach can be applied to programs with disjunction and negation by transforming them into positive normal programs), and thus all the excellent works above cannot be straightforwardly applied to our setting---for a discussion on this see, e.g., @cite_52 @cite_19 . In our context, @cite_52 introduced the class of , guaranteeing the existence of a finite set of stable models, each of finite size, for programs in the class. Since membership in the class is not decidable, decidable subclasses have been proposed: , , , , , , , and . An adornment-based approach that can be used in conjunction with the techniques above to detect more programs as finitely-ground has been proposed in @cite_20 . This paper refines and extends @cite_6 . | {
"cite_N": [
"@cite_52",
"@cite_6",
"@cite_19",
"@cite_31",
"@cite_51",
"@cite_20"
],
"mid": [
"1484110122",
"637774",
"2013130775",
"2076698873",
"1672891595",
"2085386651"
],
"abstract": [
"Disjunctive Logic Programming (DLP) under the answer set semantics, often referred to as Answer Set Programming (ASP), is a powerful formalism for knowledge representation and reasoning (KRR). The latest years witness an increasing effort for embedding functions in the context of ASP. Nevertheless, at present no ASP system allows for a reasonably unrestricted use of function terms. Functions are either required not to be recursive or subject to severe syntactic limitations, if allowed at all in ASP systems. In this work we formally define the new class of finitely-ground programs, allowing for a powerful (possibly recursive) use of function terms in the full ASP language with disjunction and negation. We demonstrate that finitely-ground programs have nice computational properties: (i) both brave and cautious reasoning are decidable, and (ii) answer sets of finitely-ground programs are computable. Moreover, the language is highly expressive, as any computable function can be encoded by a finitely-ground program. Due to the high expressiveness, membership in the class of finitely-ground program is clearly not decidable (we prove that it is semi-decidable). We single out also a subset of finitely-ground programs, called finite-domain programs, which are effectively recognizable, while keeping computability of both reasoning and answer set computation. We implement all results in DLP, further extending the language in order to support list and set terms, along with a rich library of built-in functions for their manipulation. The resulting ASP system is very powerful: any computable function can be encoded in a rich and fully declarative KRR language, ensuring termination on every finitely-ground program. In addition, termination is \"a priori\" guaranteed if the user asks for the finite-domain check.",
"Enriching answer set programming with function symbols makes modeling easier, increases the expressive power, and allows us to deal with infinite domains. However, this comes at a cost: common inference tasks become undecidable. To cope with this issue, recent research has focused on finding trade-offs between expressivity and decidability by identifying classes of logic programs that impose limitations on the use of function symbols but guarantee decidability of common inference tasks. Despite the significant body of work in this area, current approaches do not include many simple practical programs whose evaluation terminates. In this paper, we present the novel class of rule-bounded programs. While current techniques perform a limited analysis of how terms are propagated from an individual argument to another, our technique is able to perform a more global analysis, thereby overcoming several limitations of current approaches. We also present a further class of cycle-bounded programs where groups of rules are analyzed together. We show different results on the correctness and the expressivity of the proposed techniques.",
"Querying over disjunctive ASP with functions is a highly undecidable task in general. In this paper we focus on disjunctive logic programs with stratified negation and functions under the stable model semantics (ASPfs). We show that query answering in this setting is decidable, if the query is finitely recursive (ASPfsfr). Our proof yields also an effective method for query evaluation. It is done by extending the magic set technique to ASPfsfr. We show that the magic-set rewritten program is query equivalent to the original one (under both brave and cautious reasoning). Moreover, we prove that the rewritten program is also finitely ground, implying that it is decidable. Importantly, finitely ground programs are evaluable using existing ASP solvers, making the class of ASPfsfr queries usable in practice.",
"An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.",
"",
""
]
} |
1512.04097 | 2284989774 | It is widely acknowledged that function symbols are an important feature in answer set programming, as they make modeling easier, increase the expressive power, and allow us to deal with infinite domains. The main issue with their introduction is that the evaluation of a program might not terminate and checking whether it terminates or not is undecidable. To cope with this problem, several classes of logic programs have been proposed where the use of function symbols is restricted but the program evaluation termination is guaranteed. Despite the significant body of work in this area, current approaches do not include many simple practical programs whose evaluation terminates. In this paper, we present the novel classes of rule-bounded and cycle-bounded programs, which overcome different limitations of current approaches by performing a more global analysis of how terms are propagated from the body to the head of rules. Results on the correctness, the complexity, and the expressivity of the proposed approach are provided. | Similar concepts of "term size" have been considered to check termination of logic programs evaluated in a top-down fashion @cite_23 , to check local stratification of logic programs @cite_47 , in the context of partial evaluation to provide conditions for strong termination and quasi-termination @cite_32 @cite_2 , and in the context of tabled resolution @cite_24 @cite_36 . These approaches are geared to work under top-down evaluation, looking at how terms are propagated from the head to the body, while our approach is developed to work under bottom-up evaluation, looking at how terms are propagated from the body to the head. This gives rise to significant differences in how the program analysis is carried out, making one approach not applicable in the setting of the other. As a simple example, the rule @math leads to a non-terminating top-down evaluation, while it is completely harmless under bottom-up evaluation. | {
"cite_N": [
"@cite_36",
"@cite_32",
"@cite_24",
"@cite_23",
"@cite_2",
"@cite_47"
],
"mid": [
"2108205915",
"2034471164",
"2963676309",
"1586790674",
"2044152416",
"2078854256"
],
"abstract": [
"As evaluation methods for logic programs have become more sophisticated, the classes of programs for which termination can be guaranteed have expanded. From the perspective of answer set programs that include function symbols, recent work has identified classes for which grounding routines can terminate either on the entire program [ 2008] or on suitable queries [ 2009]. From the perspective of tabling, it has long been known that a tabling technique called subgoal abstraction provides good termination properties for definite programs [Tamaki and Sato 1986], and this result was recently extended to stratified programs via the class of bounded term-size programs [Riguzzi and Swift 2013]. In this article, we provide a formal definition of tabling with subgoal abstraction resulting in the SLGSA algorithm. Moreover, we discuss a declarative characterization of the queries and programs for which SLGSA terminates. We call this class strongly bounded term-size programs and show its equivalence to programs with finite well-founded models. For normal programs, strongly bounded term-size programs strictly includes the finitely ground programs of [2008]. SLGSA has an asymptotic complexity on strongly bounded term-size programs equal to the best known and produces a residual program that can be sent to an answer set programming system. Finally, we describe the implementation of subgoal abstraction within the SLG-WAM of XSB and provide performance results.",
"One of the most important challenges in partial evaluation is the design of automatic methods for ensuring the termination of specialisation. It is well known that the termination of partial evaluation can be ensured when the considered computations are quasi-terminating, i.e., when only finitely many different calls occur. In this work, we adapt the use of the so-called size-change graphs to logic programming and introduce new sufficient conditions for strong (i.e., w.r.t. any computation rule) termination and quasi-termination. To the best of our knowledge, this is the first sufficient condition for the strong quasi-termination of logic programs. The class of strongly quasi-terminating logic programs, however, is too restrictive. Therefore, we also introduce an annotation procedure that combines the information from size-change graphs and the output of a traditional binding-time analysis. Annotated programs can then be used to guarantee termination of partial evaluation. We finally illustrate the usefulness of our approach by designing a simple partial evaluator in which termination is always ensured offline (i.e., statically).",
"The distribution semantics is one of the most prominent approaches for the combination of logic programming and probability theory. Many languages follow this semantics, such as Independent Choice Logic, PRISM, pD, Logic Programs with Annotated Disjunctions (LPADs) and ProbLog. When a program contains function symbols, the distribution semantics is well-defined only if the set of explanations for a query is finite and so is each explanation. Well-definedness is usually either explicitly imposed or is achieved by severely limiting the class of allowed programs. In this paper we identify a larger class of programs for which the semantics is well-defined together with an efficient procedure for computing the probability of queries. Since LPADs offer the most general syntax, we present our results for them, but our results are applicable to all languages under the distribution semantics. We present the algorithm “Probabilistic Inference with Tabling and Answer subsumption” (PITA) that computes the probability of queries by transforming a probabilistic program into a normal program and then applying SLG resolution with answer subsumption. PITA has been implemented in XSB and tested on six domains: two with function symbols and four without. The execution times are compared with those of ProbLog, cplint and CVE. PITA was almost always able to solve larger problems in a shorter time, on domains with and without function symbols.",
"",
"One of the most important challenges in partial evaluation is the design of automatic methods for ensuring the termination of the process. In this work, we introduce sufficient conditions for the strong (i.e., independent of a computation rule) termination and quasi-termination of logic programs which rely on the construction of size-change graphs. We then present a fast binding-time analysis that takes the output of the termination analysis and annotates logic programs so that partial evaluation terminates. In contrast to previous approaches, the new binding-time analysis is conceptually simpler and considerably faster, scaling to medium-sized or even large examples.",
"Locally stratified programs are a significant class of logic programs with negation for which both declarative and fixpoint semantics are well defined. Unfortunately in most cases recognizing local stratification is so hard a task that it has been conjectured that the problem is undecidable. Indeed, in this paper a formal proof of this undecidability is presented and rather general sufficient conditions for local stratification are introduced."
]
} |
1512.04097 | 2284989774 | It is widely acknowledged that function symbols are an important feature in answer set programming, as they make modeling easier, increase the expressive power, and allow us to deal with infinite domains. The main issue with their introduction is that the evaluation of a program might not terminate and checking whether it terminates or not is undecidable. To cope with this problem, several classes of logic programs have been proposed where the use of function symbols is restricted but the program evaluation termination is guaranteed. Despite the significant body of work in this area, current approaches do not include many simple practical programs whose evaluation terminates. In this paper, we present the novel classes of rule-bounded and cycle-bounded programs, which overcome different limitations of current approaches by performing a more global analysis of how terms are propagated from the body to the head of rules. Results on the correctness, the complexity, and the expressivity of the proposed approach are provided. | We conclude by mentioning that our work is also related to research done on termination of the chase procedure, where existential rules are considered @cite_39 @cite_0 @cite_48 ; a survey on this topic can be found in @cite_25 . Indeed, sufficient conditions ensuring termination of the bottom-up evaluation of logic programs can be directly applied to existential rules. Specifically, one can analyze the logic program obtained from the skolemization of existential rules, where existentially quantified variables are replaced with complex terms @cite_39 . In fact, the evaluation of such a program behaves as the "semi-oblivious" chase @cite_39 , whose termination guarantees the termination of the standard chase @cite_42 @cite_49 . | {
"cite_N": [
"@cite_48",
"@cite_42",
"@cite_39",
"@cite_0",
"@cite_49",
"@cite_25"
],
"mid": [
"649907208",
"1573720519",
"2013096722",
"2029399717",
"2261579031",
"1969833016"
],
"abstract": [
"The Chase is a fixpoint algorithm enforcing satisfaction of data dependencies in databases. Its execution involves the insertion of tuples with possible null values and the changing of null values which can be made equal to constants or other null values. Since the chase fixpoint evaluation could be non-terminating, in recent years the problem known as chase termination has been investigated. It consists in the detection of sufficient conditions, derived from the structural analysis of dependencies, guaranteeing that the chase fixpoint terminates independently from the database instance. Several criteria introducing sufficient conditions for chase termination have been recently proposed [9, 8, 13, 12]. The aim of this paper is to present more general criteria and techniques for chase termination. We first present extensions of the well-known stratification conditions and introduce a new criterion, called local stratification (LS), which generalizes both super-weak acyclicity and stratification-based criteria (including the class of constraints which are inductively restricted). Next the paper presents a rewriting algorithm, whose structure is similar to the one presented in [10]; the algorithm takes as input a set of tuple generating dependencies and produces as output an equivalent set of dependencies and a boolean value stating whether a sort of cyclicity has been detected. The output set, obtained by adorning the input set of constraints, allows us to perform a more accurate analysis of the structural properties of constraints and to further enlarge the class of tuple generating dependencies for which chase termination is guaranteed, whereas the checking of acyclicity allows us to introduce the class of acyclic constraints (AC), which generalizes LS and guarantees chase termination.",
"In my PhD thesis I study the termination problem of the chase algorithm, a central tool in various database problems such as the constraint implication problem, conjunctive query optimization, rewriting queries using views, data exchange, and data integration.",
"Data-Exchange is the problem of creating new databases according to a high-level specification called a schema-mapping while preserving the information encoded in a source database. This paper introduces a notion of generalized schema-mapping that enriches the standard schema-mappings (as defined by ) with more expressive power. It then proposes a more general and arguably more intuitive notion of semantics that relies on three criteria: Soundness, Completeness and Laconicity (non-redundancy and minimal size). These semantics are shown to coincide precisely with the notion of cores of universal solutions in the framework of Fagin, Kolaitis and Popa. It is also well-defined and of interest for larger classes of schema-mappings and more expressive source databases (with null-values and equality constraints). After an investigation of the key properties of generalized schema-mappings and their semantics, a criterion called Termination of the Oblivious Chase (TOC) is identified that ensures polynomial data-complexity. This criterion strictly generalizes the previously known criterion of Weak-Acyclicity. To prove the tractability of TOC schema-mappings, a new polynomial time algorithm is provided that, unlike the algorithm of Gottlob and Nash from which it is inspired, does not rely on the syntactic property of Weak-Acyclicity. As the problem of deciding whether a Schema-mapping satisfies the TOC criterion is only recursively enumerable, a more restrictive criterion called Super-weak Acyclicity (SwA) is identified that can be decided in Polynomial-time while generalizing substantially the notion of Weak-Acyclicity.",
"Several database areas such as data exchange and integration share the problem of fixing database instance violations with respect to a set of constraints. The chase algorithm solves such violations by inserting tuples and setting the value of nulls. Unfortunately, the chase algorithm may not terminate and the problem of deciding whether the chase process terminates is undecidable. Recently there has been an increasing interest in the identification of sufficient structural properties of constraints which guarantee that the chase algorithm terminates [8, 10, 14, 15]. In this paper we propose an original technique which allows us to improve current conditions detecting chase termination. Our proposal consists in rewriting the original set of constraints Σ into an 'equivalent' set Σα and verifying the structural properties for chase termination on Σα. The rewriting of constraints allows us to recognize larger classes of constraints for which chase termination is guaranteed. In particular, we show that if Σ satisfies chase termination conditions T, then the rewritten set Σα satisfies T as well, but the vice versa is not true, that is there are significant classes of constraints for which Σα satisfies T and Σ does not.",
"The initial and basic role of the chase procedure was to test logical implication between sets of dependencies in order to determine equivalence of database instances known to satisfy a given set of dependencies and to determine query equivalence under database constraints. Recently the chase procedure has experienced a revival due to its application in data exchange. In this chapter we review the chase algorithm and its properties as well as its application in data exchange.",
"The chase has long been used as a central tool to analyze dependencies and their effect on queries. It has been applied to different relevant problems in database theory such as query optimization, query containment and equivalence, dependency implication, and database schema design. Recent years have seen a renewed interest in the chase as an important tool in several database applications, such as data exchange and integration, query answering in incomplete data, and many others. It is well known that the chase algorithm might be non-terminating and thus, in order for it to find practical applicability, it is crucial to identify cases where its termination is guaranteed. Another important aspect to consider when dealing with the chase is that it can introduce null values into the database, thereby leading to incomplete data. Thus, in several scenarios where the chase is used the problem of dealing with data dependencies and incomplete data arises. This book discusses fundamental issues concerning data dependencies and incomplete data with a particular focus on the chase and its applications in different database areas. We report recent results about the crucial issue of identifying conditions that guarantee the chase termination. Different database applications where the chase is a central tool are discussed with particular attention devoted to query answering in the presence of data dependencies and database schema design. Table of Contents: Introduction Relational Databases Incomplete Databases The Chase Algorithm Chase Termination Data Dependencies and Normal Forms Universal Repairs Chase and Database Applications"
]
} |
1512.04114 | 2209725382 | (Withdrawn) Collaborative security initiatives are increasingly often advocated to improve timeliness and effectiveness of threat mitigation. Among these, collaborative predictive blacklisting (CPB) aims to forecast attack sources based on alerts contributed by multiple organizations that might be targeted in similar ways. Alas, CPB proposals thus far have only focused on improving hit counts, but overlooked the impact of collaboration on false positives and false negatives. Moreover, sharing threat intelligence often prompts important privacy, confidentiality, and liability issues. In this paper, we first provide a comprehensive measurement analysis of two state-of-the-art CPB systems: one that uses a trusted central party to collect alerts [, Infocom'10] and a peer-to-peer one relying on controlled data sharing [, DIMVA'15], studying the impact of collaboration on both correct and incorrect predictions. Then, we present a novel privacy-friendly approach that significantly improves over previous work, achieving a better balance of true and false positive rates, while minimizing information disclosure. Finally, we present an extension that allows our system to scale to very large numbers of organizations. | Privacy In Collaborative Intrusion Detection. Porras and Shmatikov @cite_1 discuss privacy risks prompted by sharing security-related data and propose anonymization and sanitization techniques to address them. However, follow-up work @cite_7 @cite_35 demonstrates that these techniques make data less useful while still leaving it prone to de-anonymization. | {
"cite_N": [
"@cite_35",
"@cite_1",
"@cite_7"
],
"mid": [
"1968860775",
"2083961700",
"1805704774"
],
"abstract": [
"To intelligently create policies governing the anonymization of network logs, one must analyze the effects of anonymization on both the security and utility of sanitized data. In this paper, we focus on analyzing the utility of network traces post-anonymization. Any measure of utility is subjective to the type of analysis being performed. This work focuses on utility for the task of attack detection since attack detection is an important part of an incident responder's daily responsibilities. We employ a methodology we developed that analyzes the effect of anonymization on Intrusion Detection Systems (IDS), and we provide the first rigorous analysis of single field anonymization on IDS effectiveness. Through this work we can begin to answer the questions of whether the field affects anonymization more than the algorithm; which fields have a larger impact on utility; and which anonymization algorithms have a larger impact on utility.",
"Over the last several years, there has been an emerging interest in the development of wide-area data collection and analysis centers to help identify, track, and formulate responses to the ever-growing number of coordinated attacks and malware infections that plague computer networks worldwide. As large-scale network threats continue to evolve in sophistication and extend to widely deployed applications, we expect that interest in collaborative security monitoring infrastructures will continue to grow, because such attacks may not be easily diagnosed from a single point in the network. The intent of this position paper is not to argue the necessity of Internet-scale security data sharing infrastructures, as there is ample research [13, 48, 51, 54, 41, 47, 42] and operational examples [43, 17, 32, 53] that already make this case. Instead, we observe that these well-intended activities raise a unique set of risks and challenges. We outline some of the most salient issues faced by global network security centers, survey proposed defense mechanisms, and pose several research challenges to the computer security community. We hope that this position paper will serve as a stimulus to spur groundbreaking new research in protection and analysis technologies that can facilitate the collaborative sharing of network security data while keeping data contributors safe and secure.",
"Encouraging the release of network data is central to promoting sound network research practices, though the publication of this data can leak sensitive information about the publishing organization. To address this dilemma, several techniques have been suggested for anonymizing network data by obfuscating sensitive fields. In this paper, we present new techniques for inferring network topology and deanonymizing servers present in anonymized network data, using only the data itself and public information. Via analyses on three different network datasets, we quantify the effectiveness of our techniques, showing that they can uncover significant amounts of sensitive information. We also discuss prospects for preventing these deanonymization attacks."
]
} |
1512.04114 | 2209725382 | (Withdrawn) Collaborative security initiatives are increasingly often advocated to improve timeliness and effectiveness of threat mitigation. Among these, collaborative predictive blacklisting (CPB) aims to forecast attack sources based on alerts contributed by multiple organizations that might be targeted in similar ways. Alas, CPB proposals thus far have only focused on improving hit counts, but overlooked the impact of collaboration on false positives and false negatives. Moreover, sharing threat intelligence often prompts important privacy, confidentiality, and liability issues. In this paper, we first provide a comprehensive measurement analysis of two state-of-the-art CPB systems: one that uses a trusted central party to collect alerts [, Infocom'10] and a peer-to-peer one relying on controlled data sharing [, DIMVA'15], studying the impact of collaboration on both correct and incorrect predictions. Then, we present a novel privacy-friendly approach that significantly improves over previous work, achieving a better balance of true and false positive rates, while minimizing information disclosure. Finally, we present an extension that allows our system to scale to very large numbers of organizations. | @cite_8 introduce a few privacy-preserving protocols based on secure multiparty computation (MPC) for aggregation of network statistics. This is also explored in @cite_0 , where entities send encrypted data to a central repository that aggregates contributions. However, statistics only identify the most prolific attack sources and yield global models, which, as discussed in @cite_32 , miss a significant number of attacks and yield poor prediction performance. @cite_16 introduce an inference algorithm, BotGrep, to privately discover botnet hosts and links in network traffic, relying on Private Set Intersection @cite_15 . @cite_10 propose a game-theoretic model for software vulnerability sharing between two competing parties.
Their protocol relies on a private set operation (PSO) technique to limit the amount of information disclosed. However, it does not scale for more than two entities. Finally, @cite_19 focus on collaborative predictive blacklisting based on a pairwise controlled data sharing approach. They focus on identifying which metrics (e.g., number of common attacks) can be used to privately estimate the benefits of collaboration between two organizations, rather than proposing a deployable system. In fact, as discussed later, their pairwise approach does not scale to many organizations. | {
"cite_N": [
"@cite_8",
"@cite_32",
"@cite_0",
"@cite_19",
"@cite_15",
"@cite_16",
"@cite_10"
],
"mid": [
"25045116",
"2245533262",
"1597292924",
"2952479443",
"1485216661",
"1594972289",
""
],
"abstract": [
"Secure multiparty computation (MPC) allows joint privacy-preserving computations on data of multiple parties. Although MPC has been studied substantially, building solutions that are practical in terms of computation and communication cost is still a major challenge. In this paper, we investigate the practical usefulness of MPC for multi-domain network security and monitoring. We first optimize MPC comparison operations for processing high volume data in near real-time. We then design privacy-preserving protocols for event correlation and aggregation of network traffic statistics, such as addition of volume metrics, computation of feature entropy, and distinct item count. Optimizing performance of parallel invocations, we implement our protocols along with a complete set of basic operations in a library called SEPIA. We evaluate the running time and bandwidth requirements of our protocols in realistic settings on a local cluster as well as on PlanetLab and show that they work in near real-time for up to 140 input providers and 9 computation nodes. Compared to implementations using existing general-purpose MPC frameworks, our protocols are significantly faster, requiring, for example, 3 minutes for a task that takes 2 days with general-purpose frameworks. This improvement paves the way for new applications of MPC in the area of networking. Finally, we run SEPIA's protocols on real traffic traces of 17 networks and show how they provide new possibilities for distributed troubleshooting and early anomaly detection.",
"We introduce the Highly Predictive Blacklist (HPB) service, which is now integrated into the DShield.org portal [1]. The HPB service employs a radically different approach to blacklist formulation than that of contemporary blacklist formulation strategies. At the core of the system is a ranking scheme that measures how closely related an attack source is to a blacklist consumer, based on both the attacker's history and the most recent firewall log production pattern of the consumer. Our objective is to construct a customized blacklist per repository contributor that reflects the most probable set of addresses that may attack the contributor in the near future. We view this service as a first experimental step toward a new direction in high-quality blacklist generation.",
"Combining and analyzing data collected at multiple administrative locations is critical for a wide variety of applications, such as detecting malicious attacks or computing an accurate estimate of the popularity of Web sites. However, legitimate concerns about privacy often inhibit participation in collaborative data aggregation. In this paper, we design, implement, and evaluate a practical solution for privacy-preserving data aggregation (PDA) among a large number of participants. Scalability and efficiency is achieved through a \"semi-centralized\" architecture that divides responsibility between a proxy that obliviously blinds the client inputs and a database that aggregates values by (blinded) keywords and identifies those keywords whose values satisfy some evaluation function. Our solution leverages a novel cryptographic protocol that provably protects the privacy of both the participants and the keywords, provided that proxy and database do not collude, even if both parties may be individually malicious. Our prototype implementation can handle over a million suspect IP addresses per hour when deployed across only two quad-core servers, and its throughput scales linearly with additional computational resources.",
"Although sharing data across organizations is often advocated as a promising way to enhance cybersecurity, collaborative initiatives are rarely put into practice owing to confidentiality, trust, and liability challenges. In this paper, we investigate whether collaborative threat mitigation can be realized via a controlled data sharing approach, whereby organizations make informed decisions as to whether or not, and how much, to share. Using appropriate cryptographic tools, entities can estimate the benefits of collaboration and agree on what to share in a privacy-preserving way, without having to disclose their datasets. We focus on collaborative predictive blacklisting, i.e., forecasting attack sources based on one's logs and those contributed by other organizations. We study the impact of different sharing strategies by experimenting on a real-world dataset of two billion suspicious IP addresses collected from Dshield over two months. We find that controlled data sharing yields up to 105% accuracy improvement on average, while also reducing the false positive rate.",
"The constantly increasing dependence on anytime-anywhere availability of data and the commensurately increasing fear of losing privacy motivate the need for privacy-preserving techniques. One interesting and common problem occurs when two parties need to privately compute an intersection of their respective sets of data. In doing so, one or both parties must obtain the intersection (if one exists), while neither should learn anything about other set elements. Although prior work has yielded a number of effective and elegant Private Set Intersection (PSI) techniques, the quest for efficiency is still underway. This paper explores some PSI variations and constructs several secure protocols that are appreciably more efficient than the state-of-the-art.",
"A key feature that distinguishes modern botnets from earlier counterparts is their increasing use of structured overlay topologies. This lets them carry out sophisticated coordinated activities while being resilient to churn, but it can also be used as a point of detection. In this work, we devise techniques to localize botnet members based on the unique communication patterns arising from their overlay topologies used for command and control. Experimental results on synthetic topologies embedded within Internet traffic traces from an ISP's backbone network indicate that our techniques (i) can localize the majority of bots with low false positive rate, and (ii) are resilient to incomplete visibility arising from partial deployment of monitoring systems and measurement inaccuracies from dynamics of background traffic.",
""
]
} |
1512.03953 | 2950154716 | k-medoids algorithm is a partitional, centroid-based clustering algorithm which uses pairwise distances of data points and tries to directly decompose the dataset with @math points into a set of @math disjoint clusters. However, k-medoids itself requires all distances between data points that are not so easy to get in many applications. In this paper, we introduce a new method which requires only a small proportion of the whole set of distances and makes an effort to estimate an upper-bound for unknown distances using the inquired ones. This algorithm makes use of the triangle inequality to calculate an upper-bound estimation of the unknown distances. Our method is built upon a recursive approach to cluster objects and to choose some points actively from each bunch of data and acquire the distances between these prominent points from oracle. Experimental results show that the proposed method using only a small subset of the distances can find proper clustering on many real-world and synthetic datasets. | The existing active clustering methods can be categorized into constraint-based and distance-based ones @cite_23 . In most of the constraint-based methods, must-link and cannot-link constraints on pairs of data points, which indicate that these pairs must be in the same cluster or in different clusters, are inquired. Some constraint-based methods for active clustering have been proposed in @cite_23 @cite_4 @cite_9 @cite_14 @cite_17 @cite_7 @cite_16 . In distance-based methods, the response to a query on a pair of data points is the distance of that pair according to an objective function. Distance-based methods for active clustering have recently received attention in @cite_18 @cite_24 @cite_5 @cite_26 @cite_3 @cite_22 . | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_4",
"@cite_26",
"@cite_7",
"@cite_22",
"@cite_9",
"@cite_3",
"@cite_24",
"@cite_23",
"@cite_5",
"@cite_16",
"@cite_17"
],
"mid": [
"",
"2134342006",
"1595562640",
"813010328",
"2153839362",
"2097979671",
"",
"2131365923",
"",
"1983619063",
"2951660437",
"2134089414",
"2100346396"
],
"abstract": [
"",
"The technique of spectral clustering is widely used to segment a range of data from graphs to images. Our work marks a natural progression of spectral clustering from the original passive unsupervised formulation to our active semi-supervised formulation. We follow the widely used area of constrained clustering and allow supervision in the form of pairwise relations between two nodes: Must-Link and Cannot-Link. Unlike most previous constrained clustering work, our constraints are specified incrementally by querying an oracle (domain expert). Since in practice, each query comes with a cost, our goal is to maximally improve the result with as few queries as possible. The advantages of our approach include: 1) it is principled by querying the constraints which maximally reduce the expected error, 2) it can incorporate both hard and soft constraints which are prevalent in practice. We empirically show that our method significantly outperforms the baseline approach, namely constrained spectral clustering with randomly selected constraints, on UCI benchmark data sets.",
"Semi-supervised clustering seeks to augment traditional clustering methods by incorporating side information provided via human expertise in order to increase the semantic meaningfulness of the resulting clusters. However, most current methods are passive in the sense that the side information is provided beforehand and selected randomly. This may require a large number of constraints, some of which could be redundant, unnecessary, or even detrimental to the clustering results. Thus in order to scale such semi-supervised algorithms to larger problems it is desirable to pursue an active clustering method---i.e. an algorithm that maximizes the effectiveness of the available human labor by only requesting human input where it will have the greatest impact. Here, we propose a novel online framework for active semi-supervised spectral clustering that selects pairwise constraints as clustering proceeds, based on the principle of uncertainty reduction. Using a first-order Taylor expansion, we decompose the expected uncertainty reduction problem into a gradient and a step-scale, computed via an application of matrix perturbation theory and cluster-assignment entropy, respectively. The resulting model is used to estimate the uncertainty reduction potential of each sample in the dataset. We then present the human user with pairwise queries with respect to only the best candidate sample. We evaluate our method using three different image datasets (faces, leaves and dogs), a set of common UCI machine learning datasets and a gene dataset. The results validate our decomposition formulation and show that our method is consistently superior to existing state-of-the-art techniques, as well as being robust to noise and to unknown numbers of clusters.",
"Spectral clustering is a modern and well known method for performing data clustering. However, it depends on the availability of a similarity matrix, which in many applications can be non-trivial to obtain. In this paper, we focus on the problem of performing spectral clustering under a budget constraint, where there is a limit on the number of entries which can be queried from the similarity matrix. We propose two algorithms for this problem, and study them theoretically and experimentally. These algorithms allow a tradeoff between computational efficiency and actual performance, and are also relevant for the problem of speeding up standard spectral clustering.",
"Semi-supervised clustering uses a small amount of supervised data to aid unsupervised learning. One typical approach specifies a limited number of must-link and cannotlink constraints between pairs of examples. This paper presents a pairwise constrained clustering framework and a new method for actively selecting informative pairwise constraints to get improved clustering performance. The clustering and active learning methods are both easily scalable to large datasets, and can handle very high dimensional data. Experimental and theoretical results confirm that this active querying of pairwise constraints significantly improves the accuracy of clustering when given a relatively small amount of supervision.",
"Given a point set S and an unknown metric d on S, we study the problem of efficiently partitioning S into k clusters while querying few distances between the points. In our model we assume that we have access to one versus all queries that given a point s ∈ S return the distances between s and all other points. We show that given a natural assumption about the structure of the instance, we can efficiently find an accurate clustering using only O(k) distance queries. Our algorithm uses an active selection strategy to choose a small set of points that we call landmarks, and considers only the distances between landmarks and other points to produce a clustering. We use our procedure to cluster proteins by sequence similarity. This setting nicely fits our model because we can use a fast sequence database search program to query a sequence against an entire data set. We conduct an empirical study that shows that even though we query a small fraction of the distances between the points, we produce clusterings that are close to a desired clustering given by manual classification.",
"",
"Spectral clustering is a widely used method for organizing data that only relies on pairwise similarity measurements. This makes its application to non-vectorial data straight-forward in principle, as long as all pairwise similarities are available. However, in recent years, numerous examples have emerged in which the cost of assessing similarities is substantial or prohibitive. We propose an active learning algorithm for spectral clustering that incrementally measures only those similarities that are most likely to remove uncertainty in an intermediate clustering solution. In many applications, similarities are not only costly to compute, but also noisy. We extend our algorithm to maintain running estimates of the true similarities, as well as estimates of their accuracy. Using this information, the algorithm updates only those estimates which are relatively inaccurate and whose update would most likely remove clustering uncertainty. We compare our methods on several datasets, including a realistic example where similarities are expensive and noisy. The results show a significant improvement in performance compared to the alternatives.",
"",
"In this article, we address the problem of automatic constraint selection to improve the performance of constraint-based clustering algorithms. To this aim we propose a novel active learning algorithm that relies on a k-nearest neighbors graph and a new constraint utility function to generate queries to the human expert. This mechanism is paired with propagation and refinement processes that limit the number of constraint candidates and introduce a minimal diversity in the proposed constraints. Existing constraint selection heuristics are based on a random selection or on a min-max criterion and thus are either inefficient or more adapted to spherical clusters. Contrary to these approaches, our method is designed to be beneficial for all constraint-based clustering algorithms. Comparative experiments conducted on real datasets and with two distinct representative constraint-based clustering algorithms show that our approach significantly improves clustering quality while minimizing the number of human expert solicitations.",
"Advances in sensing technologies and the growth of the internet have resulted in an explosion in the size of modern datasets, while storage and processing power continue to lag behind. This motivates the need for algorithms that are efficient, both in terms of the number of measurements needed and running time. To combat the challenges associated with large datasets, we propose a general framework for active hierarchical clustering that repeatedly runs an off-the-shelf clustering algorithm on small subsets of the data and comes with guarantees on performance, measurement complexity and runtime complexity. We instantiate this framework with a simple spectral clustering algorithm and provide concrete results on its performance, showing that, under some assumptions, this algorithm recovers all clusters of size Ω(log n) using O(n log^2 n) similarities and runs in O(n log^3 n) time for a dataset of n objects. Through extensive experimentation we also demonstrate that this framework is practically alluring.",
"Clustering is traditionally viewed as an unsupervised method for data analysis. However, in some cases information about the problem domain is available in addition to the data instances themselves. In this paper, we demonstrate how the popular k-means clustering algorithm can be profitably modified to make use of this information. In experiments with artificial constraints on six data sets, we observe improvements in clustering accuracy. We also apply this method to the real-world problem of automatically detecting road lanes from GPS data and observe dramatic increases in performance.",
"We propose a method of clustering images that combines algorithmic and human input. An algorithm provides us with pairwise image similarities. We then actively obtain selected, more accurate pairwise similarities from humans. A novel method is developed to choose the most useful pairs to show a person, obtaining constraints that improve clustering. In a clustering assignment, elements in each data pair are either in the same cluster or in different clusters. We simulate inverting these pairwise relations and see how that affects the overall clustering. We choose a pair that maximizes the expected change in the clustering. The proposed algorithm has high time complexity, so we also propose a version of this algorithm that is much faster and exactly replicates our original algorithm. We further improve run-time by adding two heuristics, and show that these do not significantly impact the effectiveness of our method. We have run experiments in three different domains, namely leaf, face and scene images, and show that the proposed method improves clustering performance significantly."
]
} |
1512.03953 | 2950154716 | k-medoids algorithm is a partitional, centroid-based clustering algorithm which uses pairwise distances of data points and tries to directly decompose the dataset with @math points into a set of @math disjoint clusters. However, k-medoids itself requires all distances between data points that are not so easy to get in many applications. In this paper, we introduce a new method which requires only a small proportion of the whole set of distances and makes an effort to estimate an upper-bound for unknown distances using the inquired ones. This algorithm makes use of the triangle inequality to calculate an upper-bound estimation of the unknown distances. Our method is built upon a recursive approach to cluster objects and to choose some points actively from each bunch of data and acquire the distances between these prominent points from oracle. Experimental results show that the proposed method using only a small subset of the distances can find proper clustering on many real-world and synthetic datasets. | In @cite_26 @cite_3 , distance-based algorithms are presented for active spectral clustering in which a perturbation theory approach is used to select queries. A constraint-based algorithm has also been presented in @cite_14 for active spectral clustering that uses an approach based on maximum expected error reduction to select queries. | {
"cite_N": [
"@cite_14",
"@cite_26",
"@cite_3"
],
"mid": [
"2134342006",
"813010328",
"2131365923"
],
"abstract": [
"The technique of spectral clustering is widely used to segment a range of data from graphs to images. Our work marks a natural progression of spectral clustering from the original passive unsupervised formulation to our active semi-supervised formulation. We follow the widely used area of constrained clustering and allow supervision in the form of pairwise relations between two nodes: Must-Link and Cannot-Link. Unlike most previous constrained clustering work, our constraints are specified incrementally by querying an oracle (domain expert). Since in practice, each query comes with a cost, our goal is to maximally improve the result with as few queries as possible. The advantages of our approach include: 1) it is principled by querying the constraints which maximally reduce the expected error, 2) it can incorporate both hard and soft constraints which are prevalent in practice. We empirically show that our method significantly outperforms the baseline approach, namely constrained spectral clustering with randomly selected constraints, on UCI benchmark data sets.",
"Spectral clustering is a modern and well known method for performing data clustering. However, it depends on the availability of a similarity matrix, which in many applications can be non-trivial to obtain. In this paper, we focus on the problem of performing spectral clustering under a budget constraint, where there is a limit on the number of entries which can be queried from the similarity matrix. We propose two algorithms for this problem, and study them theoretically and experimentally. These algorithms allow a tradeoff between computational efficiency and actual performance, and are also relevant for the problem of speeding up standard spectral clustering.",
"Spectral clustering is a widely used method for organizing data that only relies on pairwise similarity measurements. This makes its application to non-vectorial data straight-forward in principle, as long as all pairwise similarities are available. However, in recent years, numerous examples have emerged in which the cost of assessing similarities is substantial or prohibitive. We propose an active learning algorithm for spectral clustering that incrementally measures only those similarities that are most likely to remove uncertainty in an intermediate clustering solution. In many applications, similarities are not only costly to compute, but also noisy. We extend our algorithm to maintain running estimates of the true similarities, as well as estimates of their accuracy. Using this information, the algorithm updates only those estimates which are relatively inaccurate and whose update would most likely remove clustering uncertainty. We compare our methods on several datasets, including a realistic example where similarities are expensive and noisy. The results show a significant improvement in performance compared to the alternatives."
]
} |
1512.03953 | 2950154716 | k-medoids algorithm is a partitional, centroid-based clustering algorithm which uses pairwise distances of data points and tries to directly decompose the dataset with @math points into a set of @math disjoint clusters. However, k-medoids itself requires all distances between data points that are not so easy to get in many applications. In this paper, we introduce a new method which requires only a small proportion of the whole set of distances and makes an effort to estimate an upper-bound for unknown distances using the inquired ones. This algorithm makes use of the triangle inequality to calculate an upper-bound estimation of the unknown distances. Our method is built upon a recursive approach to cluster objects and to choose some points actively from each bunch of data and acquire the distances between these prominent points from oracle. Experimental results show that the proposed method using only a small subset of the distances can find proper clustering on many real-world and synthetic datasets. | An active clustering method for k-median clustering has also been proposed in @cite_22 . This method selects some points as landmarks and queries the distances between these landmarks and all the other data points. Finally, k-median clustering is performed using these distances. | {
"cite_N": [
"@cite_22"
],
"mid": [
"2097979671"
],
"abstract": [
"Given a point set S and an unknown metric d on S, we study the problem of efficiently partitioning S into k clusters while querying few distances between the points. In our model we assume that we have access to one versus all queries that given a point s ∈ S return the distances between s and all other points. We show that given a natural assumption about the structure of the instance, we can efficiently find an accurate clustering using only O(k) distance queries. Our algorithm uses an active selection strategy to choose a small set of points that we call landmarks, and considers only the distances between landmarks and other points to produce a clustering. We use our procedure to cluster proteins by sequence similarity. This setting nicely fits our model because we can use a fast sequence database search program to query a sequence against an entire data set. We conduct an empirical study that shows that even though we query a small fraction of the distances between the points, we produce clusterings that are close to a desired clustering given by manual classification."
]
} |
1512.04017 | 2952572865 | Logit-response dynamics (Alos-Ferrer and Netzer, Games and Economic Behavior 2010) are a rich and natural class of noisy best-response dynamics. In this work we revise the price of anarchy and the price of stability by considering the quality of long-run equilibria in these dynamics. Our results show that prior studies on simpler dynamics of this type can strongly depend on a synchronous schedule of the players' moves. In particular, a small noise by itself is not enough to improve the quality of equilibria as soon as other very natural schedules are used. | Stochastic stability is the tool to show that in the coordination game players select the risk dominant strategy (see @cite_11 for discussion and references). Recent works have pointed out that in general revision processes (including independent-learning) logit-response dynamics do converge to Nash equilibria in potential games ( discuss this issue explicitly). | {
"cite_N": [
"@cite_11"
],
"mid": [
"2172106845"
],
"abstract": [
"We develop a characterization of stochastically stable states for the logit-response learning dynamics in games, with arbitrary specification of revision opportunities. The result allows us to show convergence to the set of Nash equilibria in the class of best-response potential games and the failure of the dynamics to select potential maximizers beyond the class of exact potential games. We also study to which extent equilibrium selection is robust to the specification of revision opportunities. Our techniques can be extended and applied to a wide class of learning dynamics in games."
]
} |
1512.03423 | 2270557434 | Modern smart phones are becoming helpful in the areas of Internet-Of-Things (IoT) and ambient health intelligence. By learning data from several mobile sensors, we detect nearness of the human body to a mobile device in a three-dimensional space with no physical contact with the device for non-invasive health diagnostics. We show that the human body generates wave patterns that interact with other naturally occurring ambient signals that could be measured by mobile sensors, such as, temperature, humidity, magnetic field, acceleration, gravity, and light. This interaction consequentially alters the patterns of the naturally occurring signals, and thus, exhibits characteristics that could be learned to predict the nearness of the human body to a mobile device, hence provide diagnostic information for medical practitioners. Our prediction technique achieved 88.75% accuracy and 88.3% specificity. | Most related works have performed activity recognition for healthy lifestyle and fitness using the acceleration sensor @cite_19 @cite_8 . For example, @cite_19 performed activity recognition using the acceleration sensor on Android based mobile device. Several activities including walking, jogging, standing, sitting, and ascending or descending stairs were predicted having learned a set of transformed features. Acceleration data was transformed into 43 learn-able features. Multilayer perceptron has the best predictive accuracy of 91.7%. More recently, proximity detection has been introduced for creating several non-invasive health diagnostic tool for the health industry using Body Area Network (BAN) techniques @cite_15 . Our work falls in this category, and more importantly, novel in the sense that we do not use only one sensor point or multiple devices in our prediction. We combined multiple sensors' data on one mobile device alone to improve the proximity prediction accuracy. | {
"cite_N": [
"@cite_19",
"@cite_15",
"@cite_8"
],
"mid": [
"2017634428",
"1973675857",
"1575873342"
],
"abstract": [
"Mobile devices are becoming increasingly sophisticated and the latest generation of smart cell phones now incorporates many diverse and powerful sensors. These sensors include GPS sensors, vision sensors (i.e., cameras), audio sensors (i.e., microphones), light sensors, temperature sensors, direction sensors (i.e., magnetic compasses), and acceleration sensors (i.e., accelerometers). The availability of these sensors in mass-marketed communication devices creates exciting new opportunities for data mining and data mining applications. In this paper we describe and evaluate a system that uses phone-based accelerometers to perform activity recognition, a task which involves identifying the physical activity a user is performing. To implement our system we collected labeled accelerometer data from twenty-nine users as they performed daily activities such as walking, jogging, climbing stairs, sitting, and standing, and then aggregated this time series data into examples that summarize the user activity over 10- second intervals. We then used the resulting training data to induce a predictive model for activity recognition. This work is significant because the activity recognition model permits us to gain useful knowledge about the habits of millions of users passively---just by having them carry cell phones in their pockets. Our work has a wide range of applications, including automatic customization of the mobile device's behavior based upon a user's activity (e.g., sending calls directly to voicemail if a user is jogging) and generating a daily weekly activity profile to determine if a user (perhaps an obese child) is performing a healthy amount of exercise.",
"Advances in wireless communication technologies, such as wearable and implantable biosensors, along with recent developments in the embedded computing area are enabling the design, development, and implementation of body area networks. This class of networks is paving the way for the deployment of innovative healthcare monitoring applications. In the past few years, much of the research in the area of body area networks has focused on issues related to wireless sensor designs, sensor miniaturization, low-power sensor circuitry, signal processing, and communications protocols. In this paper, we present an overview of body area networks, and a discussion of BAN communications types and their related issues. We provide a detailed investigation of sensor devices, physical layer, data link layer, and radio technology aspects of BAN research. We also present a taxonomy of BAN projects that have been introduced proposed to date. Finally, we highlight some of the design challenges and open issues that still need to be addressed to make BANs truly ubiquitous for a wide range of applications.",
"Real-time monitoring of human movements can be easily envisaged as a useful tool for many purposes and future applications. This paper presents the implementation of a real-time classification system for some basic human movements using a conventional mobile phone equipped with an accelerometer. The aim of this study was to check the present capacity of conventional mobile phones to execute in real-time all the necessary pattern recognition algorithms to classify the corresponding human movements. No server processing data is involved in this approach, so the human monitoring is completely decentralized and only an additional software will be required to remotely report the human monitoring. The feasibility of this approach opens a new range of opportunities to develop new applications at a reasonable low-cost."
]
} |
1512.03440 | 2294775638 | This paper, by comparing three potential energy trading systems, studies the feasibility of integrating a community energy storage (CES) device with consumer-owned photovoltaic (PV) systems for demand-side management of a residential neighborhood area network. We consider a fully-competitive CES operator in a non-cooperative Stackelberg game, a benevolent CES operator that has socially favorable regulations with competitive users, and a centralized cooperative CES operator that minimizes the total community energy cost. The former two game-theoretic systems consider that the CES operator first maximizes their revenue by setting a price signal and trading energy with the grid. Then the users with PV panels play a non-cooperative repeated game following the actions of the CES operator to trade energy with the CES device and the grid to minimize energy costs. The centralized CES operator cooperates with the users to minimize the total community energy cost without appropriate incentives. The non-cooperative Stackelberg game with the fully-competitive CES operator has a unique Stackelberg equilibrium at which the CES operator maximizes revenue and users obtain unique Pareto-optimal Nash equilibrium CES energy trading strategies. Extensive simulations show that the fully-competitive CES model gives the best trade-off of operating environment between the CES operator and the users. | There is a rich literature on demand-side management that exploits user demand flexibility to achieve economic power system improvements. For example, dynamic pricing for consumption scheduling @cite_13 , load shifting methods @cite_20 @cite_28 @cite_11 , and incentive-based demand response programs @cite_6 @cite_35 have been investigated. We study demand-side management with a CES device to utilize household-distributed PV power generation without modifying users' energy demands. | {
"cite_N": [
"@cite_35",
"@cite_28",
"@cite_6",
"@cite_13",
"@cite_20",
"@cite_11"
],
"mid": [
"2001386890",
"2148933945",
"2071361493",
"1984560471",
"2149699660",
"2068060907"
],
"abstract": [
"In this paper, we study Demand Response (DR) problematics for different levels of information sharing in a smart grid. We propose a dynamic pricing scheme incentivizing consumers to achieve an aggregate load profile suitable for utilities, and study how close they can get to an ideal flat profile depending on how much information they share. When customers can share all their load profiles, we provide a distributed algorithm, set up as a cooperative game between consumers, which significantly reduces the total cost and peak-to-average ratio (PAR) of the system. In the absence of full information sharing (for reasons of privacy), when users have only access to the instantaneous total load on the grid, we provide distributed stochastic strategies that successfully exploit this information to improve the overall load profile. Simulation results confirm that these solutions efficiently benefit from information sharing within the grid and reduce both the total cost and PAR.",
"Outdoor temperature, thermal comfort level of consumers and payback load effect constrain direct load control (DLC) of air-conditioning loads and limit the DLC schedule. Since the constraints in direct air-conditioning load control are characteristics of air-conditioning loads, this work presents a novel group-DLC program with a least enthalpy estimation (LEE)-based thermal comfort controller for air-conditioning systems to control air-conditioning systems and eliminate DLC problems simultaneously. The g-DLC controller is the threshold for problems between air-conditioning units and the load management program, and arranges the DLC schedule for all air-conditioning units. The LEE-based thermal comfort controller can maintain the thermal comfort level within a reasonable range and prolong off-shift time of the DLC program, thereby increasing shedding load. It can also mitigate the impact of outdoor temperature and prevent payback load effect. Hence, DLC constraints on air-conditioning loads are mitigated.",
"Considerable developments in the real-time telemetry of demand-side systems allow independent system operators (ISOs) to use reserves provided by demand response (DR) in ancillary service markets. Currently, many ISOs have designed programs to utilize the reserve provided by DR in electricity markets. This paper presents a stochastic model to schedule reserves provided by DR in the wholesale electricity markets. Demand-side reserve is supplied by demand response providers (DRPs), which have the responsibility of aggregating and managing customer responses. A mixed-integer representation of reserve provided by DRPs and its associated cost function are used in the proposed stochastic model. The proposed stochastic model is formulated as a two-stage stochastic mixed-integer programming (SMIP) problem. The first-stage involves network-constrained unit commitment in the base case and the second-stage investigates security assurance in system scenarios. The proposed model would schedule reserves provided by DRPs and determine commitment states of generating units and their scheduled energy and spinning reserves in the scheduling horizon. The proposed approach is applied to two test systems to illustrate the benefits of implementing demand-side reserve in electricity markets.",
"Real-time electricity pricing models can potentially lead to economic and environmental advantages compared to the current common flat rates. In particular, they can provide end users with the opportunity to reduce their electricity expenditures by responding to pricing that varies with different times of the day. However, recent studies have revealed that the lack of knowledge among users about how to respond to time-varying prices as well as the lack of effective building automation systems are two major barriers for fully utilizing the potential benefits of real-time pricing tariffs. We tackle these problems by proposing an optimal and automatic residential energy consumption scheduling framework which attempts to achieve a desired trade-off between minimizing the electricity payment and minimizing the waiting time for the operation of each appliance in household in presence of a real-time pricing tariff combined with inclining block rates. Our design requires minimum effort from the users and is based on simple linear programming computations. Moreover, we argue that any residential load control strategy in real-time electricity pricing environments requires price prediction capabilities. This is particularly true if the utility companies provide price information only one or two hours ahead of time. By applying a simple and efficient weighted average price prediction filter to the actual hourly-based price values used by the Illinois Power Company from January 2007 to December 2009, we obtain the optimal choices of the coefficients for each day of the week to be used by the price predictor filter. Simulation results show that the combination of the proposed energy consumption scheduling design and the price predictor filter leads to significant reduction not only in users' payments but also in the resulting peak-to-average ratio in load demand for various load scenarios. 
Therefore, the deployment of the proposed optimal energy consumption scheduling schemes is beneficial for both end users and utility companies.",
"Electrolytic process, employed for manufacturing basic chemicals like caustic soda and chlorine, is highly energy intensive. Due to escalating costs of fossil fuels and capacity addition, the electricity cost has been increasing for the last few decades. Electricity intensive industries find it very difficult to cope up with higher electricity charges particularly with time-of-use (TOU) tariffs implemented by the utilities with the objective of flattening the load curve. Load management programs focusing on reduced electricity use at the time of utility's peak demand, by strategic load shifting, is a viable option for industries to reduce their electricity cost. This paper presents an optimization model and formulation for load management for electrolytic process industries. The formulation utilizes mixed integer nonlinear programming (MINLP) technique for minimizing the electricity cost and reducing the peak demand, by rescheduling the loads, satisfying the industry constraints. The case study of a typical caustic-chlorine plant shows that a reduction of about 19 in the peak demand with a corresponding saving of about 3.9 in the electricity cost is possible with the optimal load scheduling under TOU tariff.",
"Most of the existing demand-side management programs focus primarily on the interactions between a utility company and its customers users. In this paper, we present an autonomous and distributed demand-side energy management system among users that takes advantage of a two-way digital communication infrastructure which is envisioned in the future smart grid. We use game theory and formulate an energy consumption scheduling game, where the players are the users and their strategies are the daily schedules of their household appliances and loads. It is assumed that the utility company can adopt adequate pricing tariffs that differentiate the energy usage in time and level. We show that for a common scenario, with a single utility company serving multiple customers, the global optimal performance in terms of minimizing the energy costs is achieved at the Nash equilibrium of the formulated energy consumption scheduling game. The proposed distributed demand-side energy management strategy requires each user to simply apply its best response strategy to the current total load and tariffs in the power distribution system. The users can maintain privacy and do not need to reveal the details on their energy consumption schedules to other users. We also show that users will have the incentives to participate in the energy consumption scheduling game and subscribing to such services. Simulation results confirm that the proposed approach can reduce the peak-to-average ratio of the total energy demand, the total energy costs, as well as each user's individual daily electricity charges."
]
} |
1512.03440 | 2294775638 | This paper, by comparing three potential energy trading systems, studies the feasibility of integrating a community energy storage (CES) device with consumer-owned photovoltaic (PV) systems for demand-side management of a residential neighborhood area network. We consider a fully-competitive CES operator in a non-cooperative Stackelberg game, a benevolent CES operator that has socially favorable regulations with competitive users, and a centralized cooperative CES operator that minimizes the total community energy cost. The former two game-theoretic systems consider that the CES operator first maximizes their revenue by setting a price signal and trading energy with the grid. Then the users with PV panels play a non-cooperative repeated game following the actions of the CES operator to trade energy with the CES device and the grid to minimize energy costs. The centralized CES operator cooperates with the users to minimize the total community energy cost without appropriate incentives. The non-cooperative Stackelberg game with the fully-competitive CES operator has a unique Stackelberg equilibrium at which the CES operator maximizes revenue and users obtain unique Pareto-optimal Nash equilibrium CES energy trading strategies. Extensive simulations show that the fully-competitive CES model gives the best trade-off of operating environment between the CES operator and the users. | Prior works have examined centralized control of distributed power resources, such as renewable power sources and storage devices, for effective energy management @cite_25 @cite_33 . Decentralized control of energy resources has been proposed to increase system reliability and robustness @cite_24 . In particular, game theory has been applied to analyze interactions between distributed energy resources in power system @cite_0 @cite_12 . 
The authors in @cite_23 achieve cost-effective energy management through a non-cooperative game that schedules consumer-owned energy storage devices and appliances. In @cite_7 , the authors study a cooperative game-theoretic approach to achieve optimal load balancing using a CES device where users share the stored energy of the CES device to contribute towards community's overall demand-side management. In contrast, we investigate a non-cooperative hierarchical energy trading system between a CES device and users based on Stackelberg game theory. Moreover, compared to @cite_7 , the charging and discharging mechanism of the CES device in this paper employs user-owned PV energy generation to achieve demand-side management. | {
"cite_N": [
"@cite_33",
"@cite_7",
"@cite_24",
"@cite_0",
"@cite_23",
"@cite_25",
"@cite_12"
],
"mid": [
"1981585916",
"2208121241",
"2159014131",
"2058626062",
"2015514082",
"2145397365",
""
],
"abstract": [
"This paper presents a new method based on the cost-benefit analysis for optimal sizing of an energy storage system in a microgrid (MG). The unit commitment problem with spinning reserve for MG is considered in this method. Time series and feed-forward neural network techniques are used for forecasting the wind speed and solar radiations respectively and the forecasting errors are also considered in this paper. Two mathematical models have been built for both the islanded and grid-connected modes of MGs. The main problem is formulated as a mixed linear integer problem (MLIP), which is solved in AMPL (A Modeling Language for Mathematical Programming). The effectiveness of the approach is validated by case studies where the optimal system energy storage ratings for the islanded and grid-connected MGs are determined. Quantitative results show that the optimal size of BESS exists and differs for both the grid-connected and islanded MGs in this paper.",
"In this paper, we propose a model for households to share energy from community energy storage (CES) such that both households and utility company benefit from CES. In addition to providing a range of ancillary grid services, CES can also be used for demand side management, to shave peaks and fill valleys in system load. We introduce a method stemming from consumer theory and cooperative game theory that uses CES to balance the load of an entire locality and manage household energy allocations respectively. Load balancing is derived as a geometric programming problem. Each households contribution to overall non-uniformity of the load profile is modeled using a characteristic function and Shapley values are used to allocate the amount and price of surplus energy stored in CES. The proposed method is able to perfectly balance the load while also making sure that each household is guaranteed a reduction in energy costs.",
"A power system is a collection of individual components that compete for system resources. This paper presents a game theoretic approach to the control decision process of individual sources and loads in small-scale and dc power systems. Framing the power system as a game between players facilitates the definition of individual objectives, which adds modularity and adaptability. The proposed methodology enhances the reliability and robustness of the system by avoiding the need for a central or supervisory control. It is also a way to integrate and combine supply and demand side management into a single approach. Examples are presented that use a simple nine bus dc power system to demonstrate the proposed method for various scenarios and player formulations.",
"The future smart grid is envisioned as a large scale cyberphysical system encompassing advanced power, communications, control, and computing technologies. To accommodate these technologies, it will have to build on solid mathematical tools that can ensure an efficient and robust operation of such heterogeneous and large-scale cyberphysical systems. In this context, this article is an overview on the potential of applying game theory for addressing relevant and timely open problems in three emerging areas that pertain to the smart grid: microgrid systems, demand-side management, and communications. In each area, the state-of-the-art contributions are gathered and a systematic treatment, using game theory, of some of the most relevant problems for future power systems is provided. Future opportunities for adopting game-theoretic methodologies in the transition from legacy systems toward smart and intelligent grids are also discussed. In a nutshell, this article provides a comprehensive account of the application of game theory in smart grid systems tailored to the interdisciplinary characteristics of these systems that integrate components from power systems, networking, communications, and control.",
"For the efficient operation of the smart grid, it is important that there is an instant by instant matching between the electricity supply and the power consumption. Electrical power storage provides a viable solution to managing power supply and electrical loads as well as unexpected imbalances. Electricity suppliers could deploy electricity storage facilities at various levels of the smart grid system: generation, transmission, substations and residential level. Storage would significantly address the power quality and reliability problems through peak shaving and frequency control. It also reduces the need for huge infrastructural expenditures by making them more efficient. At the residential level, smart storage together with dynamic pricing in the deregulated electricity markets presents the electricity suppliers with a strategy to achieve grid stability. In this paper, we consider a smart grid environment with a high penetration of households' storage batteries. By using an appropriate electricity price structure, the electricity supplier influences households' electricity consumption. On the other hand, the households aim to minimize their electricity bills by capitalizing on price fluctuation to schedule their electrical appliances and coordinate the charging and discharging of their batteries. The electricity supplier has a dynamic power limit for each hour that must not be exceeded by the hourly aggregate load of the households. Further, we assume that in supplying electrical power, the households' electrical devices are given priority over their storage devices. The policy is such that batteries will be charged by the residual power after the appliances loads have been satisfied. The households have to compete for the residual electricity so as to maximize the state of charge of their batteries. 
We have therefore modeled this system as a non-cooperative Nash equilibrium game where the households are considered as selfish but rational players whose objectives are to optimize their individual utilities.",
"The development of energy management tools for next-generation PhotoVoltaic (PV) installations, including storage units, provides flexibility to distribution system operators. In this paper, the aggregation and implementation of these determinist energy management methods for business customers in a microgrid power system are presented. This paper proposes a determinist energy management system for a microgrid, including advanced PV generators with embedded storage units and a gas microturbine. The system is organized according to different functions and is implemented in two parts: a central energy management of the microgrid and a local power management at the customer side. The power planning is designed according to the prediction for PV power production and the load forecasting. The central and local management systems exchange data and order through a communication network. According to received grid power references, additional functions are also designed to manage locally the power flows between the various sources. Application to the case of a hybrid supercapacitor battery-based PV active generator is presented.",
""
]
} |
1512.03440 | 2294775638 | This paper, by comparing three potential energy trading systems, studies the feasibility of integrating a community energy storage (CES) device with consumer-owned photovoltaic (PV) systems for demand-side management of a residential neighborhood area network. We consider a fully-competitive CES operator in a non-cooperative Stackelberg game, a benevolent CES operator that has socially favorable regulations with competitive users, and a centralized cooperative CES operator that minimizes the total community energy cost. The former two game-theoretic systems consider that the CES operator first maximizes their revenue by setting a price signal and trading energy with the grid. Then the users with PV panels play a non-cooperative repeated game following the actions of the CES operator to trade energy with the CES device and the grid to minimize energy costs. The centralized CES operator cooperates with the users to minimize the total community energy cost without appropriate incentives. The non-cooperative Stackelberg game with the fully-competitive CES operator has a unique Stackelberg equilibrium at which the CES operator maximizes revenue and users obtain unique Pareto-optimal Nash equilibrium CES energy trading strategies. Extensive simulations show that the fully-competitive CES model gives the best trade-off of operating environment between the CES operator and the users. | The Stackelberg game between a shared-facility controller and users in @cite_26 yields effective demand-side management by managing consumer demand with an energy storage device at the controller-side that is enabled to charge and discharge with the grid. In contrast, in this paper, the charging and discharging mechanism of the CES device is intended to accommodate energy trading strategies from PV energy generation of users. 
In doing so, we focus on exploiting onsite energy generation from user-owned PV systems for demand-side management as an alternative to energy consumption scheduling of users. | {
"cite_N": [
"@cite_26"
],
"mid": [
"2051301990"
],
"abstract": [
"In this paper, the benefits of distributed energy resources are considered in an energy management scheme for a smart community consisting of a large number of residential units (RUs) and a shared facility controller (SFC). A noncooperative Stackelberg game between the RUs and the SFC is proposed in order to explore how both entities can benefit, in terms of achieved utility and minimizing total cost respectively, from their energy trading with each other and the grid. From the properties of the game, it is shown that the maximum benefit to the SFC, in terms of reduction in total cost, is obtained at the unique and strategy-proof Stackelberg equilibrium (SE). It is further shown that the SE is guaranteed to be reached by the SFC and RUs by executing the proposed algorithm in a distributed fashion, where participating RUs comply with their best strategies in response to the action chosen by the SFC. In addition, a charging–discharging scheme is introduced for the SFC's storage device that can further lower the SFC's total cost if the proposed game is implemented. Numerical experiments confirm the effectiveness of the proposed scheme."
]
} |
1512.02766 | 2280893574 | A tracking system that will be used for augmented reality applications has two main requirements: accuracy and frame rate. The first requirement is related to the performance of the pose estimation algorithm and how accurately the tracking system can find the position and orientation of the user in the environment. Accuracy problems of current tracking devices, considering that they are low-cost devices, cause static errors during this motion estimation process. The second requirement is related to dynamic errors (the end-to-end system delay, occurring because of the delay in estimating the motion of the user and displaying images based on this estimate). This paper investigates combining the vision-based estimates with measurements from other sensors, GPS and IMU, in order to improve the tracking accuracy in outdoor environments. The idea of using Fuzzy Adaptive Multiple Models was investigated using a novel fuzzy rule-based approach to decide on the model that results in improved accuracy and faster convergence for the fusion filter. Results show that the developed tracking system is more accurate than a conventional GPS–IMU fusion approach due to additional estimates from a camera and fuzzy motion models. The paper also presents an application in cultural heritage context running at modest frame rates due to the design of the fusion algorithm. | The literature presents examples of combining GPS with vision rather than inertial sensing. For instance, Schleicher @cite_4 used stereo cameras with a low-cost GPS receiver in order to perform vehicle localization with a submapping approach. Armesto @cite_11 used a fusion of vision and inertial sensors in order to perform pose estimation for an industrial robot by using the complementary characteristics of these sensors @cite_21 . GPS position was combined with visual landmarks (tracked in stereo) in order to obtain a global consistency in @cite_29 . 
A similar approach was followed by Agrawal @cite_32 on an expensive system using four computers. | {
"cite_N": [
"@cite_4",
"@cite_29",
"@cite_21",
"@cite_32",
"@cite_11"
],
"mid": [
"2143062563",
"1585381382",
"2088684053",
"2113898978",
"2057343530"
],
"abstract": [
"This paper presents a new real-time hierarchical (topological metric) simultaneous localization and mapping (SLAM) system. It can be applied to the robust localization of a vehicle in large-scale outdoor urban environments, improving the current vehicle navigation systems, most of which are only based on Global Positioning System (GPS). Then, it can be used on autonomous vehicle guidance with recurrent trajectories (bus journeys, theme park internal journeys, etc.). It is exclusively based on the information provided by both a low-cost, wide-angle stereo camera and a low-cost GPS. Our approach divides the whole map into local submaps identified by the so-called fingerprints (vehicle poses). In this submap level (low-level SLAM), a metric approach is carried out. There, a 3-D sequential mapping of visual natural landmarks and the vehicle location orientation are obtained using a top-down Bayesian method to model the dynamic behavior. GPS measurements are integrated within this low-level improving vehicle positioning. A higher topological level (high-level SLAM) based on fingerprints and the multilevel relaxation (MLR) algorithm has been added to reduce the global error within the map, keeping real-time constraints. This level provides nearly consistent estimation, keeping a small degradation with GPS unavailability. Some experimental results for large-scale outdoor urban environments are presented, showing an almost constant processing time.",
"We consider the problem of autonomous navigation in an unstructured outdoor environment. The goal is for a small outdoor robot to come into a new area, learn about and map its environment, and move to a given goal at modest speeds (1 m s). This problem is especially difficult in outdoor, off-road environments, where tall grass, shadows, deadfall, and other obstacles predominate. Not surprisingly, the biggest challenge is acquiring and using a reliable map of the new area. Although work in outdoor navigation has preferentially used laser rangefinders [14,2,6], we use stereo vision as the main sensor. Vision sensors allow us to use more distant objects as landmarks for navigation, and to learn and use color and texture models of the environment, in looking further ahead than is possible with range sensors alone.",
"Augmented reality has been an active area ofresearch for the last two decades or so. This paper presents acomprehensive review of the recent literature on trackingmethods used in Augmented Reality applications, both forindoor and outdoor environments. After critical discussion ofthe methods used for tracking, the paper identifies limitations ofthe state-of-the-art techniques and suggests potential futuredirections to overcome the bottlenecks.",
"We consider the problem of autonomous navigation in unstructured outdoor terrains using vision sensors. The goal is for a robot to come into a new environment, map it and move to a given goal at modest speeds (1 m sec). The biggest challenges are in building good maps and keeping the robot well localized as it advances towards the goal. In this paper, we concentrate on showing how it is possible to build a consistent, globally correct map in real time, using efficient precise stereo algorithms for map making and visual odometry for localization. While we have made advances in both localization and mapping using stereo vision, it is the integration of the techniques that is the biggest contribution of the research. The validity of our approach is tested in blind experiments, where we submit our code to an independent testing group that runs and validates it on an outdoor robot",
"This paper presents a tracking system for ego-motion estimation which fuses vision and inertial measurements using EKF and UKF (Extended and Unscented Kalman Filters), where a comparison of their performance has been done. It also considers the multi-rate nature of the sensors: inertial sensing is sampled at a fast sampling frequency while the sampling frequency of vision is lower. the proposed approach uses a constant linear acceleration model and constant angular velocity model based on quaternions, which yields a non-linear model for states and a linear model in measurement equations. Results show that a significant improvement is obtained on the estimation when fusing both measurements with respect to just vision or just inertial measurements. It is also shown that the proposed system can estimate fast-motions even when vision system fails. Moreover, a study of the influence of the noise covariance is also performed, which aims to select their appropriate values at the tuning process. The setup is an end-effector mounted camera, which allow us to pre-define basic rotational and translational motions for validating results."
]
} |
1512.02766 | 2280893574 | A tracking system that will be used for augmented reality applications has two main requirements: accuracy and frame rate. The first requirement is related to the performance of the pose estimation algorithm and how accurately the tracking system can find the position and orientation of the user in the environment. Accuracy problems of current tracking devices, considering that they are low-cost devices, cause static errors during this motion estimation process. The second requirement is related to dynamic errors (the end-to-end system delay, occurring because of the delay in estimating the motion of the user and displaying images based on this estimate). This paper investigates combining the vision-based estimates with measurements from other sensors, GPS and IMU, in order to improve the tracking accuracy in outdoor environments. The idea of using Fuzzy Adaptive Multiple Models was investigated using a novel fuzzy rule-based approach to decide on the model that results in improved accuracy and faster convergence for the fusion filter. Results show that the developed tracking system is more accurate than a conventional GPS–IMU fusion approach due to additional estimates from a camera and fuzzy motion models. The paper also presents an application in cultural heritage context running at modest frame rates due to the design of the fusion algorithm. | Visual-inertial tracking has also become a popular technique, due to the complementary characteristics of the sensors, and is used in many different applications @cite_25 . Vision allows estimation of the camera position directly from the images observed @cite_34 . However, it is not robust against 3D transformations, and the computation is expensive. For inertial trackers, noise and calibration errors can result in an accumulation of position and orientation errors. It is known that inertial sensors have long term stability problems @cite_25 . Vision is good for small acceleration and velocity. 
When these sensors are used together, the inertial sensors enable faster computation and vision can correct the inertial drift errors. Since visual processing is more expensive, applications generally combine low-frequency vision data with high-frequency inertial data @cite_20 ; inertial trackers today can generate estimates at rates up to @math Hz using custom hardware @cite_24 . | {
"cite_N": [
"@cite_24",
"@cite_34",
"@cite_25",
"@cite_20"
],
"mid": [
"1845377805",
"2099639837",
"2074013343",
""
],
"abstract": [
"Augmented Reality applications require the tracking of moving objects in real-time. Tracking is defined as the measurement of object position and orientation in a scene coordinate system. We present a new combination of silicon micromachined accelerometers and gyroscopes which have been assembled into a six degree of freedom (6 DoF) inertial tracking system. This inertial tracker is used in combination with a vision-based tracking system which will enable us to build affordable, light-weight, fully mobile tracking systems for Augmented Reality applications in the future.",
"Our work stems from a program focused on developing tracking technologies for wide-area augmented realities in unprepared outdoor environments. Other participants in the Defense Advanced Research Projects Agency (Darpa) funded Geospatial Registration of Information for Dismounted Soldiers (Grids) program included University of North Carolina at Chapel Hill and Raytheon. We describe a hybrid orientation tracking system combining inertial sensors and computer vision. We exploit the complementary nature of these two sensing technologies to compensate for their respective weaknesses. Our multiple-sensor fusion is novel in augmented reality tracking systems, and the results demonstrate its utility.",
"This paper presents a method to fuse measurements from a rigid sensor rig with a stereo vision system and a set of 6 DOF inertial sensors for egomotion estimation and external structure estimation. No assumptions about the sampling rate of the two sensors are made. The basic idea is a common state vector and a common dynamic description which is stored together with the time instant of the estimation. Every time one of the sensor sends new data, the corresponding filter equation is updated and a new estimation is generated. In this paper the filter equations for an extended Kalman filter are derived together with considerations of the tuning. Simulations with real sensor data show the successful implementation of this concept. © 2004 Wiley Periodicals, Inc.",
""
]
} |
1512.02766 | 2280893574 | A tracking system that will be used for augmented reality applications has two main requirements: accuracy and frame rate. The first requirement is related to the performance of the pose estimation algorithm and how accurately the tracking system can find the position and orientation of the user in the environment. Accuracy problems of current tracking devices, considering that they are low-cost devices, cause static errors during this motion estimation process. The second requirement is related to dynamic errors (the end-to-end system delay, occurring because of the delay in estimating the motion of the user and displaying images based on this estimate). This paper investigates combining the vision-based estimates with measurements from other sensors, GPS and IMU, in order to improve the tracking accuracy in outdoor environments. The idea of using Fuzzy Adaptive Multiple Models was investigated using a novel fuzzy rule-based approach to decide on the model that results in improved accuracy and faster convergence for the fusion filter. Results show that the developed tracking system is more accurate than a conventional GPS–IMU fusion approach due to additional estimates from a camera and fuzzy motion models. The paper also presents an application in cultural heritage context running at modest frame rates due to the design of the fusion algorithm. | Recently, Oskiper @cite_12 developed a tightly-coupled EKF visual--inertial tracking system for AR for outdoor environments using a relatively expensive sensor (XSens, MTi-G). The system used feature-level tracking in each frame and measurements from the GPS in order to reduce drift. In addition to this, a digital elevation map of the environment was used as well as a pre-built landmark database for tracking in indoor environments where GPS reception is not available (although it was claimed that no assumption about the environment was made). The error was found to be @math metres. | {
"cite_N": [
"@cite_12"
],
"mid": [
"1996213997"
],
"abstract": [
"Camera tracking system for augmented reality applications that can operate both indoors and outdoors is described. The system uses a monocular camera, a MEMS-type inertial measurement unit (IMU) with 3-axis gyroscopes and accelerometers, and GPS unit to accurately and robustly track the camera motion in 6 degrees of freedom (with correct scale) in arbitrary indoor or outdoor scenes. IMU and camera fusion is performed in a tightly coupled manner by an error-state extended Kalman filter (EKF) such that each visually tracked feature contributes as an individual measurement as opposed to the more traditional approaches where camera pose estimates are first extracted by means of feature tracking and then used as measurement updates in a filter framework. Robustness in feature tracking and hence in visual measurement generation is achieved by IMU aided feature matching and a two-point relative pose estimation method, to remove outliers from the raw feature point matches. Landmark matching to contain long-term drift in orientation via on the fly user generated geo-tiepoint mechanism is described."
]
} |
1512.02766 | 2280893574 | A tracking system that will be used for augmented reality applications has two main requirements: accuracy and frame rate. The first requirement is related to the performance of the pose estimation algorithm and how accurately the tracking system can find the position and orientation of the user in the environment. Accuracy problems of current tracking devices, considering that they are low-cost devices, cause static errors during this motion estimation process. The second requirement is related to dynamic errors (the end-to-end system delay, occurring because of the delay in estimating the motion of the user and displaying images based on this estimate). This paper investigates combining the vision-based estimates with measurements from other sensors, GPS and IMU, in order to improve the tracking accuracy in outdoor environments. The idea of using Fuzzy Adaptive Multiple Models was investigated using a novel fuzzy rule-based approach to decide on the model that results in improved accuracy and faster convergence for the fusion filter. Results show that the developed tracking system is more accurate than a conventional GPS–IMU fusion approach due to additional estimates from a camera and fuzzy motion models. The paper also presents an application in cultural heritage context running at modest frame rates due to the design of the fusion algorithm. | Attempts to improve the accuracy of the filtering have also been made using adaptive approaches. In some studies, values for the state and measurement covariance matrices were updated based on the innovation @cite_23 and recently fuzzy logic was used for this task @cite_9 @cite_5 . Another approach for fusing accelerometer and gyroscope for attitude estimation is also based on fuzzy rules @cite_10 in order to decide which of the accelerometer or the gyroscope will be given weight for estimation based on observations from these sensors such as whether a mobile robot is rotating or not. 
A later approach @cite_30 used the error and dynamic motion parameters to decide which sensor should have the dominant effect on the estimation. | {
"cite_N": [
"@cite_30",
"@cite_9",
"@cite_23",
"@cite_5",
"@cite_10"
],
"mid": [
"2013357924",
"2008336375",
"1996567593",
"1984096072",
"1599326394"
],
"abstract": [
"Abstract This paper describes the development of a fuzzy logic based closed-loop strapdown attitude reference system (SARS) algorithm, integrated filtering estimator for determining attitude reference, for unmanned aerial vehicles (UAVs) using low-cost solid-state inertial sensors. The SARS for this research consists of three single-axis rate gyros in conjunction with two single-axis accelerometers. For the solution scheme fuzzy modules (rules and reasoning) are utilized for online scheduling of the parameters for the filtering estimator. Implementation using experimental flight test data of SURV-1 Sejong UAV has been performed in order to verify the estimation. The proposed fuzzy logic aided estimation results demonstrate that more accurate performance can be achieved in comparison with conventional fixed parameter filtering estimators. The estimation results were compared with the on-board vertical gyro used as the reference standard or ‘truth model’ for this analysis.",
"In this paper, the application of the fuzzy interacting multiple model unscented Kalman filter (FUZZY-IMMUKF) approach to integrated navigation processing for the maneuvering vehicle is presented. The unscented Kalman filter (UKF) employs a set of sigma points through deterministic sampling, such that a linearization process is not necessary, and therefore the errors caused by linearization as in the traditional extended Kalman filter (EKF) can be avoided. The nonlinear filters naturally suffer, to some extent, the same problem as the EKF for which the uncertainty of the process noise and measurement noise will degrade the performance. As a structural adaptation (model switching) mechanism, the interacting multiple model (IMM), which describes a set of switching models, can be utilized for determining the adequate value of process noise covariance. The fuzzy logic adaptive system (FLAS) is employed to determine the lower and upper bounds of the system noise through the fuzzy inference system (FIS). The resulting sensor fusion strategy can efficiently deal with the nonlinear problem for the vehicle navigation. The proposed FUZZY-IMMUKF algorithm shows remarkable improvement in the navigation estimation accuracy as compared to the relatively conventional approaches such as the UKF and IMMUKF.",
"One of the most important tasks in integration of GPS INS is to choose the realistic dynamic model covariance matrix Q and measurement noise covariance matrix R for use in the Kalman filter. The performance of the methods to estimate both of these matrices depends entirely on the minimization of dynamic and measurement update errors that lead the filter to converge. This paper evaluates the performances of adaptive Kalman filter methods with different adaptations. Innovation and residual based adaptive Kalman filters were employed for adapting R and Q. These methods were implemented in a loose GPS INS integration system and tested using real data sets. Their performances have been evaluated and compared. Their limitations in real-life engineering applications are discussed.",
"The necessity of accurate localization in mobile robotics is obvious—if a robot does not know where it is, it cannot navigate accurately and reach goal locations. Robots learn about their environment via sensors. Small robots require small, efficient, and, if they are to be deployed in large numbers, inexpensive sensors. The sensors used by robots to perceive the world are inherently inaccurate, providing noisy, erroneous data, or even no data at all. Combined with estimation error due to imperfect modeling of the robot, there are many obstacles to successfully localizing in the world. Sensor fusion is used to overcome these difficulties—combining the available sensor data to derive a more accurate pose estimation for the robot. A feeling of “ready-fire-aim'' pervades the discipline—filters are chosen on little to no information, and new filters are simply tested against a few peers and claimed as superior to all others. This is folly—the most appropriate filter is seldom the newest. This article provides an overview and in-depth tutorial of all modern robot localization methods and thoroughly discusses their strengths and weaknesses to assist a robot researcher in the task of choosing the most appropriate filter for their task. © 2012 Wiley Periodicals, Inc. © 2012 Wiley Periodicals, Inc.",
"Most mobile robots use a combination of absolute and relative sensing techniques for position estimation. Relative positioning techniques are generally known as dead-reckoning. Many systems use odometry as their only dead-reckoning means. However, fiber optic gyroscopes have become more affordable and are being used on many platforms to supplement odometry, especially in indoor applications. Still, if the terrain is not level (i.e., rugged or rolling terrain), the tilt of the vehicle introduces errors into the conversion of gyro readings to vehicle heading. In order to overcome this problem vehicle tilt must be measured and factored into the heading computation. The paper introduces a new fuzzy logic expert rule-based navigation (FLEXnav) method for fusing data from multiple low- to medium-cost gyroscopes and accelerometers in order to estimate accurately the heading and tilt of a mobile robot. Experimental results of mobile robot runs over rugged terrain are presented, showing the effectiveness of our FLEXnav method."
]
} |
1512.02766 | 2280893574 | A tracking system that will be used for augmented reality applications has two main requirements: accuracy and frame rate. The first requirement is related to the performance of the pose estimation algorithm and how accurately the tracking system can find the position and orientation of the user in the environment. Accuracy problems of current tracking devices, considering that they are low-cost devices, cause static errors during this motion estimation process. The second requirement is related to dynamic errors (the end-to-end system delay, occurring because of the delay in estimating the motion of the user and displaying images based on this estimate). This paper investigates combining the vision-based estimates with measurements from other sensors, GPS and IMU, in order to improve the tracking accuracy in outdoor environments. The idea of using Fuzzy Adaptive Multiple Models was investigated using a novel fuzzy rule-based approach to decide on the model that results in improved accuracy and faster convergence for the fusion filter. Results show that the developed tracking system is more accurate than a conventional GPS–IMU fusion approach due to additional estimates from a camera and fuzzy motion models. The paper also presents an application in cultural heritage context running at modest frame rates due to the design of the fusion algorithm. | Some other studies suggest @cite_20 , or use @cite_2 @cite_27 @cite_28 @cite_16 , the idea of employing different motion models to recognize the type of motion in two-view motion estimation and visual SLAM. Several of these studies @cite_2 @cite_27 @cite_28 fitted geometric two-view relations, such as general, affine, or homography models, to a set of correspondences and used the outliers to obtain a penalty score in a Bayesian framework. | {
"cite_N": [
"@cite_28",
"@cite_27",
"@cite_2",
"@cite_16",
"@cite_20"
],
"mid": [
"2114129181",
"2152329191",
"1578412938",
"2165220145",
""
],
"abstract": [
"Multibody structure-and-motion (MSaM) is the problem in establishing the multiple-view geometry of several views of a 3D scene taken at different times, where the scene consists of multiple rigid objects moving relative to each other. We examine the case of two views. The setting is the following: Given are a set of corresponding image points in two images, which originate from an unknown number of moving scene objects, each giving rise to a motion model. Furthermore, the measurement noise is unknown, and there are a number of gross errors, which are outliers to all models. The task is to find an optimal set of motion models for the measurements. It is solved through Monte-Carlo sampling, careful statistical analysis of the sampled set of motion models, and simultaneous selection of multiple motion models to best explain the measurements. The framework is not restricted to any particular model selection mechanism because it is developed from a Bayesian viewpoint: different model selection criteria are seen as different priors for the set of moving objects, which allow one to bias the selection procedure for different purposes.",
"We first investigate the meaning of \"statistical methods\" for geometric inference based on image feature points. Tracing back the origin of feature uncertainty to image processing operations, we discuss the implications of asymptotic analysis in reference to \"geometric fitting\" and \"geometric model selection\" and point out that a correspondence exists between the standard statistical analysis and the geometric inference problem. Then, we derive the \"geometric AIC\" and the \"geometric MDL\" as counterparts of Akaike's AIC and Rissanen's MDL. We show by experiments that the two criteria have contrasting characteristics in detecting degeneracy.",
"Computer vision often involves estimating models from visual input. Sometimes it is possible to fit several different models or hypotheses to a set of data, and a decision must be made as to which is most appropriate. This paper explores ways of automating the model selection process with specific emphasis on the least squares problem of fitting manifolds (in particular algebraic varieties e.g. lines, algebraic curves, planes etc.) to data points, illustrated with respect to epipolar geometry. The approach is Bayesian and the contribution three fold, first a new Bayesian description of the problem is laid out that supersedes the author's previous maximum likelihood formulations, this formulation will reveal some hidden elements of the problem. Second an algorithm, ‘MAPSAC’, is provided to obtain the robust MAP estimate of an arbitrary manifold. Third, a Bayesian model selection paradigm is proposed, the Bayesian formulation of the manifold fitting problem uncovers an elegant solution to this problem, for which a new method ‘GRIC’ for approximating the posterior probability of each putative model is derived. This approximations bears some similarity to the penalized likelihoods used by AIC, BIC and MDL however it is far more accurate in situations involving large numbers of latent variables whose number increases with the data. This will be empirically and theoretically demonstrated.",
"Recent work has demonstrated the benefits of adopting a fully probabilistic SLAM approach in sequential motion and structure estimation from an image sequence. Unlike standard Structure from Motion (SFM) methods, this 'monocular SLAM' approach is able to achieve drift-free estimation with high frame-rate real-time operation, particularly benefitting from highly efficient active feature search, map management and mismatch rejection. A consistent thread in this research on real-time monocular SLAM has been to reduce the assumptions required. In this paper we move towards the logical conclusion of this direction by implementing a fully Bayesian Interacting Multiple Models (IMM) framework which can switch automatically between parameter sets in a dimensionless formulation of monocular SLAM. Remarkably, our approach of full sequential probability propagation means that there is no need for penalty terms to achieve the Occam property of favouring simpler models - this arises automatically. We successfully tackle the known stiffness in on-the-fly monocular SLAM start up without known patterns in the scene. The search regions for matches are also reduced in size with respect to single model EKF increasing the rejection of spurious matches. We demonstrate our method with results on a complex real image sequence with varied motion.",
""
]
} |
1512.02766 | 2280893574 | A tracking system that will be used for augmented reality applications has two main requirements: accuracy and frame rate. The first requirement is related to the performance of the pose estimation algorithm and how accurately the tracking system can find the position and orientation of the user in the environment. Accuracy problems of current tracking devices, considering that they are low-cost devices, cause static errors during this motion estimation process. The second requirement is related to dynamic errors (the end-to-end system delay, occurring because of the delay in estimating the motion of the user and displaying images based on this estimate). This paper investigates combining the vision-based estimates with measurements from other sensors, GPS and IMU, in order to improve the tracking accuracy in outdoor environments. The idea of using Fuzzy Adaptive Multiple Models was investigated using a novel fuzzy rule-based approach to decide on the model that results in improved accuracy and faster convergence for the fusion filter. Results show that the developed tracking system is more accurate than a conventional GPS–IMU fusion approach due to additional estimates from a camera and fuzzy motion models. The paper also presents an application in cultural heritage context running at modest frame rates due to the design of the fusion algorithm. | Civera @cite_16 used a bank of EKFs in order to apply different motion models to several filters concurrently and select the best model in a probabilistic framework. This approach incorporated 3 motion models, namely stationary, rotating and general, separating models for motions including translations and rotations. | {
"cite_N": [
"@cite_16"
],
"mid": [
"2165220145"
],
"abstract": [
"Recent work has demonstrated the benefits of adopting a fully probabilistic SLAM approach in sequential motion and structure estimation from an image sequence. Unlike standard Structure from Motion (SFM) methods, this 'monocular SLAM' approach is able to achieve drift-free estimation with high frame-rate real-time operation, particularly benefitting from highly efficient active feature search, map management and mismatch rejection. A consistent thread in this research on real-time monocular SLAM has been to reduce the assumptions required. In this paper we move towards the logical conclusion of this direction by implementing a fully Bayesian Interacting Multiple Models (IMM) framework which can switch automatically between parameter sets in a dimensionless formulation of monocular SLAM. Remarkably, our approach of full sequential probability propagation means that there is no need for penalty terms to achieve the Occam property of favouring simpler models - this arises automatically. We successfully tackle the known stiffness in on-the-fly monocular SLAM start up without known patterns in the scene. The search regions for matches are also reduced in size with respect to single model EKF increasing the rejection of spurious matches. We demonstrate our method with results on a complex real image sequence with varied motion."
]
} |
1512.02922 | 2292623060 | MetaSpace II (MS2) is a social Virtual Reality (VR) system where multiple users can not only see and hear but also interact with each other, grasp and manipulate objects, walk around in space, and get tactile feedback. MS2 allows walking in physical space by tracking each user’s skeleton in realtime and allows users to feel by employing passive haptics, i.e., when users touch or manipulate an object in the virtual world, they simultaneously also touch or manipulate a corresponding object in the physical world. To enable these elements in VR, MS2 creates a correspondence in spatial layout and object placement by building the virtual world on top of a 3D scan of the real world. Through the association between the real and virtual world, users are able to walk freely while wearing a head-mounted device, avoid obstacles like walls and furniture, and interact with people and objects. Most current virtual reality (VR) environments are designed for a single-user experience where interactions with virtual objects are mediated by hand-held input devices or hand gestures. Additionally, users are only shown a representation of their hands in VR floating in front of the camera as seen from a first-person perspective. We believe representing each user as a full-body avatar that is controlled by natural movements of the person in the real world (see Figure 1d) can greatly enhance believability and a user’s sense of immersion in VR. | An early example of the use of a real object associated with a similar but not identical virtual object was in a desktop VR environment, where a doll head model was used to control a brain visualization @cite_31 . Passive haptics have been shown both to enhance immersion in VR and to make virtual tasks easier to accomplish by providing haptic feedback.
For example, Hoffman found that adding touchable representations of real objects to immersive virtual environments enhanced the feeling of presence in those environments @cite_10 . Lindeman found that physical constraints provided by a real object could significantly improve performance in an immersive virtual manipulation task @cite_7 ; the presence of a real tablet and pen, for instance, enabled users to easily enter virtual handwritten commands and annotations @cite_33 . | {
"cite_N": [
"@cite_31",
"@cite_10",
"@cite_33",
"@cite_7"
],
"mid": [
"2166175381",
"2542721959",
"2536874479",
"2102380381"
],
"abstract": [
"",
"The study explored the impact of physically touching a virtual object on how realistic the VE seems to the user. Subjects in a \"no touch\" group picked up a 3D virtual image of a kitchen plate in a VE, using a traditional 3D wand. \"See and touch\" subjects physically picked up a virtual plate possessing solidity and weight, using a mixed-reality force feedback technique. Afterwards, subjects made predictions about the properties of other virtual objects they saw but did not interact with in the VE. \"See and touch\" subjects predicted these objects would be more solid, heavier, and more likely to obey gravity than the \"no touch\" group. Results provide converging evidence for the value of adding physical qualities to virtual objects. The study first empirically demonstrates the effectiveness of mixed reality as a simple, safe, inexpensive technique for adding physical texture and force feedback cues to virtual objects with large freedom of motion. Examples of practical applications are discussed.",
"We present Virtual Notepad, a collection of interface tools that allows the user to take notes, annotate documents and input text using a pen, while still immersed in virtual environments (VEs). Using a spatially-tracked, pressure-sensitive graphics tablet, pen and handwriting recognition software, Virtual Notepad explores handwriting as a new modality for interaction in immersive VEs. This paper reports details of the Virtual Notepad interface and interaction techniques, discusses implementation and design issues, reports the results of initial evaluation and overviews possible applications of virtual handwriting.",
"This paper reports empirical results from a study into the useof 2D widgets in 3D immersive virtual environments. Severalresearchers have proposed the use of 2D interaction techniques in3D environments, however little empirical work has been done totest the usability of such approaches. We present the results oftwo experiments conducted on low-level 2D manipulation tasks withinan immersive virtual environment. We empirically show that theaddition of passive-haptic feedback for use in precise UImanipulation tasks can significantly increase user performance.Furthermore, users prefer interfaces that provide a physicalsurface, and that allow them to work with interface widgets in thesame visual field of view as the objects they are modifying."
]
} |
1512.02922 | 2292623060 | MetaSpace II (MS2) is a social Virtual Reality (VR) system where multiple users can not only see and hear but also interact with each other, grasp and manipulate objects, walk around in space, and get tactile feedback. MS2 allows walking in physical space by tracking each user’s skeleton in realtime and allows users to feel by employing passive haptics i.e., when users touch or manipulate an object in the virtual world, they simultaneously also touch or manipulate a corresponding object in the physical world. To enable these elements in VR, MS2 creates a correspondence in spatial layout and object placement by building the virtual world on top of a 3D scan of the real world. Through the association between the real and virtual world, users are able to walk freely while wearing a head-mounted device, avoid obstacles like walls and furniture, and interact with people and objects. Most current virtual reality (VR) environments are designed for a single user experience where interactions with virtual objects are mediated by hand-held input devices or hand gestures. Additionally, users are only shown a representation of their hands in VR floating in front of the camera as seen from a first person perspective. We believe, representing each user as a fullbody avatar that is controlled by natural movements of the person in the real world (see Figure 1d), can greatly enhance believability and a user’s sense immersion in VR. | In another study, participants had to place a book on a chair at the opposite end of a room by walking across a ledge over a deep pit. The researchers added a real wooden plank for users to walk on, that corresponded with a virtual ledge, and compared the differences in response when the ledge was only virtual. Since participants could feel a height difference between the wooden plank and the floor below, it enhanced the illusion of standing on the edge of a pit. 
Results showed significant differences in behavioral presence, heart rate, and skin conductivity changes in the wooden plank group of users @cite_53 . Passive haptics have also been used in VR therapy. In one study, a physical spider toy replica was present in the virtual environment so that when the patient's virtual hand reached to touch the virtual spider, she could feel the furry texture of the toy spider @cite_51 . | {
"cite_N": [
"@cite_53",
"@cite_51"
],
"mid": [
"1489557789",
"2154968673"
],
"abstract": [
"One of the most disconcertingly unnatural properties of most virtual environments (VEs) is the ability of the user to pass through objects. I hypothesize that passive haptics, augmenting a high-fidelity visual virtual environment with low-fidelity physical objects, will markedly improve both sense of presence and spatial knowledge training transfer. The low-fidelity physical models can be constructed from cheap, easy-to-assemble materials such as styrofoam, plywood, and particle board. The first study investigated the effects of augmenting a visual-cliff environment with a slight physical ledge on participants' sense of presence. I found when participants experienced passive haptics in the VE, they exhibited significantly more behaviors associated with pit avoidance than when experiencing the non-augmented VE. Changes in heart rate and skin conductivity were significantly higher than when they experienced the VE without passive haptics. The second study investigated passive haptics' effects on performance of a real-world navigation task after training in a virtual environment. Half of the participants trained on maze navigation in a VE augmented with a styrofoam physical model, while half trained in a non-augmented VE but were given visual and audio contact cues. The task was to gain as much information as possible about the layout of the environment. Participants knew before the VE session that their training would be tested by navigating an identical real maze environment while blindfolded. Significant differences in the time to complete the blindfolded navigation task and significant differences in the number of collisions with objects were found between the participants trained in an augmented VE and the participants trained in a non-augmented VE. 
11 of 15 participants trained without passive haptics bumped into the next-to-last obstacle encountered in the testing session and turned the wrong direction to navigate around it: only 2 of 15 participants trained with passive haptics made the same navigation error. On the other hand, the assessment of the participants' cognitive maps of the virtual environment did not find significant differences between groups as measured by sketch maps and object dimension estimation.",
"Abstract This is the first case report to demonstrate the efficacy of immersive computer-generated virtual reality (VR) and mixed reality (touching real objects which patients also saw in VR) for the treatment of spider phobia. The subject was a 37-yr-old female with severe and incapacitating fear of spiders. Twelve weekly 1-hr sessions were conducted over a 3-month period. Outcome was assessed on measures of anxiety, avoidance, and changes in behavior toward real spiders. VR graded exposure therapy was successful for reducing fear of spiders, providing converging evidence for a growing literature showing the effectiveness of VR as a new medium for exposure therapy."
]
} |
1512.02922 | 2292623060 | MetaSpace II (MS2) is a social Virtual Reality (VR) system where multiple users can not only see and hear but also interact with each other, grasp and manipulate objects, walk around in space, and get tactile feedback. MS2 allows walking in physical space by tracking each user’s skeleton in realtime and allows users to feel by employing passive haptics i.e., when users touch or manipulate an object in the virtual world, they simultaneously also touch or manipulate a corresponding object in the physical world. To enable these elements in VR, MS2 creates a correspondence in spatial layout and object placement by building the virtual world on top of a 3D scan of the real world. Through the association between the real and virtual world, users are able to walk freely while wearing a head-mounted device, avoid obstacles like walls and furniture, and interact with people and objects. Most current virtual reality (VR) environments are designed for a single user experience where interactions with virtual objects are mediated by hand-held input devices or hand gestures. Additionally, users are only shown a representation of their hands in VR floating in front of the camera as seen from a first person perspective. We believe, representing each user as a fullbody avatar that is controlled by natural movements of the person in the real world (see Figure 1d), can greatly enhance believability and a user’s sense immersion in VR. | In the real world, people have a physical presence in their spatial environment. We are surrounded by objects and landscape that facilitate our sense of orientation and spatial understanding @cite_6 . For example, the horizon gives us a sense of directional information, occlusions give us relative distance cues while atmospheric color, fog, lighting, and shadow provide depth cues @cite_9 . 
People also develop skills to manipulate objects in their environment, such as picking up, positioning, altering, and arranging objects @cite_6 . Our VR system uses these skills along with people's innate sense of motion and proprioception for interacting naturally with the virtual world, grasping and manipulating objects in both the real and virtual environments. | {
"cite_N": [
"@cite_9",
"@cite_6"
],
"mid": [
"1561256858",
"1997447598"
],
"abstract": [
"Foreword. Preface. I. FOUNDATIONS OF 3D USER INTERFACES . 1. Introduction to 3D User Interfaces. What Are 3D User Interfaces? Why 3D User Interfaces? Terminology. Application Areas. Conclusion. 2. 3D User Interfaces: History and Roadmap. History of 3D UIs. Roadmap to 3D UIs. Scope of This Book. Conclusion. II. Hardware Technologies for 3D User Interfaces. 3. 3D User Interface Output Hardware. Introduction. Visual Displays. Auditory Displays. Haptic Displays. Design Guidelines: Choosing Output Devices for 3D User Interfaces. Conclusion. 4. 3D User Interface Input Hardware. Introduction. Desktop Input Devices. Tracking Devices. 3D Mice. Special-Purpose Input Devices. Direct Human Input. Home-Brewed Input Devices. Choosing Input Devices for 3D Interfaces. III. 3D INTERACTION TECHNIQUES. 5. Selection and Manipulation. Introduction. 3D Manipulation Tasks. Manipulation Techniques and Input Devices. Interaction Techniques for 3D Manipulation. Design Guidelines. 6. Travel. Introduction. 3D Travel Tasks. Travel Techniques. Design Guidelines. 7. Wayfinding. Introduction. Theoretical Foundations. User-Centered Wayfinding Support. Environment-Centered Wayfinding Support. Evaluating Wayfinding Aids. Design Guidelines. Conclusion. 8. System Control. Introduction. Classification. Graphical Menus. Voice Commands. Gestural Commands. Tools. Multimodal System Control Techniques. Design Guidelines. Case Study: Mixing System Control Methods. 8.10. Conclusion. 9. Symbolic Input. Introduction. Symbolic Input Tasks. Symbolic Input Techniques. Design Guidelines. Beyond Text and Number Entry. IV. DESIGNING AND DEVELOPING 3D USER INTERFACES. 10. Strategies for Designing and Developing 3D User Interfaces. Introduction. Designing for Humans. Inventing 3D User Interfaces. Design Guidelines. 11. Evaluation of 3D User Interfaces. Introduction. Background. Evaluation Metrics for 3D Interfaces. Distinctive Characteristics of 3D Interface Evaluation. Classification of 3D Evaluation Methods. 
Two Multimethod Approaches. Guidelines for 3D Interface Evaluation. V. THE FUTURE OF 3D USER INTERFACES. 12. Beyond Virtual: 3D User Interfaces for the Real World. Introduction. AR Interfaces as 3D Data Browsers. 3D Augmented Reality Interfaces. Augmented Surfaces and Tangible Interfaces. Tangible AR Interfaces. Agents in AR. Transitional AR-VR Interfaces. Conclusion. 13. The Future of 3D User Interfaces. Questions about 3D UI Technology. Questions about 3D Interaction Techniques. Questions about 3D UI Design and Development. Questions about 3D UI Evaluation. Million-Dollar Questions. Appendix A: Quick Reference Guide to 3D User Interface Mathematics. Scalars. Vectors. Points. Matrices. Quaternions. Bibliography. Index.",
"We are in the midst of an explosion of emerging human-computer interaction techniques that redefine our understanding of both computers and interaction. We propose the notion of Reality-Based Interaction (RBI) as a unifying concept that ties together a large subset of these emerging interaction styles. Based on this concept of RBI, we provide a framework that can be used to understand, compare, and relate current paths of recent HCI research as well as to analyze specific interaction designs. We believe that viewing interaction through the lens of RBI provides insights for design and uncovers gaps or opportunities for future research."
]
} |
1512.02922 | 2292623060 | MetaSpace II (MS2) is a social Virtual Reality (VR) system where multiple users can not only see and hear but also interact with each other, grasp and manipulate objects, walk around in space, and get tactile feedback. MS2 allows walking in physical space by tracking each user’s skeleton in realtime and allows users to feel by employing passive haptics i.e., when users touch or manipulate an object in the virtual world, they simultaneously also touch or manipulate a corresponding object in the physical world. To enable these elements in VR, MS2 creates a correspondence in spatial layout and object placement by building the virtual world on top of a 3D scan of the real world. Through the association between the real and virtual world, users are able to walk freely while wearing a head-mounted device, avoid obstacles like walls and furniture, and interact with people and objects. Most current virtual reality (VR) environments are designed for a single user experience where interactions with virtual objects are mediated by hand-held input devices or hand gestures. Additionally, users are only shown a representation of their hands in VR floating in front of the camera as seen from a first person perspective. We believe, representing each user as a fullbody avatar that is controlled by natural movements of the person in the real world (see Figure 1d), can greatly enhance believability and a user’s sense immersion in VR. | As humans, we are generally aware of the presence of others and since childhood are taught skills for social interaction. These include verbal and non-verbal communication, the ability to exchange physical objects, and the ability to work with others to collaborate on a task @cite_6 . Our system uses social awareness and skills by representing users's presence as full-body avatars and by making the avatars' actions visible. 
Additionally, the environment allows multiple co-located users to interact with each other in both the real and virtual worlds and to collaborate on tasks. | {
"cite_N": [
"@cite_6"
],
"mid": [
"1997447598"
],
"abstract": [
"We are in the midst of an explosion of emerging human-computer interaction techniques that redefine our understanding of both computers and interaction. We propose the notion of Reality-Based Interaction (RBI) as a unifying concept that ties together a large subset of these emerging interaction styles. Based on this concept of RBI, we provide a framework that can be used to understand, compare, and relate current paths of recent HCI research as well as to analyze specific interaction designs. We believe that viewing interaction through the lens of RBI provides insights for design and uncovers gaps or opportunities for future research."
]
} |
1512.02922 | 2292623060 | MetaSpace II (MS2) is a social Virtual Reality (VR) system where multiple users can not only see and hear but also interact with each other, grasp and manipulate objects, walk around in space, and get tactile feedback. MS2 allows walking in physical space by tracking each user’s skeleton in realtime and allows users to feel by employing passive haptics i.e., when users touch or manipulate an object in the virtual world, they simultaneously also touch or manipulate a corresponding object in the physical world. To enable these elements in VR, MS2 creates a correspondence in spatial layout and object placement by building the virtual world on top of a 3D scan of the real world. Through the association between the real and virtual world, users are able to walk freely while wearing a head-mounted device, avoid obstacles like walls and furniture, and interact with people and objects. Most current virtual reality (VR) environments are designed for a single user experience where interactions with virtual objects are mediated by hand-held input devices or hand gestures. Additionally, users are only shown a representation of their hands in VR floating in front of the camera as seen from a first person perspective. We believe, representing each user as a fullbody avatar that is controlled by natural movements of the person in the real world (see Figure 1d), can greatly enhance believability and a user’s sense immersion in VR. 
| Two primary approaches to the problem of mapping small physical spaces to large virtual spaces for navigation and maneuvering have been explored in research: (1) Locomotion: walking in place (natural @cite_0 , powered shoes or other devices @cite_45 @cite_50 @cite_40 ), walking in space (natural @cite_54 , mechanical setups @cite_3 , redirection techniques @cite_29 @cite_19 ), and gestural walking @cite_13 ; and (2) Abstractions or metaphors @cite_3 : miniaturized worlds, flying, driving, bicycling, teleporting, and virtual arm growing @cite_24 . | {
"cite_N": [
"@cite_54",
"@cite_29",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_40",
"@cite_45",
"@cite_24",
"@cite_50",
"@cite_13"
],
"mid": [
"1982131895",
"2066727705",
"1618163275",
"",
"2155301351",
"2004206016",
"",
"2042751882",
"",
"2138456729"
],
"abstract": [
"Our HiBall Tracking System generates over 2000 head-pose estimates per second with less than one millisecond of latency, and less than 0.5 millimeters and 0.02 degrees of position and orientation noise, everywhere in a 4.5 by 8.5 meter room. The system is remarkably responsive and robust, enabling VR applications and experiments that previously would have been difficult or even impossible. Previously we published descriptions of only the Kalman filter-based software approach that we call Single-Constraint-at-a-Time tracking. In this paper we describe the complete tracking system, including the novel optical, mechanical, electrical, and algorithmic aspects that enable the unparalleled performance.",
"Virtual Environments presented through head-mounted displays (HMDs) are often explored on foot. Exploration on foot is useful since the afferent and efferent cues of physical locomotion aid spatial awareness. However, the size of the virtual environment that can be explored on foot is limited to the dimensions of the tracking space of the HMD unless other strategies are used. This paper presents a system for exploring a large virtual environment on foot when the size of the physical surroundings is small by leveraging people's natural ability to spatially update. This paper presents three methods of \"resetting\" users when they reach the physical limits of the HMD tracking system. Resetting involves manipulating the users' location in physical space to move them out of the path of the physical obstruction while maintaining their spatial awareness of the virtual space.",
"A Complete Toolbox of Theories and Techniques The second edition of a bestseller, Handbook of Virtual Environments: Design, Implementation, and Applications presents systematic and extensive coverage of the primary areas of research and development within VE technology. It brings together a comprehensive set of contributed articles that address the principles required to define system requirements and design, build, evaluate, implement, and manage the effective use of VE applications. The contributors provide critical insights and principles associated with their given areas of expertise to provide extensive scope and detail on VE technology and its applications. Whats New in the Second Edition: Updated glossary of terms to promote common language throughout the community New chapters on olfactory perception, avatar control, motion sickness, and display design, as well as a whole host of new application areas Updated information to reflect the tremendous progress made over the last decade in applying VE technology to a growing number of domains This second edition includes nine new, as well as forty-one updated chapters that reflect the progress made in basic and applied research related to the creation, application, and evaluation of virtual environments. Contributions from leading researchers and practitioners from multidisciplinary domains provide a wealth of theoretical and practical information, resulting in a complete toolbox of theories and techniques that you can rely on to develop more captivating and effective virtual worlds. The handbook supplies a valuable resource for advancing VE applications as you take them from the laboratory to the real-world lives of people everywhere.",
"",
"Traveling through immersive virtual environments (IVEs) by means of real walking is an important activity to increase naturalness of VR-based interaction. However, the size of the virtual world often exceeds the size of the tracked space so that a straightforward implementation of omni-directional and unlimited walking is not possible. Redirected walking is one concept to solve this problem of walking in IVEs by inconspicuously guiding the user on a physical path that may differ from the path the user visually perceives. When the user approaches a virtual object she can be redirected to a real proxy object that is registered to the virtual counterpart and provides passive haptic feedback. In such passive haptic environments, any number of virtual objects can be mapped to proxy objects having similar haptic properties, e.g., size, shape and texture. The user can sense a virtual object by touching its real world counterpart. Redirecting a user to a registered proxy object makes it necessary to predict the user's intended position in the IVE. Based on this target position we determine a path through the physical space such that the user is guided to the registered proxy object. We present a taxonomy of possible redirection techniques that enable user guidance such that inconsistencies between visual and proprioceptive stimuli are imperceptible. We describe how a user's target in the virtual world can be predicted reliably and how a corresponding real-world path to the registered proxy object can be derived.",
"The CirculaFloor locomotion interface's movable tiles employ a holonomic mechanism to achieve omnidirectional motion. Users can thus maintain their position while walking in a virtual environment. The CirculaFloor method exploits both the treadmill and footpad, creating an infinite omnidirectional surface using a set of movable tiles. The tiles provide a sufficient area for walking, and thus precision tracing of the foot position is not required. This method has the potential to create an uneven surface by mounting an up-and-down mechanism on each tile. This article is available with a short video documentary on CD-ROM.",
"",
"The Go-Go immersive interaction technique uses the metaphor of interactively growing the user’s arm and non-linear mapping for reaching and manipulating distant objects. Unlike others, our technique allows for seamless direct manipulation of both nearby objects and those at a distance.",
"",
"Walking-In-Place (WIP) techniques have potential in terms of solving the problem arising when an immersive virtual environment offers a larger freedom of movement than the physical environment. Such techniques are particularly useful when the spatial constraints are very prominent, as they are likely to be in relation to immersive gaming systems located in the homes of consumers. However, most existing WIP techniques rely on movement of the legs which may cause users, wearing a head mounted display, to unintentionally move. This paper details a within-subjects study performed with the intention of investigating how two alternative types of gestural input relying on arm and hip movements compare to the traditional WIP gesture and keyboard input. Visual feedback was delivered through a head-mounted display and auditory feedback was provided by means of a 16-channel surround sound system. The gestures were evaluated in terms of perceived naturalness, presence and real world positional drift. The results suggest that both WIP and arm swinging are perceived as significantly more natural than hip movement and the keyboard configuration. However, arm swinging better matched real walking in terms of energy expenditure and led to significantly less positional drift."
]
} |
1512.02922 | 2292623060 | MetaSpace II (MS2) is a social Virtual Reality (VR) system where multiple users can not only see and hear but also interact with each other, grasp and manipulate objects, walk around in space, and get tactile feedback. MS2 allows walking in physical space by tracking each user’s skeleton in realtime and allows users to feel by employing passive haptics i.e., when users touch or manipulate an object in the virtual world, they simultaneously also touch or manipulate a corresponding object in the physical world. To enable these elements in VR, MS2 creates a correspondence in spatial layout and object placement by building the virtual world on top of a 3D scan of the real world. Through the association between the real and virtual world, users are able to walk freely while wearing a head-mounted device, avoid obstacles like walls and furniture, and interact with people and objects. Most current virtual reality (VR) environments are designed for a single user experience where interactions with virtual objects are mediated by hand-held input devices or hand gestures. Additionally, users are only shown a representation of their hands in VR floating in front of the camera as seen from a first person perspective. We believe, representing each user as a fullbody avatar that is controlled by natural movements of the person in the real world (see Figure 1d), can greatly enhance believability and a user’s sense immersion in VR. | We build upon the idea of natural locomotion in space allowing users to explore the virtual world by walking in the physical world. Not only do we track the user's position, we also track their full body as represented by 25 joints. This allows us to transfer the user's bodily motions to the onscreen avatar where a step taken forward in the real world is visible as an animation of a step taken forward by their avatar in the virtual world. 
A common limitation of room-scale walking systems is the difference in size between the physical and virtual worlds. A few mechanisms to overcome that difference have been explored: for example, a system that imperceptibly rotates the virtual scene about the user @cite_41 @cite_42 , and a system that applies redirection to steer users away from physical boundaries as they explore virtual environments @cite_18 . We are currently working on our own solution to allow exploration of large virtual spaces while walking in cluttered physical environments, though that is not the focus of this paper. | {
"cite_N": [
"@cite_41",
"@cite_18",
"@cite_42"
],
"mid": [
"2083388465",
"2081752675",
""
],
"abstract": [
"Two large problems faced by virtual environment designers are lack of haptic feedback and constraints imposed by limited tracker space. Passive haptic feedback has been used effectively to provide a sense of touch to users (Insko, et al, 2001). Redirected walking is a promising solution to the problem of limited tracker space (Razzaque, et al, 2001). However, these solutions to these two problems are typically mutually exclusive because their requirements conflict with one another. We introduce a method by which they can be combined to address both problems simultaneously.",
"Over the past few years, virtual reality has experienced a resurgence. Fueled by a proliferation of consumer-level head-mounted display and motion tracking devices, an unprecedented quantity of immersive experiences and content are available for both desktop and mobile platforms. However, natural locomotion in immersive virtual environments remains a significant challenge. Many of the VR applications available to date require seated use or limit body movement within a small area, instead relying a gamepad or mouse keyboard for movement within the virtual world. Lacking support for natural walking, these virtual reality experiences do not fully replicate the physical and perceptual cues from the real world, and often fall short in maintaining the illusion that the user has been transported to another place. We present a virtual reality demonstration that supports infinite walking within a confined physical space. This is achieved using redirected walking, a class of techniques that introduce subtle discrepancies between physical and virtual motions [ 2001]. When employed properly, redirected walking can be stunningly effective. Previous research has made users believe that they walked in a straight line when they actually traveled in a wide circle, or that they walked between waypoints in a long virtual hallway when in fact they went back and forth between the same two points in the real world. While perceptually compelling, redirected walking is challenging to employ effectively in an unconstrained scenario because users' movements may often be unpredictable. Therefore, our recent research has focused on dynamic planning and optimization of redirected walking techniques, enabling the system to intelligently apply redirection as users explore virtual environments of arbitrary size and shape [ 2014b] [ 2014a]. In this Emerging Technologies exhibit, attendees will explore a large-scale, outdoor immersive virtual environment in a head-mounted display (see Figure 1). 
The demonstration will support natural walking within a physical area of at least 6x6m, using a wide-area motion tracking system provided by PhaseSpace Inc. The virtual reality scenario will instruct users to scout the environment while stopping to take panoramic photos at various locations in the virtual world. As users explore the environment, our automated planning algorithm will dynamically apply redirection to optimally steer them away from the physical boundaries of the exhibit, thus enabling the experience of limitless walking in a potentially infinite virtual world (see Figure 2).",
""
]
} |
1512.02922 | 2292623060 | MetaSpace II (MS2) is a social Virtual Reality (VR) system where multiple users can not only see and hear but also interact with each other, grasp and manipulate objects, walk around in space, and get tactile feedback. MS2 allows walking in physical space by tracking each user’s skeleton in realtime and allows users to feel by employing passive haptics i.e., when users touch or manipulate an object in the virtual world, they simultaneously also touch or manipulate a corresponding object in the physical world. To enable these elements in VR, MS2 creates a correspondence in spatial layout and object placement by building the virtual world on top of a 3D scan of the real world. Through the association between the real and virtual world, users are able to walk freely while wearing a head-mounted device, avoid obstacles like walls and furniture, and interact with people and objects. Most current virtual reality (VR) environments are designed for a single user experience where interactions with virtual objects are mediated by hand-held input devices or hand gestures. Additionally, users are only shown a representation of their hands in VR floating in front of the camera as seen from a first person perspective. We believe, representing each user as a fullbody avatar that is controlled by natural movements of the person in the real world (see Figure 1d), can greatly enhance believability and a user’s sense immersion in VR. | Though RGB-D sensing devices have been custom-built for years, it is the computer gaming and home entertainment applications that have made them available for research outside specialized computer vision groups. The quality of the depth sensing, given the low-cost and real-time nature of devices like the Kinect, is compelling, and has made the sensor popular with researchers and enthusiasts alike. 
Available 3D reconstruction systems like KinectFusion @cite_38 enable a user holding and moving a standard Kinect camera to rapidly create detailed 3D reconstructions of an indoor scene. Reconstructing geometry using active sensors @cite_46 , passive cameras @cite_28 @cite_17 , online images @cite_16 , or unordered 3D points @cite_47 is a well-studied area of research in computer graphics and vision. There is also extensive literature within the AR and robotics community on Simultaneous Localization and Mapping (SLAM), aimed at tracking a user or robot while creating a map of the surrounding physical environment @cite_52 . Given this broad topic, and our need for building a VR environment that maps 1:1 to the physical environment, we used an existing reconstruction algorithm for 3D scanning. | {
"cite_N": [
"@cite_38",
"@cite_47",
"@cite_28",
"@cite_52",
"@cite_46",
"@cite_16",
"@cite_17"
],
"mid": [
"2099940712",
"2008073424",
"2033819227",
"",
"1986049249",
"2099443716",
"2171056981"
],
"abstract": [
"KinectFusion enables a user holding and moving a standard Kinect camera to rapidly create detailed 3D reconstructions of an indoor scene. Only the depth data from Kinect is used to track the 3D pose of the sensor and reconstruct, geometrically precise, 3D models of the physical scene in real-time. The capabilities of KinectFusion, as well as the novel GPU-based pipeline are described in full. Uses of the core system for low-cost handheld scanning, and geometry-aware augmented reality and physics-based interactions are shown. Novel extensions to the core GPU pipeline demonstrate object segmentation and user interaction directly in front of the sensor, without degrading camera tracking or reconstruction. These extensions are used to enable real-time multi-touch interactions anywhere, allowing any planar or non-planar reconstructed physical surface to be appropriated for touch.",
"We show that surface reconstruction from oriented points can be cast as a spatial Poisson problem. This Poisson formulation considers all the points at once, without resorting to heuristic spatial partitioning or blending, and is therefore highly resilient to data noise. Unlike radial basis function schemes, our Poisson approach allows a hierarchy of locally supported basis functions, and therefore the solution reduces to a well conditioned sparse linear system. We describe a spatially adaptive multiscale algorithm whose time and space complexities are proportional to the size of the reconstructed model. Experimenting with publicly available scan data, we demonstrate reconstruction of surfaces with greater detail than previously achievable.",
"From the Publisher: A basic problem in computer vision is to understand the structure of a real world scene given several images of it. Recent major developments in the theory and practice of scene reconstruction are described in detail in a unified framework. The book covers the geometric principles and how to represent objects algebraically so they can be computed and applied. The authors provide comprehensive background material and explain how to apply the methods and implement the algorithms directly.",
"",
"We describe a hardware and software system for digitizing the shape and color of large fragile objects under non-laboratory conditions. Our system employs laser triangulation rangefinders, laser time-of-flight rangefinders, digital still cameras, and a suite of software for acquiring, aligning, merging, and viewing scanned data. As a demonstration of this system, we digitized 10 statues by Michelangelo, including the well-known figure of David, two building interiors, and all 1,163 extant fragments of the Forma Urbis Romae, a giant marble map of ancient Rome. Our largest single dataset is of the David - 2 billion polygons and 7,000 color images. In this paper, we discuss the challenges we faced in building this system, the solutions we employed, and the lessons we learned. We focus in particular on the unusual design of our laser triangulation scanner and on the algorithms and software we developed for handling very large scanned models.",
"This paper introduces an approach for dense 3D reconstruction from unregistered Internet-scale photo collections with about 3 million images within the span of a day on a single PC (\"cloudless\"). Our method advances image clustering, stereo, stereo fusion and structure from motion to achieve high computational performance. We leverage geometric and appearance constraints to obtain a highly parallel implementation on modern graphics processors and multi-core architectures. This leads to two orders of magnitude higher performance on an order of magnitude larger dataset than competing state-of-the-art approaches.",
"We present a viewpoint-based approach for the quick fusion of multiple stereo depth maps. Our method selects depth estimates for each pixel that minimize violations of visibility constraints and thus remove errors and inconsistencies from the depth maps to produce a consistent surface. We advocate a two-stage process in which the first stage generates potentially noisy, overlapping depth maps from a set of calibrated images and the second stage fuses these depth maps to obtain an integrated surface with higher accuracy, suppressed noise, and reduced redundancy. We show that by dividing the processing into two stages we are able to achieve a very high throughput because we are able to use a computationally cheap stereo algorithm and because this architecture is amenable to hardware-accelerated (GPU) implementations. A rigorous formulation based on the notion of stability of a depth estimate is presented first. It aims to determine the validity of a depth estimate by rendering multiple depth maps into the reference view as well as rendering the reference depth map into the other views in order to detect occlusions and free- space violations. We also present an approximate alternative formulation that selects and validates only one hypothesis based on confidence. Both formulations enable us to perform video-based reconstruction at up to 25 frames per second. We show results on the multi-view stereo evaluation benchmark datasets and several outdoors video sequences. Extensive quantitative analysis is performed using an accurately surveyed model of a real building as ground truth."
]
} |
1512.03022 | 2286231031 | @math rounds has been a well known upper bound for rumor spreading using push&pull in the random phone call model (i.e., uniform gossip in the complete graph). A matching lower bound of @math is also known for this special case. Under the assumption of this model and with a natural addition that nodes can call a partner once they learn its address (e.g., its IP address) we present a new distributed, address-oblivious and robust algorithm that uses push&pull with pointer jumping to spread a rumor to all nodes in only @math rounds, w.h.p. This algorithm can also cope with @math node failures, in which case all but @math nodes become informed within @math rounds, w.h.p. | Besides the basic random phone call model, gossip algorithms and rumor spreading were generalized in several different ways. The basic extension was to study (i.e., the called partner is selected uniformly at random from the neighbor lists) on graphs other than the clique. Feige et al. @cite_25 studied randomized broadcast in networks and extended the result of @math rounds to different types of graphs like hypercubes and random graph models. Following the work of @cite_31 , and in particular in recent years, the protocol was studied intensively, both to give tight bounds for general graphs and to understand its performance advantages on specific families of graphs. A lower bound of @math for uniform gossip on the clique can be concluded from @cite_24 , which studies the sequential case. We are not aware of a lower bound for general, address-oblivious . | {
"cite_N": [
"@cite_24",
"@cite_31",
"@cite_25"
],
"mid": [
"2149345630",
"2157004711",
"2047784567"
],
"abstract": [
"In this paper, we consider a popular randomized broadcasting algorithm called push-algorithm defined as follows. Initially, one vertex of a graph G=(V,E) owns a piece of information which is spread iteratively to all other vertices: in each timestep t=1,2,… every informed vertex chooses a neighbor uniformly at random and informs it. The question is how many time steps are required until all vertices become informed (with high probability). For various graph classes, involved methods have been developed in order to show an upper bound of @math on the runtime of the push-algorithm, where N is the number of vertices and @math denotes the diameter of G. However, no asymptotically tight bound on the runtime based on the mixing time of random walks has been established. In this work we fill this gap by deriving an upper bound of @math , where @math denotes the mixing time of a certain random walk on G. After that we prove upper bounds that are based on certain edge expansion properties of G. However, for hypercubes neither the bound based on the mixing time nor the bounds based on edge expansion properties are tight. That is why we develop a general way to combine these two approaches by which we can deduce that the runtime of the push-algorithm is Θ(log N) on every Hamming graph.",
"Investigates the class of epidemic algorithms that are commonly used for the lazy transmission of updates to distributed copies of a database. These algorithms use a simple randomized communication mechanism to ensure robustness. Suppose n players communicate in parallel rounds in each of which every player calls a randomly selected communication partner. In every round, players can generate rumors (updates) that are to be distributed among all players. Whenever communication is established between two players, each one must decide which of the rumors to transmit. The major problem is that players might not know which rumors their partners have already received. For example, a standard algorithm forwarding each rumor from the calling to the called players for Θ(ln n) rounds needs to transmit the rumor Θ(n ln n) times in order to ensure that every player finally receives the rumor with high probability. We investigate whether such a large communication overhead is inherent to epidemic algorithms. On the positive side, we show that the communication overhead can be reduced significantly. We give an algorithm using only O(n ln ln n) transmissions and O(ln n) rounds. In addition, we prove the robustness of this algorithm. On the negative side, we show that any address-oblivious algorithm needs to send Ω(n ln ln n) messages for each rumor, regardless of the number of rounds. Furthermore, we give a general lower bound showing that time and communication optimality cannot be achieved simultaneously using random phone calls, i.e., every algorithm that distributes a rumor in O(ln n) rounds needs ω(n) transmissions.",
"In this paper we study the rate at which a rumor spreads through an undirected graph. This study has two important applications in distributed computation: in simple, robust and efficient broadcast protocols, and in the maintenance of replicated databases."
]
} |
1512.03022 | 2286231031 | @math rounds has been a well known upper bound for rumor spreading using push&pull in the random phone call model (i.e., uniform gossip in the complete graph). A matching lower bound of @math is also known for this special case. Under the assumption of this model and with a natural addition that nodes can call a partner once they learn its address (e.g., its IP address) we present a new distributed, address-oblivious and robust algorithm that uses push&pull with pointer jumping to spread a rumor to all nodes in only @math rounds, w.h.p. This algorithm can also cope with @math node failures, in which case all but @math nodes become informed within @math rounds, w.h.p. | Another line of research was to study (as well as and separately) but not under the uniform gossip model. Censor- @cite_12 gave an algorithm for all-to-all dissemination in arbitrary graphs that eliminates the dependency on the conductance. For unlimited message sizes (essentially, a node can send everything it knows), their randomized algorithm informs all nodes in @math rounds, where @math is the graph diameter; clearly this is tight for many graphs. Quasirandom rumor spreading was first offered in @cite_4 @cite_27 and shown to outperform the randomized algorithms in some cases (see also @cite_32 for a study of the message complexity of quasirandom rumor spreading). Most recently, Haeupler @cite_29 proposed a completely deterministic algorithm that spreads a rumor in @math rounds (but it also requires unlimited message sizes). | {
"cite_N": [
"@cite_4",
"@cite_29",
"@cite_32",
"@cite_27",
"@cite_12"
],
"mid": [
"2144828388",
"",
"1910512278",
"1771950282",
"2949310786"
],
"abstract": [
"We propose and analyse a quasirandom analogue to the classical push model for disseminating information in networks (\"randomized rumor spreading\"). In the classical model, in each round each informed node chooses a neighbor at random and informs it. Results of Frieze and Grimmett (Discrete Appl. Math. 1985) show that this simple protocol succeeds in spreading a rumor from one node of a complete graph to all others within O(log n) rounds. For the network being a hypercube or a random graph G(n, p) with p ≥ (1 +e)(log n) n, also O(log n) rounds suffice (Feige, Peleg. Raghavan, and Upfal, Random Struct. Algorithms 1990). In the quasirandom model, we assume that each node has a (cyclic) list of its neighbors. Once informed, it starts at a random position of the list, but from then on informs its neighbors in the order of the list. Surprisingly, irrespective of the orders of the lists, the above mentioned bounds still hold. In addition, we also show a O(log n) bound for sparsely connected random graphs G(n, p) with p = (log n + f(n)) n, where f(n) → ∞ and f(n) = O(log log n). Here, the classical model needs Θ(log2(n)) rounds. Hence the quasirandom model achieves similar or better broadcasting times with a greatly reduced use of random bits.",
"",
"We consider rumor spreading on random graphs and hypercubes in the quasirandom phone call model. In this model, every node has a list of neighbors whose order is specified by an adversary. In step i every node opens a channel to its ith neighbor (modulo degree) on that list, beginning from a randomly chosen starting position. Then, the channels can be used for bi-directional communication in that step. The goal is to spread a message efficiently to all nodes of the graph. We show three results. For random graphs (with sufficiently many edges) we present an address-oblivious algorithm with runtime O(log n) that uses at most O(n log log n) message transmissions. For hypercubes of dimension log n we present an address-oblivious algorithm with runtime O(log n) that uses at most O(n(log log n)2) message transmissions. For hypercubes we also show a lower bound of Ω(n log n log log n) on the total number of message transmissions required by any O(log n) time address-oblivious algorithm in the standard random phone call model. Together with a result of [8], our results imply that for random graphs and hypercubes the communication complexity of the quasirandom phone call model is significantly smaller than that of the standard phone call model. This seems to be surprising given the small amount of randomness used in our model.",
"Randomized rumor spreading is an efficient protocol to distribute information in networks. Recently, a quasirandom version has been proposed and proven to work equally well on many graphs and better for sparse random graphs. In this work we show three main results for the quasirandom rumor spreading model. We exhibit a natural expansion property for networks which suffices to make quasirandom rumor spreading inform all nodes of the network in logarithmic time with high probability. This expansion property is satisfied, among others, by many expander graphs, random regular graphs, and Erdős-Renyi random graphs. For all network topologies, we show that if one of the push or pull model works well, so does the other. We also show that quasirandom rumor spreading is robust against transmission failures. If each message sent out gets lost with probability f , then the runtime increases only by a factor of @math .",
"In this paper, we study the question of how efficiently a collection of interconnected nodes can perform a global computation in the widely studied GOSSIP model of communication. In this model, nodes do not know the global topology of the network, and they may only initiate contact with a single neighbor in each round. This model contrasts with the much less restrictive LOCAL model, where a node may simultaneously communicate with all of its neighbors in a single round. A basic question in this setting is how many rounds of communication are required for the information dissemination problem, in which each node has some piece of information and is required to collect all others. In this paper, we give an algorithm that solves the information dissemination problem in at most @math rounds in a network of diameter @math , withno dependence on the conductance. This is at most an additive polylogarithmic factor from the trivial lower bound of @math , which applies even in the LOCAL model. In fact, we prove that something stronger is true: any algorithm that requires @math rounds in the LOCAL model can be simulated in @math rounds in the GOSSIP model. We thus prove that these two models of distributed computation are essentially equivalent."
]
} |
1512.03022 | 2286231031 | @math rounds has been a well known upper bound for rumor spreading using push&pull in the random phone call model (i.e., uniform gossip in the complete graph). A matching lower bound of @math is also known for this special case. Under the assumption of this model and with a natural addition that nodes can call a partner once they learn its address (e.g., its IP address) we present a new distributed, address-oblivious and robust algorithm that uses push&pull with pointer jumping to spread a rumor to all nodes in only @math rounds, w.h.p. This algorithm can also cope with @math node failures, in which case all but @math nodes become informed within @math rounds, w.h.p. | The idea of first building a virtual structure (i.e., topology control) and then running gossip on top of this structure is not novel; a similar idea was presented by Melamed and Keidar @cite_1 . Another source of influence on our work was the work on pointer jumping in the context of efficient construction of peer-to-peer networks @cite_17 and on computing minimum spanning trees @cite_15 . | {
"cite_N": [
"@cite_15",
"@cite_1",
"@cite_17"
],
"mid": [
"1987099434",
"1525949920",
"2047941103"
],
"abstract": [
"We consider a simple model for overlay networks, where all n processes are connected to all other processes, and each message contains at most O(log n) bits. For this model, we present a distributed algorithm which constructs a minimum-weight spanning tree in O(log log n) communication rounds, where in each round any process can send a message to every other process. If message size is @math for some @math , then the number of communication rounds is @math .",
"We present Araneola, a scalable reliable application-level multicast system for highly dynamic wide-area environments. Araneola supports multi-point to multi-point reliable communication in a fully distributed manner while incurring constant load on each node. For a tunable parameter k spl ges 3, Araneola constructs and dynamically maintains an overlay structure in which each node's degree is either k or k + 1, and roughly 90 of the nodes have degree k. Empirical evaluation shows that Araneola's overlay structure achieves three important mathematical properties of k-regular random graphs (i.e., random graphs in which each node has exactly k neighbors) with N nodes: (i) its diameter grows logarithmically with N; (ii) it is generally k-connected; and (iii) it remains highly connected following random removal of linear-size subsets of edges or nodes. The overlay is constructed at a very low cost: each join, leave, or failure is handled locally, and entails the sending of only about 3k messages in total. Given this overlay, Araneola disseminates multicast messages by gossiping over the overlay's links. We show that compared to a standard gossip-based multicast protocol, Araneola achieves substantial improvements in load, reliability, and latency. Finally, we present an extension to Araneola in which the basic overlay is enhanced with additional links chosen according to geographic proximity and available bandwidth. We show that this approach reduces the number of physical hops messages traverse without hurting the overlay's robustness.",
"We present a local random graph transformation for weakly connected multi-digraphs with regular out-degree which produces every such graph with equal probability. This operation, called Pointer-Push&Pull, changes only two neighboring edges. Such an operation is highly desirable for a peerto-peer network to establish and maintain well connected expander graphs as reliable and robust network backbone. The Pointer-Push&Pull operation can be used in parallel without central coordination and each operation involves only two peers which have to exchange two messages, each carrying the information of one edge only.We show that a series of random Pointer-Push&Pull operations eventually leads to a uniform probability distribution over all weakly connected out-regular multi-digraphs. Depending on the probabilities used in the operation this uniform probability distribution either refers to the set of all weakly connected out-regular multi-digraphs or to the set of all weakly connected out-regular edge-labeled multidigraphs. In multi-digraphs multiple edges or self-loops may occur. In an out-regular digraph each node has the same number of outgoing edges.For this, we investigate the Markov-Process defined by the Pointer-Push&Pull operation over the set of all weakly connected multi-digraphs. We show that a Pointer-Push&Pull operation -- although preserving weak connectivity only -- can reach every weakly connected multi-digraph. The main argument follows from the symmetry of the Markov-Process described by the Pointer-Push&Pull operation over the set of all weakly connected out-regular multi-digraphs."
]
} |
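The rows above all concern the same paper on push&pull rumor spreading in the random phone call model. As an illustration of the baseline protocol those rows discuss — uniform gossip on the complete graph, where roughly log-many synchronous rounds suffice — here is a minimal simulation sketch. The function name and parameters are illustrative only, not taken from any cited paper.

```python
import math
import random

def push_pull_rounds(n, seed=0):
    """Simulate synchronous push&pull rumor spreading on the complete graph.

    In each round, every node calls a uniformly random other node; the rumor
    crosses an open connection in both directions (push and pull). Returns the
    number of rounds until all n nodes are informed.
    """
    rng = random.Random(seed)
    informed = [False] * n
    informed[0] = True  # node 0 starts with the rumor
    count, rounds = 1, 0
    while count < n:
        rounds += 1
        newly = []
        for u in range(n):
            v = rng.randrange(n - 1)
            if v >= u:
                v += 1  # uniform random partner other than u itself
            # push: an informed caller informs its callee;
            # pull: an uninformed caller learns the rumor from an informed callee
            if informed[u] != informed[v]:
                newly.append(v if informed[u] else u)
        for w in newly:  # apply updates only at the end of the round (synchronous model)
            if not informed[w]:
                informed[w] = True
                count += 1
    return rounds

if __name__ == "__main__":
    n = 1024
    print(n, push_pull_rounds(n), math.log(n, 3))
```

Running this for growing n shows the round count tracking log n closely, consistent with the upper and lower bounds the related-work passages describe for uniform gossip on the clique.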