aid: string (9-15 chars)
mid: string (7-10 chars)
abstract: string (78-2.56k chars)
related_work: string (92-1.77k chars)
ref_abstract: dict
1604.01529
2338144305
Committee scoring rules form a rich class of aggregators of voters' preferences for the purpose of selecting subsets of objects with desired properties, e.g., a shortlist of candidates for an interview, a representative collective body such as a parliament, or a set of locations for a set of public facilities. In the spirit of Young's celebrated characterization result axiomatizing single-winner scoring rules, we provide an axiomatic characterization of multiwinner committee scoring rules. We show that committee scoring rules---despite forming a remarkably general class of rules---are characterized by four standard axioms (anonymity, neutrality, consistency, and continuity) and by one axiom specific to multiwinner rules, which we call committee dominance. In the course of our proof, we develop several new notions and techniques. In particular, we introduce and axiomatically characterize multiwinner decision scoring rules, a class of rules that broadly generalizes the well-known majority relation.
@cite_7 has studied a family of multiwinner rules that are based on utility values of the alternatives instead of preference orders, where these utilities are aggregated using the ordered weighted average (OWA) operators of Yager @cite_2. (The same class, but for approval-based utilities, first appeared in the early works of the Danish polymath Thorvald N. Thiele @cite_21 and was later studied by Forest Simmons and by Aziz et al. @cite_60 @cite_44; see the description in the overview of Kilgour @cite_0.) It is easy to express these OWA-based rules as committee scoring rules.
{ "cite_N": [ "@cite_7", "@cite_60", "@cite_21", "@cite_0", "@cite_44", "@cite_2" ], "mid": [ "2952061453", "2951884201", "", "967083285", "", "2060907774" ], "abstract": [ "We consider the following problem: There is a set of items (e.g., movies) and a group of agents (e.g., passengers on a plane); each agent has some intrinsic utility for each of the items. Our goal is to pick a set of @math items that maximize the total derived utility of all the agents (i.e., in our example we are to pick @math movies that we put on the plane's entertainment system). However, the actual utility that an agent derives from a given item is only a fraction of its intrinsic one, and this fraction depends on how the agent ranks the item among the chosen, available, ones. We provide a formal specification of the model and provide concrete examples and settings where it is applicable. We show that the problem is hard in general, but we show a number of tractability results for its natural special cases.", "We study computational aspects of three prominent voting rules that use approval ballots to elect multiple winners. These rules are satisfaction approval voting, proportional approval voting, and reweighted approval voting. We first show that computing the winner for proportional approval voting is NP-hard, closing a long standing open problem. As none of the rules are strategyproof, even for dichotomous preferences, we study various strategic aspects of the rules. In particular, we examine the computational complexity of computing a best response for both a single agent and a group of agents. In many settings, we show that it is NP-hard for an agent or agents to compute how best to vote given a fixed set of approval ballots from the other agents.", "", "Approval voting is a well-known voting procedure for single-winner elections. Voters approve of as many candidates as they like, and the candidate with the most approvals wins (Brams and Fishburn 1978, 1983, 2005). 
But Merrill and Nagel (1987) point out that there are many ways to aggregate approval votes to determine a winner, justifying a distinction between approval balloting, in which each voter submits a ballot that identifies the candidates the voter approves of, and approval voting, the procedure of ranking the candidates according to their total numbers of approvals.", "", "The author is primarily concerned with the problem of aggregating multicriteria to form an overall decision function. He introduces a type of operator for aggregation called an ordered weighted aggregation (OWA) operator and investigates the properties of this operator. The OWA's performance is found to be between those obtained using the AND operator, which requires all criteria to be satisfied, and the OR operator, which requires at least one criterion to be satisfied." ] }
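The OWA-based committee scoring rules mentioned in the related-work paragraph above are easy to sketch concretely. The following toy is our own illustration (the utilities, weight vector, and function names are invented, not the formulation of @cite_7): a committee's score sums, over voters, an ordered weighted average of each voter's utilities for the committee members.

```python
def owa(utilities, weights):
    """Ordered weighted average (Yager): sort the utilities in descending
    order, then take a weighted sum with a fixed, position-dependent
    weight vector."""
    ranked = sorted(utilities, reverse=True)
    return sum(w * u for w, u in zip(weights, ranked))

def committee_score(voter_utils, committee, weights):
    """Score of a committee: the sum over voters of the OWA of that
    voter's utilities for the committee members."""
    return sum(owa([utils[c] for c in committee], weights)
               for utils in voter_utils)

# Two voters with utilities over items 0..2; pick the best size-2 committee.
voters = [{0: 3, 1: 1, 2: 0}, {0: 0, 1: 2, 2: 3}]
candidates = [(0, 1), (0, 2), (1, 2)]
best = max(candidates, key=lambda c: committee_score(voters, c, [1, 0.5]))
```

With the decreasing weights [1, 0.5], each voter's favorite committee member counts most, so (0, 2) wins here: every voter receives at least one highly valued item. That trade-off is exactly what the OWA weight vector controls, and it is what makes these rules expressible as committee scoring rules.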
1604.01692
2341557172
A currently successful approach to computational semantics is to represent words as embeddings in a machine-learned vector space. We present an ensemble method that combines embeddings produced by GloVe (, 2014) and word2vec (, 2013) with structured knowledge from the semantic networks ConceptNet (Speer and Havasi, 2012) and PPDB (, 2013), merging their information into a common representation with a large, multilingual vocabulary. The embeddings it produces achieve state-of-the-art performance on many word-similarity evaluations. Its score of @math on an evaluation of rare words (, 2013) is 16% higher than the previous best known system.
AutoExtend @cite_8 is a system with similar methods to ours: it extends word2vec embeddings to cover all the word senses and synsets of WordNet by propagating information over edges, thus combining distributional and structured data after the fact. The primary goal of AutoExtend is word sense disambiguation, and as such it is optimized for and evaluated on WSD tasks. Our ensemble aims to extend and improve a vocabulary of undisambiguated words, so there is no direct comparison between AutoExtend's results and ours.
{ "cite_N": [ "@cite_8" ], "mid": [ "2125786288" ], "abstract": [ "We present AutoExtend, a system to learn embeddings for synsets and lexemes. It is flexible in that it can take any word embeddings as input and does not need an additional training corpus. The synset and lexeme embeddings obtained live in the same vector space as the word embeddings. A sparse tensor formalization guarantees efficiency and parallelizability. We use WordNet as a lexical resource, but AutoExtend can be easily applied to other resources like Freebase. AutoExtend achieves state-of-the-art performance on word similarity and word sense disambiguation tasks." ] }
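A minimal sketch of the "propagating information over edges" idea described above is given below. Note this is a retrofitting-style simplification (in the spirit of Faruqui et al.'s retrofitting), not AutoExtend's actual sparse-tensor formulation, and the toy vocabulary, graph, and blending factor are invented for illustration: each vector is repeatedly pulled toward the mean of its graph neighbors while staying anchored to its original distributional estimate.

```python
import numpy as np

def propagate(word_vecs, edges, n_iters=10, alpha=0.5):
    """Blend each word's distributional vector with the average of its
    neighbors in a semantic graph (e.g. WordNet or ConceptNet edges)."""
    vecs = {w: v.copy() for w, v in word_vecs.items()}
    for _ in range(n_iters):
        for w, neighbors in edges.items():
            nbr = [vecs[n] for n in neighbors if n in vecs]
            if nbr:
                vecs[w] = alpha * word_vecs[w] + (1 - alpha) * np.mean(nbr, axis=0)
    return vecs

word_vecs = {"cat":    np.array([1.0, 0.0]),
             "feline": np.array([0.8, 0.2]),
             "dog":    np.array([0.0, 1.0])}
edges = {"cat": ["feline"], "feline": ["cat"]}  # a single synonymy edge
out = propagate(word_vecs, edges)
```

After propagation, "cat" and "feline" have moved toward each other, while "dog", which has no edges, is untouched. This is the sense in which distributional and structured data are combined "after the fact".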
1604.01252
2342027869
In many image-related tasks, learning expressive and discriminative representations of images is essential, and deep learning has been studied for automating the learning of such representations. Some user-centric tasks, such as image recommendation, call for effective representations of not only images but also the preferences and intents of users over images. Such representations are termed hybrid and are addressed via a deep learning approach in this paper. We design a dual-net deep network, in which the two sub-networks map input images and user preferences into the same latent semantic space, and the distances between images and users in that space are then computed to make decisions. We further propose a comparative deep learning (CDL) method to train the deep network, using a pair of images compared against one user to learn the pattern of their relative distances. CDL embraces much more training data than naive deep learning and thus achieves superior performance, at no cost of increased network complexity. Experimental results on real-world data sets for image recommendation show that the proposed dual-net network and CDL greatly outperform other state-of-the-art image recommendation solutions.
In view of the limitations of hand-crafted image features such as those designed in @cite_3 @cite_26 @cite_9 @cite_14, more and more research focuses on designing effective deep learning models that extract image representations automatically @cite_28 @cite_25 @cite_11. Karpathy et al. @cite_11 propose a supervised hashing method with a deep learning architecture, followed by a stage of simultaneously learning the hash functions and image representations. Furthermore, it has been observed that middle-layer outputs of deep learning models can be seamlessly used as image representations, even though the network was not trained for that purpose @cite_13 @cite_15 @cite_10. For example, Krizhevsky et al. @cite_13 propose a deep learning architecture for image classification, and the outputs of its 7th fully connected layer have been verified to be fairly robust image representations.
{ "cite_N": [ "@cite_14", "@cite_26", "@cite_28", "@cite_10", "@cite_9", "@cite_3", "@cite_15", "@cite_13", "@cite_25", "@cite_11" ], "mid": [ "2027922120", "1912570122", "1946093182", "1849277567", "2124386111", "2161969291", "1950136256", "2163605009", "1915485278", "2293824885" ], "abstract": [ "The traditional SPM approach based on bag-of-features (BoF) requires nonlinear classifiers to achieve good image classification performance. This paper presents a simple but effective coding scheme called Locality-constrained Linear Coding (LLC) in place of the VQ coding in traditional SPM. LLC utilizes the locality constraints to project each descriptor into its local-coordinate system, and the projected coordinates are integrated by max pooling to generate the final representation. With linear classifier, the proposed approach performs remarkably better than the traditional nonlinear SPM, achieving state-of-the-art performance on several benchmarks. Compared with the sparse coding strategy [22], the objective function used by LLC has an analytical solution. In addition, the paper proposes a fast approximated LLC method by first performing a K-nearest-neighbor search and then solving a constrained least square fitting problem, bearing computational complexity of O(M + K^2). Hence even with very large codebooks, our system can still process multiple frames per second. This efficiency significantly adds to the practical values of LLC for real applications.", "Despite the importance of image representations such as histograms of oriented gradients and deep Convolutional Neural Networks (CNN), our theoretical understanding of them remains limited. Aiming at filling this gap, we investigate three key mathematical properties of representations: equivariance, invariance, and equivalence. Equivariance studies how transformations of the input image are encoded by the representation, invariance being a special case where a transformation has no effect. 
Equivalence studies whether two representations, for example two different parametrisations of a CNN, capture the same visual information or not. A number of methods to establish these properties empirically are proposed, including introducing transformation and stitching layers in CNNs. These methods are then applied to popular representations to reveal insightful aspects of their structure, including clarifying at which layers in a CNN certain geometric invariances are achieved. While the focus of the paper is theoretical, direct applications to structured-output regression are demonstrated too.", "The recent availability of geo-tagged images and rich geospatial data has inspired a number of algorithms for image based geolocalization. Most approaches predict the location of a query image by matching to ground-level images with known locations (e.g., street-view data). However, most of the Earth does not have ground-level reference photos available. Fortunately, more complete coverage is provided by oblique aerial or “bird's eye” imagery. In this work, we localize a ground-level query image by matching it to a reference database of aerial imagery. We use publicly available data to build a dataset of 78K aligned crossview image pairs. The primary challenge for this task is that traditional computer vision approaches cannot handle the wide baseline and appearance variation of these cross-view pairs. We use our dataset to learn a feature representation in which matching views are near one another and mismatched views are far apart. Our proposed approach, Where-CNN, is inspired by deep learning success in face verification and achieves significant improvements over traditional hand-crafted features and existing deep features learned from other large-scale databases. 
We show the effectiveness of Where-CNN in finding matches between street view and aerial view imagery and demonstrate the ability of our learned features to generalize to novel locations.", "Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark [18]. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we explore both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. Used in a diagnostic role, these visualizations allow us to find model architectures that outperform on the ImageNet classification benchmark. We also perform an ablation study to discover the performance contribution from different model layers. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets.", "An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales. The keys are used as input to a nearest neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low residual least squares solution for the unknown model parameters. 
Experimental results show that robust object recognition can be achieved in cluttered partially occluded images with a computation time of under 2 seconds.", "We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.", "In this paper, we propose a discriminative video representation for event detection over a large scale video dataset when only limited hardware resources are available. The focus of this paper is to effectively leverage deep Convolutional Neural Networks (CNNs) to advance event detection, where only frame level static descriptors can be extracted by the existing CNN toolkits. This paper makes two contributions to the inference of CNN video representation. First, while average pooling and max pooling have long been the standard approaches to aggregating frame level static features, we show that performance can be significantly improved by taking advantage of an appropriate encoding method. Second, we propose using a set of latent concept descriptors as the frame descriptor, which enriches visual information while keeping it computationally affordable. 
The integration of the two contributions results in a new state-of-the-art performance in event detection over the largest video datasets. Compared to improved Dense Trajectories, which has been recognized as the best video representation for event detection, our new representation improves the Mean Average Precision (mAP) from 27.6 to 36.8 for the TRECVID MEDTest 14 dataset and from 34.0 to 44.6 for the TRECVID MEDTest 13 dataset.", "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.", "Image representations, from SIFT and Bag of Visual Words to Convolutional Neural Networks (CNNs), are a crucial component of almost any image understanding system. Nevertheless, our understanding of them remains limited. In this paper we conduct a direct analysis of the visual information contained in representations by asking the following question: given an encoding of an image, to which extent is it possible to reconstruct the image itself? To answer this question we contribute a general framework to invert representations. 
We show that this method can invert representations such as HOG more accurately than recent alternatives while being applicable to CNNs too. We then use this technique to study the inverse of recent state-of-the-art CNN image representations for the first time. Among our findings, we show that several layers in CNNs retain photographically accurate information about the image, with different degrees of geometric and photometric invariance.", "Hashing is a popular approximate nearest neighbor search approach for large-scale image retrieval. Supervised hashing, which incorporates similarity dissimilarity information on entity pairs to improve the quality of hashing function learning, has recently received increasing attention. However, in the existing supervised hashing methods for images, an input image is usually encoded by a vector of handcrafted visual features. Such hand-crafted feature vectors do not necessarily preserve the accurate semantic similarities of images pairs, which may often degrade the performance of hashing function learning. In this paper, we propose a supervised hashing method for image retrieval, in which we automatically learn a good image representation tailored to hashing as well as a set of hash functions. The proposed method has two stages. In the first stage, given the pairwise similarity matrix S over training images, we propose a scalable coordinate descent method to decompose S into a product of HHT where H is a matrix with each of its rows being the approximate hash code associated to a training image. In the second stage, we propose to simultaneously learn a good feature representation for the input images as well as a set of hash functions, via a deep convolutional network tailored to the learned hash codes in H and optionally the discrete class labels of the images. 
Extensive empirical evaluations on three benchmark datasets with different kinds of images show that the proposed method has superior performance gains over several state-of-the-art supervised and unsupervised hashing methods." ] }
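The observation in the related-work paragraph above — that middle-layer outputs of a classification network can double as image representations — boils down to reading off a hidden activation instead of the classifier output. A toy sketch (the layer sizes and random weights are stand-ins for a trained network such as that of Krizhevsky et al.):

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy two-layer classifier; in practice the weights come from training.
W1 = rng.normal(size=(8, 4))   # input dim 8 -> hidden dim 4
W2 = rng.normal(size=(4, 3))   # hidden dim 4 -> 3 class logits

def forward(x):
    hidden = np.maximum(0.0, x @ W1)   # ReLU hidden layer
    logits = hidden @ W2               # classification head
    return hidden, logits

x = rng.normal(size=8)               # stand-in for an image's input features
representation, logits = forward(x)  # keep the hidden layer as the feature
```

Discarding `logits` and keeping `representation` is exactly the trick: the network was trained for classification, yet the intermediate activation is reused as a feature vector for other tasks.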
1604.01252
2342027869
In many image-related tasks, learning expressive and discriminative representations of images is essential, and deep learning has been studied for automating the learning of such representations. Some user-centric tasks, such as image recommendation, call for effective representations of not only images but also the preferences and intents of users over images. Such representations are termed hybrid and are addressed via a deep learning approach in this paper. We design a dual-net deep network, in which the two sub-networks map input images and user preferences into the same latent semantic space, and the distances between images and users in that space are then computed to make decisions. We further propose a comparative deep learning (CDL) method to train the deep network, using a pair of images compared against one user to learn the pattern of their relative distances. CDL embraces much more training data than naive deep learning and thus achieves superior performance, at no cost of increased network complexity. Experimental results on real-world data sets for image recommendation show that the proposed dual-net network and CDL greatly outperform other state-of-the-art image recommendation solutions.
Personalized recommendation for structured data such as books, movies, and music has been studied for a long time @cite_5. Typical technologies include content-based filtering, collaborative filtering, and hybrids of both @cite_6. However, it is difficult to directly adopt these technologies for image recommendation, due to several difficulties: images are highly unstructured and lack an immediate representation; user-image interaction data are often too sparse; and users rarely provide ratings on images, instead giving only implicit feedback. Nevertheless, mature technologies from recommender systems are still inspiring; for example, matrix factorization @cite_2 can be viewed as learning latent representations of users and items in the same semantic space.
{ "cite_N": [ "@cite_5", "@cite_6", "@cite_2" ], "mid": [ "2171960770", "2137728971", "2054141820" ], "abstract": [ "This paper presents an overview of the field of recommender systems and describes the current generation of recommendation methods that are usually classified into the following three main categories: content-based, collaborative, and hybrid recommendation approaches. This paper also describes various limitations of current recommendation methods and discusses possible extensions that can improve recommendation capabilities and make recommender systems applicable to an even broader range of applications. These extensions include, among others, an improvement of understanding of users and items, incorporation of the contextual information into the recommendation process, support for multicriteria ratings, and a provision of more flexible and less intrusive types of recommendations.", "We discuss learning a profile of user interests for recommending information sources such as Web pages or news articles. We describe the types of information available to determine whether to recommend a particular page to a particular user. This information includes the content of the page, the ratings of the user on other pages and the contents of these pages, the ratings given to that page by other users and the ratings of these other users on other pages and demographic information about users. We describe how each type of information may be used individually and then discuss an approach to combining recommendations from multiple sources. We illustrate each approach and the combined approach in the context of recommending restaurants.", "As the Netflix Prize competition has demonstrated, matrix factorization models are superior to classic nearest neighbor techniques for producing product recommendations, allowing the incorporation of additional information such as implicit feedback, temporal effects, and confidence levels." ] }
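The remark above that matrix factorization "learns latent representations of users and items in the same semantic space" reduces, at prediction time, to an inner product of two learned vectors. A minimal sketch with invented sizes, a deterministic toy initialization, and a single observed rating (a real implementation would initialize randomly and sweep over all observed ratings):

```python
import numpy as np

n_users, n_items, k = 4, 5, 2

# One k-dimensional latent vector per user and per item.
U = np.full((n_users, k), 0.1)
V = np.full((n_items, k), 0.1)

def predict(u, i):
    # User-item affinity is the inner product of the two latent vectors.
    return U[u] @ V[i]

def sgd_step(u, i, r, lr=0.1, reg=0.01):
    # One stochastic gradient step on the squared error of rating r,
    # with a small L2 regularizer on both factors.
    err = r - predict(u, i)
    U[u] += lr * (err * V[i] - reg * U[u])
    V[i] += lr * (err * U[u] - reg * V[i])

for _ in range(200):       # fit a single rating: user 0 rated item 0 as 1.0
    sgd_step(0, 0, 1.0)
```

After fitting, `predict(0, 0)` is close to the observed 1.0, and users and items live in the same k-dimensional space — the property the related-work paragraph highlights as inspiring for image recommendation.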
1604.01252
2342027869
In many image-related tasks, learning expressive and discriminative representations of images is essential, and deep learning has been studied for automating the learning of such representations. Some user-centric tasks, such as image recommendation, call for effective representations of not only images but also the preferences and intents of users over images. Such representations are termed hybrid and are addressed via a deep learning approach in this paper. We design a dual-net deep network, in which the two sub-networks map input images and user preferences into the same latent semantic space, and the distances between images and users in that space are then computed to make decisions. We further propose a comparative deep learning (CDL) method to train the deep network, using a pair of images compared against one user to learn the pattern of their relative distances. CDL embraces much more training data than naive deep learning and thus achieves superior performance, at no cost of increased network complexity. Experimental results on real-world data sets for image recommendation show that the proposed dual-net network and CDL greatly outperform other state-of-the-art image recommendation solutions.
With the development of social networks, recent research has started to leverage social data to improve recommendation performance @cite_32 @cite_30. Most existing work on image recommendation also follows this line @cite_19 @cite_24 @cite_16 @cite_31. For example, Jing et al. @cite_16 propose a probabilistic matrix factorization framework that combines the ratings of local community users for recommending Flickr photos. Cui et al. @cite_19 propose a regularized dual-factor regression method based on matrix factorization to capture social attributes for recommendation. These methods ignore the visual information of images; instead, they focus solely on modeling users by discovering user profiles and behavior patterns. The representations of images and users thus remain isolated, due to the semantic gap and the sparsity of user-image interactions.
{ "cite_N": [ "@cite_30", "@cite_32", "@cite_24", "@cite_19", "@cite_31", "@cite_16" ], "mid": [ "1994226161", "2084527756", "2105397135", "2020600820", "1972062581", "2083602526" ], "abstract": [ "Exponential growth of information generated by online social networks demands effective recommender systems to give useful results. Traditional techniques become unqualified because they ignore social relation data; existing social recommendation approaches consider social network structure, but social contextual information has not been fully considered. It is significant and challenging to fuse social contextual factors which are derived from users’ motivation of social behaviors into social recommendation. In this paper, we investigate the social recommendation problem on the basis of psychology and sociology studies, which exhibit two important factors: individual preference and interpersonal influence. We first present the particular importance of these two factors in online behavior prediction. Then we propose a novel probabilistic matrix factorization method to fuse them in latent space. We conduct experiments on both Facebook style bidirectional and Twitter style unidirectional social network datasets. The empirical results and analysis on these two large datasets demonstrate that our method significantly outperforms the existing approaches.", "Collaborative filtering is the most popular approach to build recommender systems and has been successfully employed in many applications. However, it cannot make recommendations for so-called cold start users that have rated only a very small number of items. In addition, these methods do not know how confident they are in their recommendations. Trust-based recommendation methods assume the additional knowledge of a trust network among users and can better deal with cold start users, since users only need to be simply connected to the trust network. 
On the other hand, the sparsity of the user item ratings forces the trust-based approach to consider ratings of indirect neighbors that are only weakly trusted, which may decrease its precision. In order to find a good trade-off, we propose a random walk model combining the trust-based and the collaborative filtering approach for recommendation. The random walk model allows us to define and to measure the confidence of a recommendation. We performed an evaluation on the Epinions dataset and compared our model with existing trust-based and collaborative filtering methods.", "In this paper, we have developed a novel framework called JustClick to enable personalized image recommendation via exploratory search from large-scale collections of Flickr images. First, a topic network is automatically generated to summarize large-scale collections of Flickr images at a semantic level. Hyperbolic visualization is further used to enable interactive navigation and exploration of the topic network, so that users can gain insights of large-scale image collections at the first glance, build up their mental query models interactively and specify their queries (i.e., image needs) more precisely by selecting the image topics on the topic network directly. Thus, our personalized query recommendation framework can effectively address both the problem of query formulation and the problem of vocabulary discrepancy and null returns. Second, a small set of most representative images are recommended for the given image topic according to their representativeness scores. Kernel principal component analysis and hyperbolic visualization are seamlessly integrated to organize and layout the recommended images (i.e., most representative images) according to their nonlinear visual similarity contexts, so that users can assess the relevance between the recommended images and their real query intentions interactively. 
An interactive interface is implemented to allow users to express their time-varying query intentions precisely and to direct our JustClick system to more relevant images according to their personal preferences. Our experiments on large-scale collections of Flickr images show very positive results.", "Although video recommender systems have become the predominant way for people to obtain video information, their performances are far from satisfactory in that (1) the recommended videos are often mismatched with the users' interests and (2) the recommendation results are, in most cases, hardly understandable for users and therefore cannot persuade them to engage. In this paper, we attempt to address the above problems in data representation level, and aim to learn a common attributed representation for users and videos in social media with good interpretability, stability and an appropriate level of granularity. The basic idea is to represent videos with users' social attributes, and represent users with content attributes of videos, such that both users and videos can be represented in a common space concatenated by social attributes and content attributes. The video recommendation problem can then be converted into a similarity matching problem in the common space. However, it is still challenging to balance the roles of social attributes and content attributes, learn such a common representation in sparse user-video interactions and deal with the cold-start problem. In this paper, we propose a REgularized Dual-fActor Regression (REDAR) method based on matrix factorization. In this method, social attributes and content attributes are flexibly combined, and social and content information are effectively exploited to alleviate the sparsity problem. An incremental version of REDAR is designed to solve the cold-start problem. 
We extensively evaluate the proposed method for video recommendation application in real social network dataset, and the results show that, in most cases, the proposed method can achieve a relative improvement of more than 20 compared to state-of-the-art baseline methods.", "We introduce and investigate a novel problem of image recommendation for web search engine users. Modern web search engines have become a critical assistant for people's daily life. Through interacting with web search engines, users exhibit personalized information needs in various aspects. While this information is critical to improve user experience, it is mostly used only in the web search domain. In this paper, we propose to leverage web search engine users' behavior data to perform image recommendation. To this end, we have developed a two-stage method to label users' preferences for images through crowdsourcing techniques. The two-stage annotation consists of 1) inferring a user's general interests and 2) estimating if this user will be interested in an image. In addition, we implement a baseline algorithm to demonstrate the promise of the proposed cross-domain recommendation framework.", "Photo recommendation in photo-sharing social networks like Flickr is an important problem. Collaborative filtering is very popular, which assumes each item has the same weight for recommendation. In practice some items are representatives for a class of items and therefore are more important for recommendation. In this paper, we model the importance for items by examining sentiment from the general public towards items. Specifically we propose a model using the temporal dynamic user 'favor' information to infer photo importance on Flickr. It is further combined with local community user ratings to improve the Probabilistic Matrix Factorization (PMF) framework for photo recommendation. Experiment results show the effectiveness of the proposed approach." ] }
1604.01252
2342027869
In many image-related tasks, learning expressive and discriminative representations of images is essential, and deep learning has been studied for automating the learning of such representations. Some user-centric tasks, such as image recommendations, call for effective representations of not only images but also preferences and intents of users over images. Such representations are termed hybrid and addressed via a deep learning approach in this paper. We design a dual-net deep network, in which the two sub-networks map input images and preferences of users into a same latent semantic space, and then the distances between images and users in the latent space are calculated to make decisions. We further propose a comparative deep learning (CDL) method to train the deep network, using a pair of images compared against one user to learn the pattern of their relative distances. The CDL embraces much more training data than naive deep learning, and thus achieves superior performance than the latter, with no cost of increasing network complexity. Experimental results with real-world data sets for image recommendations have shown the proposed dual-net network and CDL greatly outperform other state-of-the-art image recommendation solutions.
Only a few recent works have focused on the joint modeling of users and images for making recommendations @cite_8 @cite_17 @cite_1 . Sang et al. @cite_1 propose a topic-sensitive model that uses user preferences and user-uploaded images to study users' influence in social networks. Liu et al. @cite_17 propose to recommend images by a voting strategy based on learnt social-embedded image representations. To date, existing methods typically process user information and image content separately and then simply combine them; a fully integrated solution remains to be investigated.
{ "cite_N": [ "@cite_1", "@cite_17", "@cite_8" ], "mid": [ "2151751257", "2018773948", "" ], "abstract": [ "Social media is becoming popular these days, where user necessarily interacts with each other to form social networks. Influence network, as one special case of social network, has been recognized as significantly impacting social activities and user decisions. We emphasize in this paper that the inter-user influence is essentially topic-sensitive, as for different tasks users tend to trust different influencers and be influenced most by them. While existing research focuses on global influence modeling and applies to text-based networks, this work investigates the problem of topic-sensitive influence modeling in the multimedia domain. We propose a multi-modal probabilistic model, considering both users' textual annotation and uploaded visual image. This model is capable of simultaneously extracting user topic distributions and topic-sensitive influence strengths. By identifying the topic-sensitive influencer, we are able to conduct applications like collective search and collaborative recommendation. A risk minimization-based general framework for personalized image search is further presented, where the image search task is transferred to measure the distance of image and personalized query language models. The framework considers the noisy tag issue and enables easy incorporation of social influence. We have conducted experiments on a large-scale Flickr dataset. Qualitative as well as quantitative evaluation results have validated the effectiveness of the topic-sensitive influencer mining model, and demonstrated the advantage of incorporating topic-sensitive influence in personalized image search and topic-based image recommendation.", "Image distance (similarity) is a fundamental and important problem in image processing. However, traditional visual features based image distance metrics usually fail to capture human cognition. 
This paper presents a novel Social embedding Image Distance Learning (SIDL) approach to embed the similarity of collective social and behavioral information into visual space. The social similarity is estimated according to multiple social factors. Then a metric learning method is especially designed to learn the distance of visual features from the estimated social similarity. In this manner, we can evaluate the cognitive image distance based on the visual content of images. Comprehensive experiments are designed to investigate the effectiveness of SIDL, as well as the performance in the image recommendation and reranking tasks. The experimental results show that the proposed approach makes a marked improvement compared to the state-of-the-art image distance metrics. An interesting observation is given to show that the learned image distance can better reflect human cognition.", "" ] }
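The comparative deep learning (CDL) idea in the record above, training on a pair of images compared against one user via their relative distances, can be illustrated with a toy hinge loss on latent vectors. The vectors, the margin value, and the exact loss form here are illustrative assumptions for the sketch, not the paper's actual formulation.

```python
def dist(a, b):
    """Euclidean distance between two latent vectors (plain lists)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def comparative_loss(user_vec, img_pos, img_neg, margin=1.0):
    """Hinge loss on relative distances: the image the user prefers
    (img_pos) should lie closer to the user in the latent space than
    the other image (img_neg), by at least `margin`."""
    d_pos = dist(user_vec, img_pos)
    d_neg = dist(user_vec, img_neg)
    return max(0.0, margin + d_pos - d_neg)

# A preferred image nearby and a rejected image far away give zero loss.
user = [0.0, 0.0, 0.0, 0.0]
near = [0.1, 0.0, 0.0, 0.0]
far = [3.0, 0.0, 0.0, 0.0]
print(comparative_loss(user, near, far))  # 0.0
```

Training the dual-net would then minimize this loss over many (user, preferred image, other image) triples, pulling preferred images closer to the user in the shared latent space.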
1604.01275
2342230329
The first generation of wireless sensor nodes has constrained energy resources and computational power, which discourages applications from processing any task other than measuring and transmitting towards a central server. However, nowadays, sensor networks tend to be incorporated into the Internet of Things and the hardware evolution may change the old strategy of avoiding data computation in the sensor nodes. In this paper, we show the importance of reducing the number of transmissions in sensor networks and present the use of forecasting methods as a way of doing it. Experiments using real sensor data show that state-of-the-art forecasting methods can be successfully implemented in the sensor nodes to keep the quality of their measurements and reduce up to 30% of their transmissions, lowering the channel utilization. We conclude that there is an old paradigm that is no longer the most beneficial, which is the strategy of always transmitting a measurement when it differs by more than a threshold from the last one transmitted. Adopting more complex forecasting methods in the sensor nodes is the alternative to significantly reduce the number of transmissions without compromising the quality of their measurements, and therefore support the exponential growth of the Internet of Things.
More recently, the mechanism presented in @cite_33 computes the forecasting models in the sensor nodes before transmitting them to the . That is, the sensor nodes become completely autonomous and independent of the data computation made in the . On the other hand, the choice of the forecasting method is still restricted by the sensor nodes' memory and processing-power limitations, which also narrows the range of situations in which it can be successfully adopted.
{ "cite_N": [ "@cite_33" ], "mid": [ "2037141347" ], "abstract": [ "One of the most challenging goals of many wireless sensor network (WSN) deployments is the reduction of energy consumption to extend system lifetime. This paper considers a novel combination of techniques that address energy savings at the hardware and application levels: wake-up receivers and node-level power management, plus prediction-based data collection. Individually, each technique can achieve significant energy savings, but in combination, the results are impressive. This paper presents a case study of these techniques as applied in a road tunnel for light monitoring. Preliminary results show the potential for two orders of magnitude reduction in power consumption. This savings of 380 times allows the creation of an energetically sustainable system by considering integration with a simple, photovoltaic energy harvester." ] }
1604.01275
2342230329
The first generation of wireless sensor nodes has constrained energy resources and computational power, which discourages applications from processing any task other than measuring and transmitting towards a central server. However, nowadays, sensor networks tend to be incorporated into the Internet of Things and the hardware evolution may change the old strategy of avoiding data computation in the sensor nodes. In this paper, we show the importance of reducing the number of transmissions in sensor networks and present the use of forecasting methods as a way of doing it. Experiments using real sensor data show that state-of-the-art forecasting methods can be successfully implemented in the sensor nodes to keep the quality of their measurements and reduce up to 30% of their transmissions, lowering the channel utilization. We conclude that there is an old paradigm that is no longer the most beneficial, which is the strategy of always transmitting a measurement when it differs by more than a threshold from the last one transmitted. Adopting more complex forecasting methods in the sensor nodes is the alternative to significantly reduce the number of transmissions without compromising the quality of their measurements, and therefore support the exponential growth of the Internet of Things.
After observing the evolution of sensor networks and the adoption of more complex prediction methods in the field, we noticed the need for a broad study of prediction algorithms from a statistical point of view that could serve as a reference for future work on integrating data-prediction methods into sensor networks. Comparisons between state-of-the-art algorithms have also been reported in the M3-Competition results @cite_39 and, although they show that the forecasts perform well in several cases, they do not consider the limited data access and processing power inherent to the sensor nodes. In the next Section, we use sensor network data to compare the forecasting methods presented above from the perspective of the constrained sensor nodes.
{ "cite_N": [ "@cite_39" ], "mid": [ "2048665112" ], "abstract": [ "This paper describes the M3-Competition, the latest of the M-Competitions. It explains the reasons for conducting the competition and summarizes its results and conclusions. In addition, the paper compares such results conclusions with those of the previous two M-Competitions as well as with those of other major empirical studies. Finally, the implications of these results and conclusions are considered, their consequences for both the theory and practice of forecasting are explored and directions for future research are contemplated." ] }
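The comparison discussed in the two records above, threshold-based reporting versus in-node forecasting, can be sketched with a toy dual-prediction scheme: both node and sink run the same predictor, and a reading is transmitted only when the prediction misses it by more than a tolerance. The predictors and the data below are illustrative assumptions, not the paper's methods.

```python
def last_value(series):
    return series[-1]                       # the "old paradigm" predictor

def linear(series):
    if len(series) < 2:
        return series[-1]
    return 2 * series[-1] - series[-2]      # linear extrapolation

def transmissions_needed(readings, tolerance, forecast):
    approx = [readings[0]]                  # series as reconstructed at the sink
    count = 1                               # first reading is always sent
    for r in readings[1:]:
        pred = forecast(approx)
        if abs(pred - r) > tolerance:
            approx.append(r)                # forecast missed: transmit
            count += 1
        else:
            approx.append(pred)             # sink keeps the prediction
    return count

data = [20.0, 20.5, 21.0, 21.5, 22.0, 22.5]  # steadily rising temperature
print(transmissions_needed(data, 0.4, last_value))  # naive thresholding: 6
print(transmissions_needed(data, 0.4, linear))      # better forecaster: 2
```

On trending data the naive last-value predictor transmits every sample, while even a two-point linear extrapolation cuts the count sharply, which is the effect the paper quantifies on real sensor data.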
1604.01219
2336664606
Researchers often summarize their work in the form of posters. Posters provide a coherent and efficient way to convey core ideas from scientific papers. Generating a good scientific poster, however, is a complex and time consuming cognitive task, since such posters need to be readable, informative, and visually aesthetic. In this paper, for the first time, we study the challenging problem of learning to generate posters from scientific papers. To this end, a data-driven framework, that utilizes graphical models, is proposed. Specifically, given content to display, the key elements of a good poster, including panel layout and attributes of each panel, are learned and inferred from data. Then, given inferred layout and attributes, composition of graphical elements within each panel is synthesized. To learn and validate our model, we collect and make public a Poster-Paper dataset, which consists of scientific papers and corresponding posters with exhaustively labelled panels and attributes. Qualitative and quantitative results indicate the effectiveness of our approach.
Graphical design has been studied extensively in the computer graphics community. This involves several related, yet different, topics, including @cite_4 @cite_11 @cite_17 , @cite_0 @cite_3 , @cite_15 , @cite_2 @cite_5 , and even @cite_7 . Among them, text-based layout pays more attention to informativeness, while attractiveness also needs to be considered in poster generation. Other topics take aesthetics as the highest priority. However, some principles (such as alignment or reading order) need to be followed in poster design. In summary, poster generation needs to consider the readability, informativeness, and aesthetics of the generated posters simultaneously.
{ "cite_N": [ "@cite_11", "@cite_4", "@cite_7", "@cite_3", "@cite_0", "@cite_2", "@cite_5", "@cite_15", "@cite_17" ], "mid": [ "2014740140", "2048183501", "2166252986", "2126476084", "2030073376", "2149846167", "2130634053", "2149611572", "2136384865" ], "abstract": [ "We present a new paradigm for automated document composition based on a generative, unified probabilistic document model (PDM) that models document composition. The model formally incorporates key design variables such as content pagination, relative arrangement possibilities for page elements and possible page edits. These design choices are modeled jointly as coupled random variables (a Bayesian Network) with uncertainty modeled by their probability distributions. The overall joint probability distribution for the network assigns higher probability to good design choices. Given this model, we show that the general document layout problem can be reduced to probabilistic inference over the Bayesian network. We show that the inference task may be accomplished efficiently, scaling linearly with the content in the best case. We provide a useful specialization of the general model and use it to illustrate the advantages of soft probabilistic encodings over hard one-way constraints in specifying design aesthetics.", "Grid-based page designs are ubiquitous in commercially printed publications, such as newspapers and magazines. Yet, to date, no one has invented a good way to easily and automatically adapt such designs to arbitrarily-sized electronic displays. The difficulty of generalizing grid-based designs explains the generally inferior nature of on-screen layouts when compared to their printed counterparts, and is arguably one of the greatest remaining impediments to creating on-line reading experiences that rival those of ink on paper. In this work, we present a new approach to adaptive grid-based document layout, which attempts to bridge this gap. 
In our approach, an adaptive layout style is encoded as a set of grid-based templates that know how to adapt to a range of page sizes and other viewing conditions. These templates include various types of layout elements (such as text, figures, etc.) and define, through constraint-based relationships, just how these elements are to be laid out together as a function of both the properties of the content itself, such as a figure's size and aspect ratio, and the properties of the viewing conditions under which the content is being displayed. We describe an XML-based representation for our templates and content, which maintains a clean separation between the two. We also describe the various parts of our research prototype system: a layout engine for formatting the page; a paginator for determining a globally optimal allocation of content amongst the pages, as well as an optimal pairing of templates with content; and a graphical user interface for interactively creating adaptive templates. We also provide numerous examples demonstrating the capabilities of this prototype, including this paper, itself, which has been laid out with our system.", "Decision-theoretic optimization is becoming a popular tool in the user interface community, but creating accurate cost (or utility) functions has become a bottleneck --- in most cases the numerous parameters of these functions are chosen manually, which is a tedious and error-prone process. This paper describes ARNAULD, a general interactive tool for eliciting user preferences concerning concrete outcomes and using this feedback to automatically learn a factored cost function. We empirically evaluate our machine learning algorithm and two automatic query generation approaches and report on an informal user study.", "A measure of aesthetics that has been used in automated layout is described. The approach combines heuristic measures of attributes that degrade the aesthetic quality. 
The combination is nonlinear so that one bad aesthetic feature can harm the overall score. Example heuristic measures are described for the features of alignment regularity separation balance white-space fraction white-space free flow proportion uniformity and page security.", "This paper presents an approach for automatically creating graphic design layouts using a new energy-based model derived from design principles. The model includes several new algorithms for analyzing graphic designs, including the prediction of perceived importance, alignment detection, and hierarchical segmentation. Given the model, we use optimization to synthesize new layouts for a variety of single-page graphic designs. Model parameters are learned with Nonlinear Inverse Optimization (NIO) from a small number of example layouts. To demonstrate our approach, we show results for applications including generating design layouts in various styles, retargeting designs to new sizes, and improving existing designs. We also compare our automatic results with designs created using crowdsourcing and show that our approach performs slightly better than novice designers.", "We present an interactive furniture layout system that assists users by suggesting furniture arrangements that are based on interior design guidelines. Our system incorporates the layout guidelines as terms in a density function and generates layout suggestions by rapidly sampling the density function using a hardware-accelerated Monte Carlo sampler. Our results demonstrate that the suggestion generation functionality measurably increases the quality of furniture arrangements produced by participants with no prior training in interior design.", "We present a system that automatically synthesizes indoor scenes realistically populated by a variety of furniture objects. 
Given examples of sensibly furnished indoor scenes, our system extracts, in advance, hierarchical and spatial relationships for various furniture objects, encoding them into priors associated with ergonomic factors, such as visibility and accessibility, which are assembled into a cost function whose optimization yields realistic furniture arrangements. To deal with the prohibitively large search space, the cost function is optimized by simulated annealing using a Metropolis-Hastings state search step. We demonstrate that our system can synthesize multiple realistic furniture arrangements and, through a perceptual study, investigate whether there is a significant difference in the perceived functionality of the automatically synthesized results relative to furniture arrangements produced by human designers.", "We describe a system that uses a genetic algorithm to interactively generate personalized album pages for visual content collections on the Internet. The system has three modules: preprocessing, page creation, and page layout. We focus on the details of the genetic algorithm used in the page-layout task.", "We review the literature on automatic document formatting with an emphasis on recent work in the field. One common way to frame document formatting is as a constrained optimization problem where decision variables encode element placement, constraints enforce required geometric relationships, and the objective function measures layout quality. We present existing research using this framework, describing the kind of optimization problem being solved and the basic optimization techniques used to solve it. Our review focuses on the formatting of primarily textual documents, including both micro- and macro-typographic concerns. We also cover techniques for automatic table layout. Related problems such as widget and diagram layout, as well as temporal layout issues that arise in multimedia documents are outside the scope of this review." ] }
1604.01219
2336664606
Researchers often summarize their work in the form of posters. Posters provide a coherent and efficient way to convey core ideas from scientific papers. Generating a good scientific poster, however, is a complex and time consuming cognitive task, since such posters need to be readable, informative, and visually aesthetic. In this paper, for the first time, we study the challenging problem of learning to generate posters from scientific papers. To this end, a data-driven framework, that utilizes graphical models, is proposed. Specifically, given content to display, the key elements of a good poster, including panel layout and attributes of each panel, are learned and inferred from data. Then, given inferred layout and attributes, composition of graphical elements within each panel is synthesized. To learn and validate our model, we collect and make public a Poster-Paper dataset, which consists of scientific papers and corresponding posters with exhaustively labelled panels and attributes. Qualitative and quantitative results indicate the effectiveness of our approach.
Several techniques have been studied to facilitate layout generation for Western comics or manga, for example, @cite_19 @cite_13 , @cite_10 @cite_14 , and @cite_18 . For preview generation of comic episodes @cite_12 , both frame extraction and layout generation are considered. Other research areas, such as @cite_16 and @cite_8 , also draw considerable attention. However, none of these methods can be directly used to generate scientific posters, which is the focus of this paper. Our panel layout generation is inspired by the recent work on manga layout @cite_10 : we use a binary tree to represent the panel layout. By contrast, the manga layout work trains a Dirichlet distribution to sample a splitting configuration, and a different Dirichlet distribution needs to be trained for each kind of instance. Instead, we propose a recursive algorithm that searches for the best splitting configuration along a tree.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_8", "@cite_10", "@cite_19", "@cite_16", "@cite_13", "@cite_12" ], "mid": [ "2052404564", "2188210106", "", "1973733686", "1991926808", "2063166622", "2070323006", "2013491007" ], "abstract": [ "Picture subjects and text balloons are basic elements in comics, working together to propel the story forward. Japanese comics artists often leverage a carefully designed composition of subjects and balloons (generally referred to as panel elements) to provide a continuous and fluid reading experience. However, such a composition is hard to produce for people without the required experience and knowledge. In this paper, we propose an approach for novices to synthesize a composition of panel elements that can effectively guide the reader's attention to convey the story. Our primary contribution is a probabilistic graphical model that describes the relationships among the artist's guiding path, the panel elements, and the viewer attention, which can be effectively learned from a small set of existing manga pages. We show that the proposed approach can measurably improve the readability, visual appeal, and communication of the story of the resulting pages, as compared to an existing method. We also demonstrate that the proposed approach enables novice users to create higher-quality compositions with less time, compared with commercially available programs.", "We introduce in this paper a new approach that conveniently converts conversational videos into comics with manga-style layout. With our approach, the manga-style layout of a comic page is achieved in a content-driven manner, and the main components, including panels and word balloons, that constitute a visually pleasing comic page are intelligently organized . Our approach extracts key frames on speakers by using a speaker detection technique such that word balloons can be placed near the corresponding speakers. 
We qualitatively measure the information contained in a comic page. With the initial layout automatically determined, the final comic page is obtained by maximizing such a measure and optimizing the parameters relating to the optimal display of comics. An efficient Markov chain Monte Carlo sampling algorithm is designed for the optimization. Our user study demonstrates that users much prefer our manga-style comics to purely Western style comics. Extensive experiments and comparisons against previous work also verify the effectiveness of our approach.", "", "Manga layout is a core component in manga production, characterized by its unique styles. However, stylistic manga layouts are difficult for novices to produce as it requires hands-on experience and domain knowledge. In this paper, we propose an approach to automatically generate a stylistic manga layout from a set of input artworks with user-specified semantics, thus allowing less-experienced users to create high-quality manga layouts with minimal efforts. We first introduce three parametric style models that encode the unique stylistic aspects of manga layouts, including layout structure, panel importance, and panel shape. Next, we propose a two-stage approach to generate a manga layout: 1) an initial layout is created that best fits the input artworks and layout structure model, according to a generative probabilistic framework; 2) the layout and artwork geometries are jointly refined using an efficient optimization procedure, resulting in a professional-looking manga layout. Through a user study, we demonstrate that our approach enables novice users to easily and quickly produce higher-quality layouts that exhibit realistic manga styles, when compared to a commercially-available manual layout tool.", "A method for automatic e-comic scene frame extraction is proposed for displaying large scale of comic scenes onto relatively small size of display screen such as mobile devices. 
In line with the rapid development of mobile devices, reading comic in small screen mobile devices is also demanded and required. The challenge in providing e-comics for small screen is how to separate comic scene frames and display it in the right order to read. We propose Automatic E-Comic Frame Extraction (ACFE) for extraction of scene frames from a digital comic page automatically. ACFE is based on the blob extraction method using connected component labeling algorithm, together with a filter combination pre-processing and efficient method for detection of line between frames. Experimental results show that 91.483 percent of 634 pages in 5 digital comics are successfully extracted into scene frames by the proposed method.", "The existing content aware image retargeting methods are mainly suitable for natural images, and do not perform well on line drawings. Such methods tend to regard homogeneous areas as less important. For line drawings, this is not always true.", "Automatically extracting frames panels from digital comic pages is crucial for techniques that facilitate comic reading on mobile devices with limited display areas. However, automatic panel extraction for manga, i.e., Japanese comics, can be especially challenging, largely because of its complex panel layout design mixed with various visual symbols throughout the page. In this paper, we propose a robust method for automatically extracting panels from digital manga pages. Our method first extracts the panel block by closing open panels and identifying a page background mask. It then performs a recursive binary splitting to partition the panel block into a set of sub-blocks, where an optimal splitting line at each recursive level is determined adaptively.", "This research proposes a novel method to present \"thumbnails\" of episodes of digitized comics, in order to improve the efficiency of comic search. 
Comic episode thumbnails are generated based on image analysis technologies developed especially for comic images. Namely, the following procedures are developed for our system: automatic comic frame segmentation, text balloon extraction, and a linear regression based model to calculate the importance score of each extracted frame. The system then selects frames from each episode with high importance score, and aligns the selected frames to create the episode thumbnail, which is presented to the system user as a compact preview of the episode. User experiments conducted with actual Japanese comic images prove that the proposed method significantly decreases the time necessary to search for specific episodes from a large scaled comic data collection." ] }
1604.01303
2337348744
There is an obvious trend that more and more data and computation are migrating into networks nowadays. Combining mature virtualization technologies with service-centric networking, we are entering an era where countless services reside in an ISP network to provide low-latency access. Such services are often computation intensive and are dynamically created and destroyed on demand everywhere in the network to perform various tasks. Consequently, these ephemeral in-network services introduce a new type of congestion in the network which we refer to as "computation congestion". The service load needs to be effectively distributed over different nodes in order to maintain the functionality and responsiveness of the network, which calls for a new design rather than reusing the centralised scheduler designed for cloud-based services. In this paper, we study both passive and proactive control strategies; based on the proactive control we further propose a fully distributed solution which is low complexity, adaptive, and responsive to network dynamics.
Load balancing, scheduling, and resource management are classic problems in high-performance computing (HPC) clusters. The topic has gained a lot of attention recently due to the popularity of cloud computing, virtualization, and big data frameworks. Fully centralised control @cite_25 @cite_22 is a popular solution at the moment, and control theory has been shown to be an effective tool for dynamically allocating resources @cite_24 . As mentioned, there are distinctive differences between a cloud environment and an ISP edge network regarding stability, homogeneous configuration, regular topology, etc. In the cloud, most jobs execute for a long period and often access a lot of data, hence they can tolerate long scheduling delays.
{ "cite_N": [ "@cite_24", "@cite_22", "@cite_25" ], "mid": [ "2104726628", "", "2163961697" ], "abstract": [ "Resource management of virtualized servers in data centers has become a critical task, since it enables cost-effective consolidation of server applications. Resource management is an important and challenging task, especially for multitier applications with unpredictable time-varying workloads. Work in resource management using control theory has shown clear benefits of dynamically adjusting resource allocations to match fluctuating workloads. However, little work has been done toward adaptive controllers for unknown workload types. This work presents a new resource management scheme that incorporates the Kalman filter into feedback controllers to dynamically allocate CPU resources to virtual machines hosting server applications. We present a set of controllers that continuously detect and self-adapt to unforeseen workload changes. Furthermore, our most advanced controller also self-configures itself without any a priori information and with a small 4.8% performance penalty in the case of high-intensity workload changes. In addition, our controllers are enhanced to deal with multitier server applications: by using the pair-wise resource coupling between tiers, they improve server response to large workload increases as compared to controllers with no such resource-coupling mechanism. Our approaches are evaluated and their performance is illustrated on a 3-tier Rubis benchmark website deployed on a prototype Xen-virtualized cluster.", "", "We present Mesos, a platform for sharing commodity clusters between multiple diverse cluster computing frameworks, such as Hadoop and MPI. Sharing improves cluster utilization and avoids per-framework data replication. Mesos shares resources in a fine-grained manner, allowing frameworks to achieve data locality by taking turns reading data stored on each machine. 
To support the sophisticated schedulers of today's frameworks, Mesos introduces a distributed two-level scheduling mechanism called resource offers. Mesos decides how many resources to offer each framework, while frameworks decide which resources to accept and which computations to run on them. Our results show that Mesos can achieve near-optimal data locality when sharing the cluster among diverse frameworks, can scale to 50,000 (emulated) nodes, and is resilient to failures." ] }
1604.01105
2288295986
Many online social networks thrive on automatic sharing of friends' activities to a user through activity feeds, which may influence the user's next actions. However, identifying such social influence is tricky because these activities are simultaneously impacted by influence and homophily. We propose a statistical procedure that uses commonly available network and observational data about people's actions to estimate the extent of copy-influence---mimicking others' actions that appear in a feed. We assume that non-friends don't influence users; thus, comparing how a user's activity correlates with friends versus non-friends who have similar preferences can help tease out the effect of copy-influence. Experiments on datasets from multiple social networks show that estimates that don't account for homophily overestimate copy-influence by varying, often large amounts. Further, copy-influence estimates fall below 1% of total actions in all networks: most people, and almost all actions, are not affected by the feed. Our results question common perceptions around the extent of copy-influence in online social networks and suggest improvements to diffusion and recommendation models.
Mimicry of others' actions has been demonstrated in controlled experiments. This can happen at a population level, as with the demonstration that making popularity visible causes different songs to become popular compared to the case when no such information is shown @cite_9 . It may be even stronger at the individual level; Sharma and Cosley showed that trusted friends' opinions have more influence on people's willingness to try unknown items than item popularity @cite_46 . Similar results have been shown in experiments on online social networks: a user is more likely to share a link on Facebook if it appears in her social feed @cite_26 and to click on an ad if it is endorsed by her friends @cite_28 .
{ "cite_N": [ "@cite_46", "@cite_28", "@cite_9", "@cite_26" ], "mid": [ "1750205245", "2161116977", "2147453867", "2027135291" ], "abstract": [ "Recommender systems associated with social networks often use social explanations (e.g. \"X, Y and 2 friends like this\") to support the recommendations. We present a study of the effects of these social explanations in a music recommendation context. We start with an experiment with 237 users, in which we show explanations with varying levels of social information and analyze their effect on users' decisions. We distinguish between two key decisions: the likelihood of checking out the recommended artist, and the actual rating of the artist based on listening to several songs. We find that while the explanations do have some influence on the likelihood, there is little correlation between the likelihood and actual (listening) rating for the same artist. Based on these insights, we present a generative probabilistic model that explains the interplay between explanations and background information on music preferences, and how that leads to a final likelihood rating for an artist. Acknowledging the impact of explanations, we discuss a general recommendation framework that models external informational elements in the recommendation interface, in addition to inherent preferences of users.", "Social advertising uses information about consumers' peers, including peer affiliations with a brand, product, organization, etc., to target ads and contextualize their display. This approach can increase ad efficacy for two main reasons: peers' affiliations reflect unobserved consumer characteristics, which are correlated along the social network; and the inclusion of social cues (i.e., peers' association with a brand) alongside ads affect responses via social influence processes. 
For these reasons, responses may be increased when multiple social signals are presented with ads, and when ads are affiliated with peers who are strong, rather than weak, ties. We conduct two very large field experiments that identify the effect of social cues on consumer responses to ads, measured in terms of ad clicks and the formation of connections with the advertised entity. In the first experiment, we randomize the number of social cues present in word-of-mouth advertising, and measure how responses increase as a function of the number of cues. The second experiment examines the effect of augmenting traditional ad units with a minimal social cue (i.e., displaying a peer's affiliation below an ad in light grey text). On average, this cue causes significant increases in ad performance. Using a measurement of tie strength based on the total amount of communication between subjects and their peers, we show that these influence effects are greatest for strong ties. Our work has implications for ad optimization, user interface design, and central questions in social science research.", "Hit songs, books, and movies are many times more successful than average, suggesting that “the best” alternatives are qualitatively different from “the rest”; yet experts routinely fail to predict which products will succeed. We investigated this paradox experimentally, by creating an artificial “music market” in which 14,341 participants downloaded previously unknown songs either with or without knowledge of previous participants9 choices. Increasing the strength of social influence increased both inequality and unpredictability of success. Success was also only partly determined by quality: The best songs rarely did poorly, and the worst rarely did well, but any other result was possible.", "Online social networking technologies enable individuals to simultaneously share information with any number of peers. 
Quantifying the causal effect of these mediums on the dissemination of information requires not only identification of who influences whom, but also of whether individuals would still propagate information in the absence of social signals about that information. We examine the role of social networks in online information diffusion with a large-scale field experiment that randomizes exposure to signals about friends' information sharing among 253 million subjects in situ. Those who are exposed are significantly more likely to spread information, and do so sooner than those who are not exposed. We further examine the relative role of strong and weak ties in information propagation. We show that, although stronger ties are individually more influential, it is the more abundant weak ties who are responsible for the propagation of novel information. This suggests that weak ties may play a more dominant role in the dissemination of information online than currently believed." ] }
1604.01105
2288295986
Many online social networks thrive on automatic sharing of friends' activities to a user through activity feeds, which may influence the user's next actions. However, identifying such social influence is tricky because these activities are simultaneously impacted by influence and homophily. We propose a statistical procedure that uses commonly available network and observational data about people's actions to estimate the extent of copy-influence---mimicking others' actions that appear in a feed. We assume that non-friends don't influence users; thus, comparing how a user's activity correlates with friends versus non-friends who have similar preferences can help tease out the effect of copy-influence. Experiments on datasets from multiple social networks show that estimates that don't account for homophily overestimate copy-influence by varying, often large amounts. Further, copy-influence estimates fall below 1% of total actions in all networks: most people, and almost all actions, are not affected by the feed. Our results question common perceptions around the extent of copy-influence in online social networks and suggest improvements to diffusion and recommendation models.
Outside of controlled experiments around information consumption in real social networks---which may be infeasible, costly, or potentially unethical---identification of influence is not straightforward. Naive measures such as simply counting the number of common actions between friends within a given time period likely overestimate influence @cite_25 @cite_40 @cite_24 , as any observed data is simultaneously affected by both influence and homophily. In addition, people may be exposed to the same item through forces outside of the social network @cite_6 . For example, two friends may like the same item after watching an advertisement for it. Such external exposure also creates confounds for estimating influence. In line with Shalizi and Thomas' argument for the need for assumptions about mechanism @cite_35 , work on estimating influence from observational data generally makes assumptions about the nature of influence and/or homophily, then uses statistical or computational procedures on those data to simulate contexts where those assumptions don't hold.
{ "cite_N": [ "@cite_35", "@cite_6", "@cite_24", "@cite_40", "@cite_25" ], "mid": [ "2149084727", "2128891547", "2046989790", "1573308170", "1967579779" ], "abstract": [ "The authors consider processes on social networks that can potentially involve three factors: homophily, or the formation of social ties due to matching individual traits; social contagion, also known as social influence; and the causal effect of an individual’s covariates on his or her behavior or other measurable responses. The authors show that generically, all of these are confounded with each other. Distinguishing them from one another requires strong assumptions on the parametrization of the social process or on the adequacy of the covariates used (or both). In particular the authors demonstrate, with simple examples, that asymmetries in regression coefficients cannot identify causal effects and that very simple models of imitation (a form of social contagion) can produce substantial correlations between an individual’s enduring traits and his or her choices, even when there is no intrinsic affinity between them. The authors also suggest some possible constructive responses to these results.", "A fundamental open question in the analysis of social networks is to understand the interplay between similarity and social ties. People are similar to their neighbors in a social network for two distinct reasons: first, they grow to resemble their current friends due to social influence; and second, they tend to form new links to others who are already like them, a process often termed selection by sociologists. While both factors are present in everyday social processes, they are in tension: social influence can push systems toward uniformity of behavior, while selection can lead to fragmentation. As such, it is important to understand the relative effects of these forces, and this has been a challenge due to the difficulty of isolating and quantifying them in real settings. 
We develop techniques for identifying and modeling the interactions between social influence and selection, using data from online communities where both social interaction and changes in behavior over time can be measured. We find clear feedback effects between the two factors, with rising similarity between two individuals serving, in aggregate, as an indicator of future interaction -- but with similarity then continuing to increase steadily, although at a slower rate, for long periods after initial interactions. We also consider the relative value of similarity and social influence in modeling future behavior. For instance, to predict the activities that an individual is likely to do next, is it more useful to know the current activities of their friends, or of the people most similar to them?", "Conformity is a type of social influence involving a change in opinion or behavior in order to fit in with a group. Employing several social networks as the source for our experimental data, we study how the effect of conformity plays a role in changing users' online behavior. We formally define several major types of conformity in individual, peer, and group levels. We propose Confluence model to formalize the effects of social conformity into a probabilistic model. Confluence can distinguish and quantify the effects of the different types of conformities. To scale up to large social networks, we propose a distributed learning method that can construct the Confluence model efficiently with near-linear speedup. Our experimental results on four different types of large social networks, i.e., Flickr, Gowalla, Weibo and Co-Author, verify the existence of the conformity phenomena. Leveraging the conformity information, Confluence can accurately predict actions of users. Our experiments show that Confluence significantly improves the prediction accuracy by up to 5-10 compared with several alternative methods.", "Who are the influential people in an online social network? 
The answer to this question depends not only on the structure of the network, but also on details of the dynamic processes occurring on it. We classify these processes as conservative and non-conservative. A random walk on a network is an example of a conservative dynamic process, while information spread is non-conservative. The influence models used to rank network nodes can be similarly classified, depending on the dynamic process they implicitly emulate. We claim that in order to correctly rank network nodes, the influence model has to match the details of the dynamic process. We study a real-world network on the social news aggregator Digg, which allows users to post and vote for news stories. We empirically define influence as the number of in-network votes a user's post generates. This influence measure, and the resulting ranking, arises entirely from the dynamics of voting on Digg, which represents non-conservative information flow. We then compare predictions of different influence models with this empirical estimate of influence. The results show that non-conservative models are better able to predict influential users on Digg. We find that normalized alpha-centrality metric turns out to be one of the best predictors of influence. We also present a simple algorithm for computing this metric and the associated mathematical formulation and analytical proofs.", "In this paper we investigate the attributes and relative influence of 1.6M Twitter users by tracking 74 million diffusion events that took place on the Twitter follower graph over a two month interval in 2009. Unsurprisingly, we find that the largest cascades tend to be generated by users who have been influential in the past and who have a large number of followers. We also find that URLs that were rated more interesting and or elicited more positive feelings by workers on Mechanical Turk were more likely to spread. 
In spite of these intuitive results, however, we find that predictions of which particular user or URL will generate large cascades are relatively unreliable. We conclude, therefore, that word-of-mouth diffusion can only be harnessed reliably by targeting large numbers of potential influencers, thereby capturing average effects. Finally, we consider a family of hypothetical marketing strategies, defined by the relative cost of identifying versus compensating potential \"influencers.\" We find that although under some circumstances, the most influential users are also the most cost-effective, under a wide range of plausible assumptions the most cost-effective performance can be realized using \"ordinary influencers\"---individuals who exert average or even less-than-average influence." ] }
1604.01105
2288295986
Many online social networks thrive on automatic sharing of friends' activities to a user through activity feeds, which may influence the user's next actions. However, identifying such social influence is tricky because these activities are simultaneously impacted by influence and homophily. We propose a statistical procedure that uses commonly available network and observational data about people's actions to estimate the extent of copy-influence---mimicking others' actions that appear in a feed. We assume that non-friends don't influence users; thus, comparing how a user's activity correlates with friends versus non-friends who have similar preferences can help tease out the effect of copy-influence. Experiments on datasets from multiple social networks show that estimates that don't account for homophily overestimate copy-influence by varying, often large amounts. Further, copy-influence estimates fall below 1% of total actions in all networks: most people, and almost all actions, are not affected by the feed. Our results question common perceptions around the extent of copy-influence in online social networks and suggest improvements to diffusion and recommendation models.
One study presented an edge-reversal test: if person A has an edge to B but not vice versa, then comparing B's influence on A with that of A on B gives a measure of influence due to the directed edge @cite_34 . This is based on the intuition that influence should flow only one way on directed edges; it also has the advantage of strongly addressing the effect of homophily. This test was used to examine whether becoming obese is contagious, and it found that influence is significantly higher in the direction of the directed edge than in the opposite direction. In online contexts like Twitter with asymmetric information flows, this could fit well, but in networks with undirected edges it is not so useful.
{ "cite_N": [ "@cite_34" ], "mid": [ "2118745042" ], "abstract": [ "Background The prevalence of obesity has increased substantially over the past 30 years. We performed a quantitative analysis of the nature and extent of the person-to-person spread of obesity as a possible factor contributing to the obesity epidemic. Methods We evaluated a densely interconnected social network of 12,067 people assessed repeatedly from 1971 to 2003 as part of the Framingham Heart Study. The body-mass index was available for all subjects. We used longitudinal statistical models to examine whether weight gain in one person was associated with weight gain in his or her friends, siblings, spouse, and neighbors. Results Discernible clusters of obese persons (body-mass index [the weight in kilograms divided by the square of the height in meters], ≥30) were present in the network at all time points, and the clusters extended to three degrees of separation. These clusters did not appear to be solely attributable to the selective formation of social ties among obese persons. A person’s chances of becoming obese increased by 57% (95% confidence interval [CI], 6 to 123) if he or she had a friend who became obese in a given interval. Among pairs of adult siblings, if one sibling became obese, the chance that the other would become obese increased by 40% (95% CI, 21 to 60). If one spouse became obese, the likelihood that the other spouse would become obese increased by 37% (95% CI, 7 to 73). These effects were not seen among neighbors in the immediate geographic location. Persons of the same sex had relatively greater influence on each other than those of the opposite sex. The spread of smoking cessation did not account for the spread of obesity in the network. Conclusions Network phenomena appear to be relevant to the biologic and behavioral trait of obesity, and obesity appears to spread through social ties. These findings have implications for clinical and public health interventions." ] }
1604.01105
2288295986
Many online social networks thrive on automatic sharing of friends' activities to a user through activity feeds, which may influence the user's next actions. However, identifying such social influence is tricky because these activities are simultaneously impacted by influence and homophily. We propose a statistical procedure that uses commonly available network and observational data about people's actions to estimate the extent of copy-influence---mimicking others' actions that appear in a feed. We assume that non-friends don't influence users; thus, comparing how a user's activity correlates with friends versus non-friends who have similar preferences can help tease out the effect of copy-influence. Experiments on datasets from multiple social networks show that estimates that don't account for homophily overestimate copy-influence by varying, often large amounts. Further, copy-influence estimates fall below 1% of total actions in all networks: most people, and almost all actions, are not affected by the feed. Our results question common perceptions around the extent of copy-influence in online social networks and suggest improvements to diffusion and recommendation models.
Another approach randomizes the order of people's actions to remove causal links due to temporality between A's and B's actions @cite_44 . In the absence of influence, the expected probability of a user acting upon an item given that some number of their friends have already done so---called @math -exposure @cite_41 @cite_6 ---should be the same in the observed data and the time-shuffled data. If it is statistically different, influence is in play. The method was used on a Flickr dataset and showed no significant difference between the two worlds. Still, the authors do not rule out influence, giving examples from their dataset that demonstrate influence effects, and note that their method is unable to make individual-level influence estimates.
{ "cite_N": [ "@cite_44", "@cite_41", "@cite_6" ], "mid": [ "2109742433", "2145446394", "2128891547" ], "abstract": [ "In many online social systems, social ties between users play an important role in dictating their behavior. One of the ways this can happen is through social influence, the phenomenon that the actions of a user can induce his her friends to behave in a similar way. In systems where social influence exists, ideas, modes of behavior, or new technologies can diffuse through the network like an epidemic. Therefore, identifying and understanding social influence is of tremendous interest from both analysis and design points of view. This is a difficult task in general, since there are factors such as homophily or unobserved confounding variables that can induce statistical correlation between the actions of friends in a social network. Distinguishing influence from these is essentially the problem of distinguishing correlation from causality, a notoriously hard statistical problem. In this paper we study this problem systematically. We define fairly general models that replicate the aforementioned sources of social correlation. We then propose two simple tests that can identify influence as a source of social correlation when the time series of user actions is available. We give a theoretical justification of one of the tests by proving that with high probability it succeeds in ruling out influence in a rather general model of social correlation. We also simulate our tests on a number of examples designed by randomly generating actions of nodes on a real social network (from Flickr) according to one of several models. Simulation results confirm that our test performs well on these data. 
Finally, we apply them to real tagging data on Flickr, exhibiting that while there is significant social correlation in tagging behavior on this system, this correlation cannot be attributed to social influence.", "There is a widespread intuitive sense that different kinds of information spread differently on-line, but it has been difficult to evaluate this question quantitatively since it requires a setting where many different kinds of information spread in a shared environment. Here we study this issue on Twitter, analyzing the ways in which tokens known as hashtags spread on a network defined by the interactions among Twitter users. We find significant variation in the ways that widely-used hashtags on different topics spread. Our results show that this variation is not attributable simply to differences in \"stickiness,\" the probability of adoption based on one or more exposures, but also to a quantity that could be viewed as a kind of \"persistence\" - the relative extent to which repeated exposures to a hashtag continue to have significant marginal effects. We find that hashtags on politically controversial topics are particularly persistent, with repeated exposures continuing to have unusually large marginal effects on adoption; this provides, to our knowledge, the first large-scale validation of the \"complex contagion\" principle from sociology, which posits that repeated exposures to an idea are particularly crucial when the idea is in some way controversial or contentious. Among other findings, we discover that hashtags representing the natural analogues of Twitter idioms and neologisms are particularly non-persistent, with the effect of multiple exposures decaying rapidly relative to the first exposure. We also study the subgraph structure of the initial adopters for different widely-adopted hashtags, again finding structural differences across topics. 
We develop simulation-based and generative models to analyze how the adoption dynamics interact with the network structure of the early adopters on which a hashtag spreads.", "A fundamental open question in the analysis of social networks is to understand the interplay between similarity and social ties. People are similar to their neighbors in a social network for two distinct reasons: first, they grow to resemble their current friends due to social influence; and second, they tend to form new links to others who are already like them, a process often termed selection by sociologists. While both factors are present in everyday social processes, they are in tension: social influence can push systems toward uniformity of behavior, while selection can lead to fragmentation. As such, it is important to understand the relative effects of these forces, and this has been a challenge due to the difficulty of isolating and quantifying them in real settings. We develop techniques for identifying and modeling the interactions between social influence and selection, using data from online communities where both social interaction and changes in behavior over time can be measured. We find clear feedback effects between the two factors, with rising similarity between two individuals serving, in aggregate, as an indicator of future interaction -- but with similarity then continuing to increase steadily, although at a slower rate, for long periods after initial interactions. We also consider the relative value of similarity and social influence in modeling future behavior. For instance, to predict the activities that an individual is likely to do next, is it more useful to know the current activities of their friends, or of the people most similar to them?" ] }
1604.01105
2288295986
Many online social networks thrive on automatic sharing of friends' activities to a user through activity feeds, which may influence the user's next actions. However, identifying such social influence is tricky because these activities are simultaneously impacted by influence and homophily. We propose a statistical procedure that uses commonly available network and observational data about people's actions to estimate the extent of copy-influence---mimicking others' actions that appear in a feed. We assume that non-friends don't influence users; thus, comparing how a user's activity correlates with friends versus non-friends who have similar preferences can help tease out the effect of copy-influence. Experiments on datasets from multiple social networks show that estimates that don't account for homophily overestimate copy-influence by varying, often large amounts. Further, copy-influence estimates fall below 1% of total actions in all networks: most people, and almost all actions, are not affected by the feed. Our results question common perceptions around the extent of copy-influence in online social networks and suggest improvements to diffusion and recommendation models.
Instead of removing influence, the other main strategy is to control for homophily. For instance, La Fond and Neville use shuffling of social network edges to estimate influence given snapshots of people's network and activity data at two points in time @cite_38 . They first calculate the average overlap in activity between friends. To control for homophily, they subtract the overlap in friends' activity after randomizing the edges between people. To control for external exposure, they also subtract out the overlap among friends when both edges and actions are randomized. Any difference that remains, then, must be due to influence. Using data on Facebook groups joined by people at two timestamps a year apart, they found that the relative effect of influence varies: some groups exhibited a significant influence effect, while others exhibited a significant homophily effect.
{ "cite_N": [ "@cite_38" ], "mid": [ "2170413097" ], "abstract": [ "Relational autocorrelation is ubiquitous in relational domains. This observed correlation between class labels of linked instances in a network (e.g., two friends are more likely to share political beliefs than two randomly selected people) can be due to the effects of two different social processes. If social influence effects are present, instances are likely to change their attributes to conform to their neighbor values. If homophily effects are present, instances are likely to link to other individuals with similar attribute values. Both these effects will result in autocorrelated attribute values. When analyzing static relational networks it is impossible to determine how much of the observed correlation is due each of these factors. However, the recent surge of interest in social networks has increased the availability of dynamic network data. In this paper, we present a randomization technique for temporal network data where the attributes and links change over time. Given data from two time steps, we measure the gain in correlation and assess whether a significant portion of this gain is due to influence and or homophily. We demonstrate the efficacy of our method on semi-synthetic data and then apply the method to a real-world social networks dataset, showing the impact of both influence and homophily effects." ] }
1604.01105
2288295986
Many online social networks thrive on automatic sharing of friends' activities to a user through activity feeds, which may influence the user's next actions. However, identifying such social influence is tricky because these activities are simultaneously impacted by influence and homophily. We propose a statistical procedure that uses commonly available network and observational data about people's actions to estimate the extent of copy-influence---mimicking others' actions that appear in a feed. We assume that non-friends don't influence users; thus, comparing how a user's activity correlates with friends versus non-friends who have similar preferences can help tease out the effect of copy-influence. Experiments on datasets from multiple social networks show that estimates that don't account for homophily overestimate copy-influence by varying, often large amounts. Further, copy-influence estimates fall below 1 of total actions in all networks: most people, and almost all actions, are not affected by the feed. Our results question common perceptions around the extent of copy-influence in online social networks and suggest improvements to diffusion and recommendation models.
Another way to control for homophily is to directly account for its indicators. If we are able to observe attributes such as demographics or other characteristics that affect people's preferences, then we can identify influence by controlling for these attributes. Based on this intuition, Aral et al. estimated influence effects by comparing adoption rates for pairs of similar users in which only one had a friend who had adopted the item @cite_33 . Using data from Yahoo on adoption of a web service and 46 attributes based on personal and network characteristics, they found that most adoption was independent of influence. A fundamental problem, however, is that their method depends heavily on the careful choice and availability of attributes that predict similarity.
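A toy version of such an attribute-matched comparison might look like the sketch below. The field names and the exact-match pairing are illustrative assumptions; the actual method uses dynamic matched sampling over dozens of attributes rather than exact matching.

```python
from collections import defaultdict

def matched_adoption_rates(users):
    """users: list of dicts with 'attrs' (hashable tuple of covariates),
    'exposed' (had an adopting friend) and 'adopted' (bool).
    Pairs each exposed user with an unexposed user sharing the same
    attributes, then compares adoption rates within the matched pairs."""
    pool = defaultdict(list)
    for u in users:
        if not u["exposed"]:
            pool[u["attrs"]].append(u)
    exposed_adopt, control_adopt, pairs = 0, 0, 0
    for u in users:
        if u["exposed"] and pool[u["attrs"]]:
            match = pool[u["attrs"]].pop()
            pairs += 1
            exposed_adopt += u["adopted"]
            control_adopt += match["adopted"]
    return exposed_adopt / pairs, control_adopt / pairs
```

Any gap between the two rates is the influence estimate; as the text notes, it is only as good as the attributes used for matching.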
{ "cite_N": [ "@cite_33" ], "mid": [ "2149910108" ], "abstract": [ "Node characteristics and behaviors are often correlated with the structure of social networks over time. While evidence of this type of assortative mixing and temporal clustering of behaviors among linked nodes is used to support claims of peer influence and social contagion in networks, homophily may also explain such evidence. Here we develop a dynamic matched sample estimation framework to distinguish influence and homophily effects in dynamic networks, and we apply this framework to a global instant messaging network of 27.4 million users, using data on the day-by-day adoption of a mobile service application and users' longitudinal behavioral, demographic, and geographic data. We find that previous methods overestimate peer influence in product adoption decisions in this network by 300–700 , and that homophily explains >50 of the perceived behavioral contagion. These findings and methods are essential to both our understanding of the mechanisms that drive contagions in networks and our knowledge of how to propagate or combat them in domains as diverse as epidemiology, marketing, development economics, and public health." ] }
1604.01354
2335954719
Recovering the radiometric properties of a scene (i.e., the reflectance, illumination, and geometry) is a long-sought ability of computer vision that can provide invaluable information for a wide range of applications. Deciphering the radiometric ingredients from the appearance of a real-world scene, as opposed to a single isolated object, is particularly challenging as it generally consists of various objects with different material compositions exhibiting complex reflectance and light interactions that are also part of the illumination. We introduce the first method for radiometric scene decomposition that handles those intricacies. We use RGB-D images to bootstrap geometry recovery and simultaneously recover the complex reflectance and natural illumination while refining the noisy initial geometry and segmenting the scene into different material regions. Most important, we handle real-world scenes consisting of multiple objects of unknown materials, which necessitates the modeling of spatially-varying complex reflectance, natural illumination, texture, interreflection and shadows. We systematically evaluate the effectiveness of our method on synthetic scenes and demonstrate its application to real-world scenes. The results show that rich radiometric information can be recovered from RGB-D images and demonstrate a new role RGB-D sensors can play for general scene understanding tasks.
A few methods have been proposed in the past that concern the recovery of radiometric information from RGB-D images. Or-El et al. fuse depth and color information from multiple RGB-D images to estimate high-resolution geometry @cite_15 , refining the geometry captured in the RGB-D image with shape-from-shading. Barron and Malik recover Lambertian reflectance and spatially-varying illumination, while also refining geometry, from a single RGB-D image @cite_6 using a large-scale energy minimization with several unique priors. Other work simultaneously estimates albedo and illumination from RGB-D images by explicitly refining surface normals @cite_14 . These methods all rely heavily on the simplistic assumption of Lambertian reflectance and do not model the light interaction among scene elements.
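The Lambertian assumption these methods share is compact: observed intensity is albedo times the clamped cosine between the surface normal and the light direction, with no view dependence and no interreflection. A minimal sketch:

```python
import numpy as np

def lambertian_shading(albedo, normals, light_dir):
    """Lambertian image formation: I = albedo * max(0, n . l).
    normals: (N, 3) unit surface normals; light_dir: (3,) light direction."""
    l = light_dir / np.linalg.norm(light_dir)
    return albedo * np.clip(normals @ l, 0.0, None)
```

A surface facing the light receives full shading; one facing away receives none, which is exactly the simplification the paper argues breaks down for real scenes with complex reflectance.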
{ "cite_N": [ "@cite_15", "@cite_14", "@cite_6" ], "mid": [ "1903480611", "", "2117751343" ], "abstract": [ "The popularity of low-cost RGB-D scanners is increasing on a daily basis. Nevertheless, existing scanners often cannot capture subtle details in the environment. We present a novel method to enhance the depth map by fusing the intensity and depth information to create more detailed range profiles. The lighting model we use can handle natural scene illumination. It is integrated in a shape from shading like technique to improve the visual fidelity of the reconstructed object. Unlike previous efforts in this domain, the detailed geometry is calculated directly, without the need to explicitly find and integrate surface normals. In addition, the proposed method operates four orders of magnitude faster than the state of the art. Qualitative and quantitative visual and statistical evidence support the improvement in the depth obtained by the suggested method.", "", "In this paper we extend the “shape, illumination and reflectance from shading” (SIRFS) model [3, 4], which recovers intrinsic scene properties from a single image. Though SIRFS performs well on images of segmented objects, it performs poorly on images of natural scenes, which contain occlusion and spatially-varying illumination. We therefore present Scene-SIRFS, a generalization of SIRFS in which we have a mixture of shapes and a mixture of illuminations, and those mixture components are embedded in a “soft” segmentation of the input image. We additionally use the noisy depth maps provided by RGB-D sensors (in this case, the Kinect) to improve shape estimation. Our model takes as input a single RGB-D image and produces as output an improved depth map, a set of surface normals, a reflectance image, a shading image, and a spatially varying model of illumination. The output of our model can be used for graphics applications, or for any application involving RGB-D images." ] }
1604.01354
2335954719
Recovering the radiometric properties of a scene (i.e., the reflectance, illumination, and geometry) is a long-sought ability of computer vision that can provide invaluable information for a wide range of applications. Deciphering the radiometric ingredients from the appearance of a real-world scene, as opposed to a single isolated object, is particularly challenging as it generally consists of various objects with different material compositions exhibiting complex reflectance and light interactions that are also part of the illumination. We introduce the first method for radiometric scene decomposition that handles those intricacies. We use RGB-D images to bootstrap geometry recovery and simultaneously recover the complex reflectance and natural illumination while refining the noisy initial geometry and segmenting the scene into different material regions. Most important, we handle real-world scenes consisting of multiple objects of unknown materials, which necessitates the modeling of spatially-varying complex reflectance, natural illumination, texture, interreflection and shadows. We systematically evaluate the effectiveness of our method on synthetic scenes and demonstrate its application to real-world scenes. The results show that rich radiometric information can be recovered from RGB-D images and demonstrate a new role RGB-D sensors can play for general scene understanding tasks.
Other methods attempt to estimate radiometric object (not scene) properties given very accurate geometry. Goldman et al. estimate geometry and spatially-varying BRDFs from a set of images under known point-light illumination @cite_3 . Similar to our approach, the authors note that objects are typically composed of a small number of materials. They estimate each of these "fundamental" materials and a material weight map that controls their mixture at each pixel. Our approach to modeling scene reflectance is similar but has several critical distinctions. First, the segmentation of our scene is imposed over the 3D geometry rather than in the image space to model multiple views of a scene. We also leverage a set of geometric bases to model the spatial segmentation that encourages contiguous material regions. Most important, we do not allow a mixture of fundamental materials (i.e., a surface point only exhibits one of @math base materials) and decouple the texture from reflectance, which we believe is a more faithful model of the complex appearance of real-world scenes.
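The two reflectance models can be contrasted in a simplified scalar-albedo form. This is only an illustration of the structural difference (real BRDFs are view- and light-dependent functions, not scalars): the weight-map model blends base materials per pixel, whereas the hard-assignment model gives each point exactly one base material modulated by a decoupled texture.

```python
import numpy as np

def mixture_reflectance(weights, base_brdfs):
    """Weight-map model: reflectance at each point is a convex combination
    of K base materials. weights: (N, K); base_brdfs: (K,) scalar albedos."""
    return weights @ base_brdfs

def hard_assignment_reflectance(labels, base_brdfs, texture):
    """Hard-assignment model: each point uses exactly one base material
    (labels: (N,) indices into base_brdfs), modulated by a texture map."""
    return base_brdfs[labels] * texture
```

The hard assignment keeps material regions discrete, so texture variation is explained by the texture map rather than by spurious material mixing.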
{ "cite_N": [ "@cite_3" ], "mid": [ "2165749022" ], "abstract": [ "This paper describes a photometric stereo method designed for surfaces with spatially-varying BRDFs, including surfaces with both varying diffuse and specular properties. Our method builds on the observation that most objects are composed of a small number of fundamental materials. This approach recovers not only the shape but also material BRDFs and weight maps, yielding compelling results for a wide variety of objects. We also show examples of interactive lighting and editing operations made possible by our method." ] }
1604.01282
2335881901
We prove that the facial nonrepetitive chromatic number of any outerplanar graph is at most 11 and of any planar graph is at most 22.
Barát and Varjú @cite_33 and Kündgen and Pelsmajer @cite_30 independently showed that @math if @math is outerplanar and, more generally, @math if @math has treewidth at most @math . (Barát and Varjú proved the latter bound with @math while Kündgen and Pelsmajer proved it with @math .) The bound of @math for @math -trees is tight if @math (trees), but it is not known whether it is tight for other values of @math . Even the upper bound of 12 for outerplanar graphs may not be tight, as no outerplanar graph with nonrepetitive chromatic number greater than 7 is known @cite_33 .
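On a path, a repetitively colored subpath is exactly a square in the color sequence (a block immediately followed by an identical block), so nonrepetitiveness of a path coloring reduces to a square-free check:

```python
def has_repetition(colors):
    """True iff some block of consecutive colors is immediately repeated,
    i.e. colors[i:i+m] == colors[i+m:i+2m] for some i and m >= 1."""
    n = len(colors)
    for i in range(n):
        for m in range(1, (n - i) // 2 + 1):
            if colors[i:i + m] == colors[i + m:i + 2 * m]:
                return True
    return False
```

For general graphs the condition must hold along every path, which is what makes bounding the nonrepetitive chromatic number hard; Thue's classical result says three colors already suffice for arbitrarily long paths.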
{ "cite_N": [ "@cite_30", "@cite_33" ], "mid": [ "2105469400", "2141481529" ], "abstract": [ "A sequence of the form s_1s_2...s_ms_1s_2...s_m is called a repetition. A vertex-coloring of a graph is called nonrepetitive if none of its paths is repetitively colored. We answer a question of Grytczuk [Thue type problems for graphs, points and numbers, manuscript] by proving that every outerplanar graph has a nonrepetitive 12-coloring. We also show that graphs of tree-width t have nonrepetitive 4^t-colorings.", "A sequence of symbols a_1, a_2, … is called square-free if it does not contain a subsequence of consecutive terms of the form x_1, …, x_m, x_1, …, x_m. A century ago Thue showed that there exist arbitrarily long square-free sequences using only three symbols. Sequences can be thought of as colors on the vertices of a path. Following the paper of Alon, Grytczuk, Haluszczak and Riordan, we examine graph colorings for which the color sequence is square-free on any path. The main result is that the vertices of any k-tree have a coloring of this kind using O(c^k) colors if c > 6. conjectured that a fixed number of colors suffices for any planar graph. We support this conjecture by showing that this number is at most 12 for outerplanar graphs. On the other hand we prove that some outerplanar graphs require at least 7 colors. Using this latter we construct planar graphs, for which at least 10 colors are necessary." ] }
1604.01325
2951239727
We propose a novel approach for instance-level image retrieval. It produces a global and compact fixed-length representation for each image by aggregating many region-wise descriptors. In contrast to previous works employing pre-trained deep networks as a black box to produce features, our method leverages a deep architecture trained for the specific task of image retrieval. Our contribution is twofold: (i) we leverage a ranking framework to learn convolution and projection weights that are used to build the region features; and (ii) we employ a region proposal network to learn which regions should be pooled to form the final global descriptor. We show that using clean training data is key to the success of our approach. To that aim, we use a large scale but noisy landmark dataset and develop an automatic cleaning approach. The proposed architecture produces a global image representation in a single forward pass. Our approach significantly outperforms previous approaches based on global descriptors on standard datasets. It even surpasses most prior works based on costly local descriptor indexing and spatial verification. Additional material is available at www.xrce.xerox.com Deep-Image-Retrieval.
Conventional image retrieval. Early techniques for instance-level retrieval are based on bag-of-features representations with large vocabularies and inverted files @cite_60 @cite_55 . Numerous methods to better approximate the matching of the descriptors have been proposed, see @cite_38 @cite_1 . An advantage of these techniques is that spatial verification can be employed to re-rank a short-list of results @cite_55 @cite_28 , yielding a significant improvement, albeit at substantial cost. Concurrently, methods that aggregate the local image patches have been considered. Encoding techniques such as the Fisher Vector @cite_39 or VLAD @cite_49 , combined with compression @cite_50 @cite_44 @cite_40 , produce global descriptors that scale to larger databases at the cost of reduced accuracy. All these methods can be combined with other post-processing techniques such as query expansion @cite_25 @cite_0 @cite_22 .
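VLAD, for instance, aggregates local descriptors by accumulating their residuals to the nearest visual word, yielding a single fixed-length vector per image. A minimal sketch, assuming the centroids (visual words) are precomputed, e.g. by k-means:

```python
import numpy as np

def vlad(descriptors, centroids):
    """VLAD encoding: assign each local descriptor (N, D) to its nearest
    centroid (K, D), accumulate residuals per centroid, then flatten and
    L2-normalize into a single (K*D,) global descriptor."""
    k, d = centroids.shape
    agg = np.zeros((k, d))
    # nearest-centroid assignment by squared Euclidean distance
    dists = ((descriptors[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    assign = np.argmin(dists, axis=1)
    for x, c in zip(descriptors, assign):
        agg[c] += x - centroids[c]
    v = agg.ravel()
    return v / (np.linalg.norm(v) + 1e-12)
```

The result is the compact global descriptor that, combined with compression, trades accuracy for scalability as described above.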
{ "cite_N": [ "@cite_38", "@cite_22", "@cite_60", "@cite_28", "@cite_55", "@cite_1", "@cite_39", "@cite_44", "@cite_0", "@cite_40", "@cite_49", "@cite_50", "@cite_25" ], "mid": [ "", "1979931042", "2128017662", "1972378554", "2141362318", "2088866137", "2147238549", "", "2023991840", "1645270849", "2012592962", "", "2100398441" ], "abstract": [ "", "The objective of this work is object retrieval in large scale image datasets, where the object is specified by an image query and retrieval should be immediate at run time in the manner of Video Google [28]. We make the following three contributions: (i) a new method to compare SIFT descriptors (RootSIFT) which yields superior performance without increasing processing or storage requirements; (ii) a novel method for query expansion where a richer model for the query is learnt discriminatively in a form suited to immediate retrieval through efficient use of the inverted index; (iii) an improvement of the image augmentation method proposed by Turcot and Lowe [29], where only the augmenting features which are spatially consistent with the augmented image are kept. We evaluate these three methods over a number of standard benchmark datasets (Oxford Buildings 5k and 105k, and Paris 6k) and demonstrate substantial improvements in retrieval performance whilst maintaining immediate retrieval speeds. Combining these complementary methods achieves a new state-of-the-art performance on these datasets.", "A recognition scheme that scales efficiently to a large number of objects is presented. The efficiency and quality is exhibited in a live demonstration that recognizes CD-covers from a database of 40000 images of popular music CD’s. The scheme builds upon popular techniques of indexing descriptors extracted from local regions, and is robust to background clutter and occlusion. The local region descriptors are hierarchically quantized in a vocabulary tree. 
The vocabulary tree allows a larger and more discriminatory vocabulary to be used efficiently, which we show experimentally leads to a dramatic improvement in retrieval quality. The most significant property of the scheme is that the tree directly defines the quantization. The quantization and the indexing are therefore fully integrated, essentially being one and the same. The recognition quality is evaluated through retrieval on a database with ground truth, showing the power of the vocabulary tree approach, going as high as 1 million images.", "State of the art methods for image and object retrieval exploit both appearance (via visual words) and local geometry (spatial extent, relative pose). In large scale problems, memory becomes a limiting factor - local geometry is stored for each feature detected in each image and requires storage larger than the inverted file and term frequency and inverted document frequency weights together. We propose a novel method for learning discretized local geometry representation based on minimization of average reprojection error in the space of ellipses. The representation requires only 24 bits per feature without drop in performance. Additionally, we show that if the gravity vector assumption is used consistently from the feature description to spatial verification, it improves retrieval performance and decreases the memory footprint. The proposed method outperforms state of the art retrieval algorithms in a standard image retrieval benchmark.", "In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. 
Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, \"web-scale \" image corpora.", "This article improves recent methods for large scale image search. We first analyze the bag-of-features approach in the framework of approximate nearest neighbor search. This leads us to derive a more precise representation based on Hamming embedding (HE) and weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within an inverted file and are efficiently exploited for all images in the dataset. We then introduce a graph-structured quantizer which significantly speeds up the assignment of the descriptors to visual words. A comparison with the state of the art shows the interest of our approach when high accuracy is needed. Experiments performed on three reference datasets and a dataset of one million of images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. 
Estimation of the full geometric transformation, i.e., a re-ranking step on a short-list of images, is shown to be complementary to our weak geometric consistency constraints. Our approach is shown to outperform the state-of-the-art on the three datasets.", "Within the field of pattern classification, the Fisher kernel is a powerful framework which combines the strengths of generative and discriminative approaches. The idea is to characterize a signal with a gradient vector derived from a generative probability model and to subsequently feed this representation to a discriminative classifier. We propose to apply this framework to image categorization where the input signals are images and where the underlying generative model is a visual vocabulary: a Gaussian mixture model which approximates the distribution of low-level features in images. We show that Fisher kernels can actually be understood as an extension of the popular bag-of-visterms. Our approach demonstrates excellent performance on two challenging databases: an in-house database of 19 object scene categories and the recently released VOC 2006 database. It is also very practical: it has low computational needs both at training and test time and vocabularies trained on one set of categories can be applied to another set without any significant loss in performance.", "", "Most effective particular object and image retrieval approaches are based on the bag-of-words (BoW) model. All state-of-the-art retrieval results have been achieved by methods that include a query expansion that brings a significant boost in performance. We introduce three extensions to automatic query expansion: (i) a method capable of preventing tf-idf failure caused by the presence of sets of correlated features (confusers), (ii) an improved spatial verification and re-ranking step that incrementally builds a statistical model of the query object and (iii) we learn relevant spatial context to boost retrieval performance. 
The three improvements of query expansion were evaluated on standard Paris and Oxford datasets according to a standard protocol, and state-of-the-art results were achieved.", "This paper addresses the construction of a short-vector (128D) image representation for large-scale image and particular object retrieval. In particular, the method of joint dimensionality reduction of multiple vocabularies is considered. We study a variety of vocabulary generation techniques: different k-means initializations, different descriptor transformations, different measurement regions for descriptor extraction. Our extensive evaluation shows that different combinations of vocabularies, each partitioning the descriptor space in a different yet complementary manner, results in a significant performance improvement, which exceeds the state-of-the-art.", "We address the problem of image search on a very large scale, where three constraints have to be considered jointly: the accuracy of the search, its efficiency, and the memory usage of the representation. We first propose a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation. We then show how to jointly optimize the dimension reduction and the indexing algorithm, so that it best preserves the quality of vector comparison. The evaluation shows that our approach significantly outperforms the state of the art: the search accuracy is comparable to the bag-of-features approach for an image representation that fits in 20 bytes. Searching a 10 million image dataset takes about 50ms.", "", "Given a query image of an object, our objective is to retrieve all instances of that object in a large (1M+) image database. We adopt the bag-of-visual-words architecture which has proven successful in achieving high precision at low recall. 
Unfortunately, feature detection and quantization are noisy processes and this can result in variation in the particular visual words that appear in different images of the same object, leading to missed results. In the text retrieval literature a standard method for improving performance is query expansion. A number of the highly ranked documents from the original query are reissued as a new query. In this way, additional relevant terms can be added to the query. This is a form of blind rele- vance feedback and it can fail if 'outlier' (false positive) documents are included in the reissued query. In this paper we bring query expansion into the visual domain via two novel contributions. Firstly, strong spatial constraints between the query image and each result allow us to accurately verify each return, suppressing the false positives which typically ruin text-based query expansion. Secondly, the verified images can be used to learn a latent feature model to enable the controlled construction of expanded queries. We illustrate these ideas on the 5000 annotated image Oxford building database together with more than 1M Flickr images. We show that the precision is substantially boosted, achieving total recall in many cases." ] }
1604.01325
2951239727
We propose a novel approach for instance-level image retrieval. It produces a global and compact fixed-length representation for each image by aggregating many region-wise descriptors. In contrast to previous works employing pre-trained deep networks as a black box to produce features, our method leverages a deep architecture trained for the specific task of image retrieval. Our contribution is twofold: (i) we leverage a ranking framework to learn convolution and projection weights that are used to build the region features; and (ii) we employ a region proposal network to learn which regions should be pooled to form the final global descriptor. We show that using clean training data is key to the success of our approach. To that aim, we use a large scale but noisy landmark dataset and develop an automatic cleaning approach. The proposed architecture produces a global image representation in a single forward pass. Our approach significantly outperforms previous approaches based on global descriptors on standard datasets. It even surpasses most prior works based on costly local descriptor indexing and spatial verification. Additional material is available at www.xrce.xerox.com Deep-Image-Retrieval.
CNN-based retrieval. After their success in classification @cite_14 , CNN features were used as off-the-shelf features for image retrieval @cite_27 @cite_19 . Although they outperform other standard global descriptors, their performance is significantly below the state of the art. Several improvements were proposed to overcome their lack of robustness to scaling, cropping and image clutter. @cite_27 performs region cross-matching and accumulates the maximum similarity per query region. @cite_30 applies sum-pooling to whitened region descriptors. @cite_29 extends @cite_30 by allowing cross-dimensional weighting and aggregation of neural codes. Other approaches proposed hybrid models that involve an encoding technique, such as FV @cite_56 or VLAD @cite_3 @cite_51 , potentially also learnt @cite_5 , as one of their components.
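Region-wise aggregation of convolutional activations, in the spirit of the pooling approaches above, can be sketched as follows. The fixed rectangular region layout and the normalize-sum-normalize scheme are simplifying assumptions, not any one paper's exact recipe:

```python
import numpy as np

def regional_descriptor(feature_map, regions):
    """Max-pool a (C, H, W) activation map over each region, L2-normalize
    each regional vector, then sum and renormalize into one global (C,)
    descriptor. regions: list of (y0, y1, x0, x1) boxes."""
    def l2(v):
        return v / (np.linalg.norm(v) + 1e-12)
    pooled = [
        l2(feature_map[:, y0:y1, x0:x1].max(axis=(1, 2)))
        for (y0, y1, x0, x1) in regions
    ]
    return l2(np.sum(pooled, axis=0))
```

Replacing the fixed boxes with learned region proposals, and training the pooling end-to-end, is precisely the direction the paper's own contribution takes.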
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_29", "@cite_3", "@cite_56", "@cite_19", "@cite_27", "@cite_5", "@cite_51" ], "mid": [ "1833123814", "", "2951136860", "1524680991", "", "", "", "2951019013", "2212363941" ], "abstract": [ "Several recent works have shown that image descriptors produced by deep convolutional neural networks provide state-of-the-art performance for image classification and retrieval problems. It has also been shown that the activations from the convolutional layers can be interpreted as local features describing particular image regions. These local features can be aggregated using aggregation approaches developed for local features (e.g. Fisher vectors), thus providing new powerful global descriptors. In this paper we investigate possible ways to aggregate local deep features to produce compact global descriptors for image retrieval. First, we show that deep features and traditional hand-engineered features have quite different distributions of pairwise similarities, hence existing aggregation methods have to be carefully re-evaluated. Such re-evaluation reveals that in contrast to shallow features, the simple aggregation method based on sum pooling provides arguably the best performance for deep convolutional features. This method is efficient, has few parameters, and bears little risk of overfitting when e.g. learning the PCA matrix. Overall, the new compact global descriptor improves the state-of-the-art on four common benchmarks considerably.", "", "We propose a simple and straightforward way of creating powerful image representations via cross-dimensional weighting and aggregation of deep convolutional neural network layer outputs. We first present a generalized framework that encompasses a broad family of approaches and includes cross-dimensional pooling and weighting steps. 
We then propose specific non-parametric schemes for both spatial- and channel-wise weighting that boost the effect of highly active spatial responses and at the same time regulate burstiness effects. We experiment on different public datasets for image search and show that our approach outperforms the current state-of-the-art for approaches based on pre-trained networks. We also provide an easy-to-use, open source implementation that reproduces our results.", "Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations lack geometric invariance, which limits their robustness for classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (MOP-CNN). This scheme extracts CNN activations for local patches at multiple scale levels, performs orderless VLAD pooling of these activations at each level separately, and concatenates the result. The resulting MOP-CNN representation can be used as a generic feature for either supervised or unsupervised recognition tasks, from image classification to instance-level retrieval; it consistently outperforms global CNN activations without requiring any joint training of prediction layers for a particular target dataset. In absolute terms, it achieves state-of-the-art results on the challenging SUN397 and MIT Indoor Scenes classification datasets, and competitive results on ILSVRC2012 2013 classification and INRIA Holidays retrieval datasets.", "", "", "", "We tackle the problem of large scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph. We present the following three principal contributions. 
First, we develop a convolutional neural network (CNN) architecture that is trainable in an end-to-end manner directly for the place recognition task. The main component of this architecture, NetVLAD, is a new generalized VLAD layer, inspired by the \"Vector of Locally Aggregated Descriptors\" image representation commonly used in image retrieval. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. Second, we develop a training procedure, based on a new weakly supervised ranking loss, to learn parameters of the architecture in an end-to-end manner from images depicting the same places over time downloaded from Google Street View Time Machine. Finally, we show that the proposed architecture significantly outperforms non-learnt image representations and off-the-shelf CNN descriptors on two challenging place recognition benchmarks, and improves over current state-of-the-art compact image representations on standard image retrieval benchmarks.", "Patch-level descriptors underlie several important computer vision tasks, such as stereo-matching or content-based image retrieval. We introduce a deep convolutional architecture that yields patch-level descriptors, as an alternative to the popular SIFT descriptor for image retrieval. The proposed family of descriptors, called Patch-CKN, adapt the recently introduced Convolutional Kernel Network (CKN), an unsupervised framework to learn convolutional architectures. We present a comparison framework to benchmark current deep convolutional approaches along with Patch-CKN for both patch and image retrieval, including our novel \"RomePatches\" dataset. Patch-CKN descriptors yield competitive results compared to supervised CNN alternatives on patch and image retrieval." ] }
1604.01325
2951239727
We propose a novel approach for instance-level image retrieval. It produces a global and compact fixed-length representation for each image by aggregating many region-wise descriptors. In contrast to previous works employing pre-trained deep networks as a black box to produce features, our method leverages a deep architecture trained for the specific task of image retrieval. Our contribution is twofold: (i) we leverage a ranking framework to learn convolution and projection weights that are used to build the region features; and (ii) we employ a region proposal network to learn which regions should be pooled to form the final global descriptor. We show that using clean training data is key to the success of our approach. To that aim, we use a large scale but noisy landmark dataset and develop an automatic cleaning approach. The proposed architecture produces a global image representation in a single forward pass. Our approach significantly outperforms previous approaches based on global descriptors on standard datasets. It even surpasses most prior works based on costly local descriptor indexing and spatial verification. Additional material is available at www.xrce.xerox.com Deep-Image-Retrieval.
Siamese networks and metric learning. Siamese networks have commonly been used for metric learning @cite_41 , dimensionality reduction @cite_6 , learning image descriptors @cite_58 , and performing face identification @cite_33 @cite_12 @cite_8 . Recently, triplet networks (i.e., three-stream Siamese networks) have been considered for metric learning @cite_54 @cite_59 and face identification @cite_47 . However, these Siamese networks usually rely on simpler network architectures than the one we use here, which involves pooling and aggregation of several regions.
{ "cite_N": [ "@cite_33", "@cite_8", "@cite_41", "@cite_54", "@cite_6", "@cite_59", "@cite_47", "@cite_58", "@cite_12" ], "mid": [ "2157364932", "", "2176040302", "1839408879", "2138621090", "", "2096733369", "1869500417", "2076434944" ], "abstract": [ "We present a method for training a similarity metric from data. The method can be used for recognition or verification applications where the number of categories is very large and not known during training, and where the number of training samples for a single category is very small. The idea is to learn a function that maps input patterns into a target space such that the L sub 1 norm in the target space approximates the \"semantic\" distance in the input space. The method is applied to a face verification task. The learning process minimizes a discriminative loss function that drives the similarity metric to be small for pairs of faces from the same person, and large for pairs from different persons. The mapping from raw to the target space is a convolutional network whose architecture is designed for robustness to geometric distortions. The system is tested on the Purdue AR face database which has a very high degree of variability in the pose, lighting, expression, position, and artificial occlusions such as dark glasses and obscuring scarves.", "", "Learning the distance metric between pairs of examples is of great importance for learning and visual recognition. With the remarkable success from the state of the art convolutional neural networks, recent works have shown promising results on discriminatively training the networks to learn semantic feature embeddings where similar examples are mapped close to each other and dissimilar examples are mapped farther apart. In this paper, we describe an algorithm for taking full advantage of the training batches in the neural network training by lifting the vector of pairwise distances within the batch to the matrix of pairwise distances. 
This step enables the algorithm to learn the state of the art feature embedding by optimizing a novel structured prediction objective on the lifted problem. Additionally, we collected the Online Products dataset: 120k images of 23k classes of online products for metric learning. Our experiments on the CUB-200-2011, CARS196, and Online Products datasets demonstrate significant improvement over existing deep feature embedding methods on all experimented embedding sizes with the GoogLeNet network.", "Deep learning has proven itself as a successful set of models for learning useful semantic representations of data. These, however, are mostly implicitly learned as part of a classification task. In this paper we propose the triplet network model, which aims to learn useful representations by distance comparisons. A similar model was defined by (2014), tailor made for learning a ranking for image information retrieval. Here we demonstrate using various datasets that our model learns a better representation than that of its immediate competitor, the Siamese network. We also discuss future possible usage as a framework for unsupervised learning.", "Dimensionality reduction involves mapping a set of high dimensional input points onto a low dimensional manifold so that \"similar\" points in input space are mapped to nearby points on the manifold. We present a method - called Dimensionality Reduction by Learning an Invariant Mapping (DrLIM) - for learning a globally coherent nonlinear function that maps the data evenly to the output manifold. The learning relies solely on neighborhood relationships and does not require any distance measure in the input space. The method can learn mappings that are invariant to certain transformations of the inputs, as is demonstrated with a number of experiments.
Comparisons are made to other techniques, in particular LLE.", "", "Despite significant recent advances in the field of face recognition [10, 14, 15, 17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors.", "Deep learning has revolutionalized image-level tasks such as classification, but patch-level tasks, such as correspondence, still rely on hand-crafted features, e.g. SIFT. In this paper we use Convolutional Neural Networks (CNNs) to learn discriminant patch representations and in particular train a Siamese network with pairs of (non-)corresponding patches. We deal with the large number of potential pairs with the combination of a stochastic sampling of the training set and an aggressive mining strategy biased towards patches that are hard to classify. By using the L2 distance during both training and testing we develop 128-D descriptors whose euclidean distances reflect patch similarity, and which can be used as a drop-in replacement for any task involving SIFT. We demonstrate consistent performance gains over the state of the art, and generalize well against scaling and rotation, perspective transformation, non-rigid deformation, and illumination changes. Our descriptors are efficient to compute and amenable to modern GPUs, and are publicly available.", "This paper presents a new discriminative deep metric learning (DDML) method for face verification in the wild. 
Different from existing metric learning-based face verification methods which aim to learn a Mahalanobis distance metric to maximize the inter-class variations and minimize the intra-class variations, simultaneously, the proposed DDML trains a deep neural network which learns a set of hierarchical nonlinear transformations to project face pairs into the same feature subspace, under which the distance of each positive face pair is less than a smaller threshold and that of each negative pair is higher than a larger threshold, respectively, so that discriminative information can be exploited in the deep network. Our method achieves very competitive face verification performance on the widely used LFW and YouTube Faces (YTF) datasets." ] }
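The triplet (three-stream) objective referenced in the record above can be illustrated with a minimal margin-based hinge loss. The function name and toy embeddings are illustrative assumptions, not code from the cited works:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss: the positive must sit closer to the
    anchor than the negative by at least `margin` (squared L2 distances)."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

# Toy 2-D embeddings: the positive lies near the anchor, the negative far away.
a = np.array([1.0, 0.0])
p = np.array([0.9, 0.1])
n = np.array([-1.0, 0.0])
loss = triplet_loss(a, p, n)   # margin already satisfied, so the loss is 0.0
```

In training, the gradient of this loss with respect to the three embeddings is what pulls matching pairs together and pushes non-matching pairs apart.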
1604.00589
2341067113
In this study, we propose a variation of RAdNet for vehicular environments (RAdNet-VE). The proposed scheme extends the message header, the mechanism for registering interests, and the message forwarding mechanism of RAdNet. To obtain results, we performed simulation experiments involving two use scenarios and communication protocols developed with the Veins framework. Based on the results of these experiments, we compare the performance of RAdNet-VE against those of RAdNet, a basic content-centric network (CCN) using reactive data routing (CCN @math ), and a basic CCN using proactive data routing (CCN @math ). These CCNs provide non-cacheable data services. The communication radio standards adopted in scenarios 1 and 2 were IEEE 802.11n and IEEE 802.11p, respectively. The results showed that the performance of RAdNet-VE was superior to those of RAdNet, CCN @math , and CCN @math : the RAdNet-VE protocol (RVEP) achieved low communication latencies among nodes of just 20.4 ms (scenario 1) and 2.87 ms (scenario 2), as well as high data delivery rates of 83.05 (scenario 1) and 88.05 (scenario 2). Based on these and other results presented in this study, we argue that RAdNet-VE is a feasible alternative to CCNs as an information-centric network (ICN) model for VANETs, because the RVEP satisfies all of the necessary communication requirements.
There have been several studies on CCNs for VANETs. The most prominent are those conducted by Arnould et al. @cite_27 , Amadeo et al. @cite_4 @cite_3 , and Wang et al. @cite_18 @cite_25 .
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_3", "@cite_27", "@cite_25" ], "mid": [ "1988342793", "2105205352", "2075120071", "2030307498", "2014952121" ], "abstract": [ "Vehicular networking is becoming reality. Today vehicles use TCP IP to communicate with centralized servers through cellular networks. However many vehicular applications, such as information sharing for safety and real time traffic purposes, desire direct V2V communications which is difficult to achieve using the existing solutions. This paper explores the named-data approach to address this challenge. We use case studies to identify the design requirements and put forth a strawman proposal for the data name design to understand its advantages and limitations.", "Content-centric networking is a new paradigm conceived for future Internet architectures, where communications are driven by contents instead of host addresses. This paradigm has key potentialities to enable effective and efficient communications in the challenging vehicular environment characterized by short-lived connectivity and highly dynamic network topologies. We design CRoWN, a content-centric framework for vehicular ad-hoc networks, which is implemented on top of the IEEE 802.11p standard layers and is fully compliant with them. Performance comparison against the legacy IP-based approach demonstrates the superiority of CRoWN, thus paving the way for content-centric vehicular networking.", "Content-Centric Networking (CCN) is a new popular communication paradigm that achieves information retrieval and distribution by using named data instead of end-to-end host-centric communications. This innovative model particularly fits mobile wireless environments characterized by dynamic topologies, unreliable broadcast channels, short-lived and intermittent connectivity, as proven by preliminary works in the literature. 
In this paper we extend the CCN framework to efficiently and reliably support content delivery on top of IEEE 802.11p vehicular technology. Achieved results show that the proposed solution, by leveraging distributed broadcast storm mitigation techniques, simple transport routines, and lightweight soft-state forwarding procedures, brings significant improvements w.r.t. a plain CCN model, confirming the effectiveness and efficiency of our design choices.", "This paper proposes a novel network architecture based on content routing and dissemination techniques, expressly adapted for hybrid mobile and vehicular ad-hoc networks. Hybrid vehicular ad-hoc networks (VANETs) are a subclass of mobile ad-hoc networks (MANETs) where each node is a moving vehicle equipped with one or more communication devices, potentially having access to several different physical communication channels. This results in a partly decentralized, self-organizing mobile radio network with a fast changing topology. Applications designed for VANETs still require efficient mapping techniques to associate content to the host serving it, as well as centralized databases associating vehicles and mobile nodes to a persistent and unique identifier. They also assume that the content providers are located in the infrastructure, but nodes and vehicles can be equipped with sensors and as such produce information. This means that dissemination and retrieval algorithms have to take into account the distribution of the consumers, providers and forwarder all over the network. We propose an architecture based on Van Jacobson's Content Centric Networking, extended and modified to solve some of the issues related to content access and dissemination across hybrid VANETs. 
The implementation and simulation on a realistic platform are presented as well as an application to a real case scenario, demonstrating the efficiency of the approach.", "Network use has evolved to be dominated by content distribution and retrieval, while networking technology still speaks only of connections between hosts. Accessing content and services requires mapping from the what that users care about to the network's where. We present Content-Centric Networking (CCN) which treats content as a primitive - decoupling location from identity, security and access, and retrieving content by name. Using new approaches to routing named content, derived heavily from IP, we can simultaneously achieve scalability, security and performance. We implemented our architecture's basic features and demonstrate resilience and performance with secure file downloads and VoIP calls." ] }
1604.00589
2341067113
In this study, we propose a variation of RAdNet for vehicular environments (RAdNet-VE). The proposed scheme extends the message header, the mechanism for registering interests, and the message forwarding mechanism of RAdNet. To obtain results, we performed simulation experiments involving two use scenarios and communication protocols developed with the Veins framework. Based on the results of these experiments, we compare the performance of RAdNet-VE against those of RAdNet, a basic content-centric network (CCN) using reactive data routing (CCN @math ), and a basic CCN using proactive data routing (CCN @math ). These CCNs provide non-cacheable data services. The communication radio standards adopted in scenarios 1 and 2 were IEEE 802.11n and IEEE 802.11p, respectively. The results showed that the performance of RAdNet-VE was superior to those of RAdNet, CCN @math , and CCN @math : the RAdNet-VE protocol (RVEP) achieved low communication latencies among nodes of just 20.4 ms (scenario 1) and 2.87 ms (scenario 2), as well as high data delivery rates of 83.05 (scenario 1) and 88.05 (scenario 2). Based on these and other results presented in this study, we argue that RAdNet-VE is a feasible alternative to CCNs as an information-centric network (ICN) model for VANETs, because the RVEP satisfies all of the necessary communication requirements.
Arnould et al. @cite_27 applied a CCN model to disseminate critical information in a hybrid VANET. Their study proposed an active data delivery mechanism, named the event packet, which does not require the prior sending of an interest packet for data delivery to occur. The publisher detects critical events using sensors embedded in a vehicle and broadcasts event packets containing information related to delay-sensitive events, such as accident information, safety alerts, and collision warnings. However, the authors modified the original CCN architecture to control the dissemination of event packets according to the bandwidth they require. This increased complexity may cause operational failures in high-demand environments.
{ "cite_N": [ "@cite_27" ], "mid": [ "2030307498" ], "abstract": [ "This paper proposes a novel network architecture based on content routing and dissemination techniques, expressly adapted for hybrid mobile and vehicular ad-hoc networks. Hybrid vehicular ad-hoc networks (VANETs) are a subclass of mobile ad-hoc networks (MANETs) where each node is a moving vehicle equipped with one or more communication devices, potentially having access to several different physical communication channels. This results in a partly decentralized, self-organizing mobile radio network with a fast changing topology. Applications designed for VANETs still require efficient mapping techniques to associate content to the host serving it, as well as centralized databases associating vehicles and mobile nodes to a persistent and unique identifier. They also assume that the content providers are located in the infrastructure, but nodes and vehicles can be equipped with sensors and as such produce information. This means that dissemination and retrieval algorithms have to take into account the distribution of the consumers, providers and forwarder all over the network. We propose an architecture based on Van Jacobson's Content Centric Networking, extended and modified to solve some of the issues related to content access and dissemination across hybrid VANETs. The implementation and simulation on a realistic platform are presented as well as an application to a real case scenario, demonstrating the efficiency of the approach." ] }
1604.00589
2341067113
In this study, we propose a variation of RAdNet for vehicular environments (RAdNet-VE). The proposed scheme extends the message header, the mechanism for registering interests, and the message forwarding mechanism of RAdNet. To obtain results, we performed simulation experiments involving two use scenarios and communication protocols developed with the Veins framework. Based on the results of these experiments, we compare the performance of RAdNet-VE against those of RAdNet, a basic content-centric network (CCN) using reactive data routing (CCN @math ), and a basic CCN using proactive data routing (CCN @math ). These CCNs provide non-cacheable data services. The communication radio standards adopted in scenarios 1 and 2 were IEEE 802.11n and IEEE 802.11p, respectively. The results showed that the performance of RAdNet-VE was superior to those of RAdNet, CCN @math , and CCN @math : the RAdNet-VE protocol (RVEP) achieved low communication latencies among nodes of just 20.4 ms (scenario 1) and 2.87 ms (scenario 2), as well as high data delivery rates of 83.05 (scenario 1) and 88.05 (scenario 2). Based on these and other results presented in this study, we argue that RAdNet-VE is a feasible alternative to CCNs as an information-centric network (ICN) model for VANETs, because the RVEP satisfies all of the necessary communication requirements.
Amadeo et al. @cite_4 @cite_3 proposed the CRoWN architecture @cite_4 and content-centric vehicular networking (CCVN) @cite_3 , CCN-based frameworks for VANETs, and evaluated their performance using the IEEE 802.11p standard. According to @cite_4 , CCN-based VANETs exhibit better performance than IP-based VANETs in terms of data transmission and the load balance of vehicles in the network, and suffer less performance degradation as the data volume increases. Moreover, the authors divided the interest packets into two sub-types: basic interests (B-Int) and advanced interests (A-Int). B-Int was sent when a consumer wished to discover content and requested the first segment, whereas A-Int was used to request subsequent content from previously discovered providers. In addition, the authors introduced a new data structure, named the content provider table (CPT), to replace the forwarding information base (FIB). The CPT stores information regarding providers that have already been discovered and associates the MAC addresses of these nodes with the content. Thus, the authors discarded the main CCN premise, which is the independence of content from its physical location. This conceptual rupture with the original CCN proposal may compromise support for the mobility of nodes.
{ "cite_N": [ "@cite_4", "@cite_3" ], "mid": [ "2105205352", "2075120071" ], "abstract": [ "Content-centric networking is a new paradigm conceived for future Internet architectures, where communications are driven by contents instead of host addresses. This paradigm has key potentialities to enable effective and efficient communications in the challenging vehicular environment characterized by short-lived connectivity and highly dynamic network topologies. We design CRoWN, a content-centric framework for vehicular ad-hoc networks, which is implemented on top of the IEEE 802.11p standard layers and is fully compliant with them. Performance comparison against the legacy IP-based approach demonstrates the superiority of CRoWN, thus paving the way for content-centric vehicular networking.", "Content-Centric Networking (CCN) is a new popular communication paradigm that achieves information retrieval and distribution by using named data instead of end-to-end host-centric communications. This innovative model particularly fits mobile wireless environments characterized by dynamic topologies, unreliable broadcast channels, short-lived and intermittent connectivity, as proven by preliminary works in the literature. In this paper we extend the CCN framework to efficiently and reliably support content delivery on top of IEEE 802.11p vehicular technology. Achieved results show that the proposed solution, by leveraging distributed broadcast storm mitigation techniques, simple transport routines, and lightweight soft-state forwarding procedures, brings significant improvements w.r.t. a plain CCN model, confirming the effectiveness and efficiency of our design choices." ] }
1604.00589
2341067113
In this study, we propose a variation of RAdNet for vehicular environments (RAdNet-VE). The proposed scheme extends the message header, the mechanism for registering interests, and the message forwarding mechanism of RAdNet. To obtain results, we performed simulation experiments involving two use scenarios and communication protocols developed with the Veins framework. Based on the results of these experiments, we compare the performance of RAdNet-VE against those of RAdNet, a basic content-centric network (CCN) using reactive data routing (CCN @math ), and a basic CCN using proactive data routing (CCN @math ). These CCNs provide non-cacheable data services. The communication radio standards adopted in scenarios 1 and 2 were IEEE 802.11n and IEEE 802.11p, respectively. The results showed that the performance of RAdNet-VE was superior to those of RAdNet, CCN @math , and CCN @math : the RAdNet-VE protocol (RVEP) achieved low communication latencies among nodes of just 20.4 ms (scenario 1) and 2.87 ms (scenario 2), as well as high data delivery rates of 83.05 (scenario 1) and 88.05 (scenario 2). Based on these and other results presented in this study, we argue that RAdNet-VE is a feasible alternative to CCNs as an information-centric network (ICN) model for VANETs, because the RVEP satisfies all of the necessary communication requirements.
Wang et al. @cite_18 presented a CCN architecture, but did not consider which applications or events would significantly affect its design. Moreover, the proposed architecture cannot be used efficiently in applications based on vehicle-to-vehicle communications. Wang et al. @cite_25 also proposed a packet dissemination mechanism to reduce the latency of content delivery. This mechanism employed timers to coordinate packet sending among the network nodes. However, the mechanism did not establish limits on interest dissemination within a geographical area, so the flooding problem persisted. Finally, their evaluation did not consider how the mobility of nodes would affect the proposed mechanism.
{ "cite_N": [ "@cite_18", "@cite_25" ], "mid": [ "1988342793", "2014952121" ], "abstract": [ "Vehicular networking is becoming reality. Today vehicles use TCP IP to communicate with centralized servers through cellular networks. However many vehicular applications, such as information sharing for safety and real time traffic purposes, desire direct V2V communications which is difficult to achieve using the existing solutions. This paper explores the named-data approach to address this challenge. We use case studies to identify the design requirements and put forth a strawman proposal for the data name design to understand its advantages and limitations.", "Network use has evolved to be dominated by content distribution and retrieval, while networking technology still speaks only of connections between hosts. Accessing content and services requires mapping from the what that users care about to the network's where. We present Content-Centric Networking (CCN) which treats content as a primitive - decoupling location from identity, security and access, and retrieving content by name. Using new approaches to routing named content, derived heavily from IP, we can simultaneously achieve scalability, security and performance. We implemented our architecture's basic features and demonstrate resilience and performance with secure file downloads and VoIP calls." ] }
1604.00589
2341067113
In this study, we propose a variation of RAdNet for vehicular environments (RAdNet-VE). The proposed scheme extends the message header, the mechanism for registering interests, and the message forwarding mechanism of RAdNet. To obtain results, we performed simulation experiments involving two use scenarios and communication protocols developed with the Veins framework. Based on the results of these experiments, we compare the performance of RAdNet-VE against those of RAdNet, a basic content-centric network (CCN) using reactive data routing (CCN @math ), and a basic CCN using proactive data routing (CCN @math ). These CCNs provide non-cacheable data services. The communication radio standards adopted in scenarios 1 and 2 were IEEE 802.11n and IEEE 802.11p, respectively. The results showed that the performance of RAdNet-VE was superior to those of RAdNet, CCN @math , and CCN @math : the RAdNet-VE protocol (RVEP) achieved low communication latencies among nodes of just 20.4 ms (scenario 1) and 2.87 ms (scenario 2), as well as high data delivery rates of 83.05 (scenario 1) and 88.05 (scenario 2). Based on these and other results presented in this study, we argue that RAdNet-VE is a feasible alternative to CCNs as an information-centric network (ICN) model for VANETs, because the RVEP satisfies all of the necessary communication requirements.
The main gap in the CCN literature is the absence of studies that consider scenarios based on applications that use non-cacheable data services @cite_2 . Therefore, the studies described in this section provided the background to build two types of CCN with a non-cacheable data service: (i) CCN @math , a basic implementation of a CCN with a non-cacheable data service using reactive data routing @cite_3 ; and (ii) CCN @math , a basic implementation using proactive data routing @cite_25 . We created these CCNs to compare their performance with that of our proposed RVEP.
{ "cite_N": [ "@cite_25", "@cite_3", "@cite_2" ], "mid": [ "2014952121", "2075120071", "" ], "abstract": [ "Network use has evolved to be dominated by content distribution and retrieval, while networking technology still speaks only of connections between hosts. Accessing content and services requires mapping from the what that users care about to the network's where. We present Content-Centric Networking (CCN) which treats content as a primitive - decoupling location from identity, security and access, and retrieving content by name. Using new approaches to routing named content, derived heavily from IP, we can simultaneously achieve scalability, security and performance. We implemented our architecture's basic features and demonstrate resilience and performance with secure file downloads and VoIP calls.", "Content-Centric Networking (CCN) is a new popular communication paradigm that achieves information retrieval and distribution by using named data instead of end-to-end host-centric communications. This innovative model particularly fits mobile wireless environments characterized by dynamic topologies, unreliable broadcast channels, short-lived and intermittent connectivity, as proven by preliminary works in the literature. In this paper we extend the CCN framework to efficiently and reliably support content delivery on top of IEEE 802.11p vehicular technology. Achieved results show that the proposed solution, by leveraging distributed broadcast storm mitigation techniques, simple transport routines, and lightweight soft-state forwarding procedures, brings significant improvements w.r.t. a plain CCN model, confirming the effectiveness and efficiency of our design choices.", "" ] }
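The interest/data exchange that the CCN baselines above rely on can be sketched with a toy node holding a content store and a pending-interest table (PIT); disabling the cache models the non-cacheable data service discussed in the record. Class, method, and name-string choices are illustrative assumptions, not the CCNx implementation:

```python
class CcnNode:
    """Toy content-centric forwarding state: a content store (cache)
    plus a pending-interest table (PIT). Setting cacheable=False models
    a non-cacheable data service, where every interest must travel to
    the producer. Names are illustrative, not the CCNx wire format."""

    def __init__(self, cacheable=True):
        self.cacheable = cacheable
        self.store = {}   # content store: name -> data
        self.pit = {}     # PIT: name -> faces awaiting the data

    def on_interest(self, name, face):
        if name in self.store:                    # satisfied from cache
            return ("data", self.store[name])
        self.pit.setdefault(name, []).append(face)
        return ("forward", None)                  # send interest upstream

    def on_data(self, name, data):
        faces = self.pit.pop(name, [])            # consume PIT entries
        if self.cacheable:
            self.store[name] = data
        return faces                              # faces to send data back on

# Non-cacheable service: a repeat interest for the same name is forwarded again.
node = CcnNode(cacheable=False)
first = node.on_interest("/traffic/alert", face=1)
back = node.on_data("/traffic/alert", b"jam")
second = node.on_interest("/traffic/alert", face=2)
```

With `cacheable=True` the second interest would instead be answered from the content store, which is exactly the behavior the non-cacheable scenarios in this study rule out.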
1604.00590
2340516326
Preferential attachment models have been widely studied in complex networks, because they can explain the formation of many networks, such as social networks, citation networks, power grids, and biological networks, to name a few. Motivated by the application of key predistribution in wireless sensor networks (WSN), we initiate the study of preferential attachment with a degree bound. Our paper makes two important contributions to two different areas. The first is a contribution to the study of complex networks: we propose a preferential attachment model with a degree bound for the first time. In the normal preferential attachment model, the degree distribution follows a power law, with many nodes of low degree and a few nodes of high degree. In our scheme, the nodes can have a maximum degree @math , where @math is an integer chosen according to the application. The second is a contribution to the security of wireless sensor networks: we propose a new key predistribution scheme based on the above model. The important features of this model are that the network is fully connected, uses fewer keys, has a larger giant component and a lower average path length than traditional key predistribution schemes, and offers comparable resilience to random node attacks. We argue that in many networks, such as key predistribution networks and the Internet of Things, having nodes of very high degree will be a bottleneck in communication. Thus, studying the preferential attachment model with a degree bound will open up new directions in the study of complex networks and will have many applications in real-world scenarios.
Thereafter, many key predistribution schemes have been proposed, which trade off storage against resilience and connectivity. Combinatorial schemes like @cite_15 @cite_6 @cite_13 are a class of deterministic schemes that were very popular because of their simplicity of construction and the simple algorithms for finding key-sharing neighbors. However, such schemes do not scale well and have poor resilience. Camtepe and Yener @cite_15 were the first to propose such schemes using combinatorial designs. Lee and Stinson @cite_6 proposed KPD schemes using transversal designs, while Ruj and Roy @cite_13 @cite_3 used partially balanced incomplete block designs and Reed-Solomon codes. In Section , we will show that the Lee-Stinson (LS) scheme @cite_6 has very poor resilience compared to random key predistribution schemes. A survey of such schemes appears in @cite_23 @cite_7 .
{ "cite_N": [ "@cite_7", "@cite_3", "@cite_6", "@cite_23", "@cite_15", "@cite_13" ], "mid": [ "2150604138", "1799001532", "1529909807", "2080765376", "", "" ], "abstract": [ "We address pairwise and (for the first time) triple key establishment problems in wireless sensor networks (WSN). We use combinatorial designs to establish pairwise keys between nodes in a WSN. A BIBD(v; b; r; k; λ) (or t - (v; b; r; k; λ)) design can be mapped to a sensor network, where v represents the size of the key pool, b represents the maximum number of nodes that the network can support, k represents the size of the key chain. Any pair (or t-subset) of keys occurs together uniquely in exactly λ nodes. λ = 2 and λ = 3 are used to establish unique pairwise or triple keys. Our pairwise key distribution is the first one that is fully secure (none of the links among uncompromised nodes is affected) and applicable for mobile sensor networks (as key distribution is independent on the connectivity graph), while preserving low storage, computation and communication requirements. We also use combinatorial trades to establish pairwise keys. This is the first time that trades are being applied to key management. We describe a new construction of Strong Steiner Trades. We introduce a novel concept of triple key distribution, in which a common key is established between three nodes. This allows secure passive monitoring of forwarding progress in routing tasks. We present a polynomial-based approach and a combinatorial approach (using trades) for triple key distribution.", "In this paper we propose novel deterministic key predistribution schemes using codes. In particular we use Reed Solomon codes to present key predistribution scheme, which is better than existing schemes in terms of connectivity. We show that selective node compromise attack cannot be launched, because there is no clever way of knowing which nodes to compromise. An important advantage of our scheme is that we can increase the scalability of the network without redistributing keys in the existing nodes.", "It is an important issue to establish pairwise keys in distributed sensor networks (DSNs). In this paper, we present two key predistribution schemes (KPSs) for DSNs, ID-based one-way function scheme (IOS) and deterministic multiple space Blom's scheme (DMBS). Our schemes are deterministic, while most existing schemes are based on randomized approach. We show that the performance of our schemes is better than other existing schemes in terms of resiliency against coalition attack. In addition we obtain perfectly resilient KPSs such that the maximum supportable network size is larger than random pairwise keys schemes.", "Wireless sensor networks have many applications, vary in size, and are deployed in a wide variety of areas. They are often deployed in potentially adverse or even hostile environment so that there are concerns on security issues in these networks. Sensor nodes used to form these networks are resource-constrained, which make security applications a challenging problem. Efficient key distribution and management mechanisms are needed besides lightweight ciphers. Many key establishment techniques have been designed to address the tradeoff between limited memory and security, but which scheme is the most effective is still debatable. In this paper, we provide a survey of key management schemes in wireless sensor networks. We notice that no key distribution technique is ideal to all the scenarios where sensor networks are used; therefore the techniques employed must depend upon the requirements of target applications and resources of each individual sensor network.", "", "" ] }
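The transversal-design construction behind the Lee-Stinson scheme @cite_6 admits a compact illustration. In the sketch below (a hedged sketch, assuming a prime p; the function names are ours, not from any cited paper), node (a, b) receives the key ring {(x, a·x + b mod p) : 0 ≤ x < k}; two distinct nodes share at most one key, and key-sharing neighbors can be found algebraically instead of by scanning key lists.

```python
def key_ring(a, b, p, k):
    """Key ring of node (a, b) in a TD(k, p)-based scheme: one key per
    column x, namely the point (x, (a*x + b) mod p)."""
    return {(x, (a * x + b) % p) for x in range(k)}

def shared_keys(n1, n2, p, k):
    """Keys common to two nodes; the rings intersect in at most one key."""
    return key_ring(*n1, p, k) & key_ring(*n2, p, k)

p, k = 7, 3                 # key pool size k*p = 21, k keys per node,
                            # and up to p^2 = 49 nodes, one per pair (a, b)
print(shared_keys((1, 2), (3, 5), p, k))   # → {(2, 4)}
```

Nodes with the same first coordinate a never share a key, which is why resilience and connectivity of such deterministic schemes can be analyzed exactly.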
1604.00590
2340516326
Preferential attachment models have been widely studied in complex networks, because they can explain the formation of many networks like social networks, citation networks, power grids, and biological networks, to name a few. Motivated by the application of key predistribution in wireless sensor networks (WSN), we initiate the study of preferential attachment with degree bound. Our paper makes two important contributions to two different areas. The first is a contribution to the study of complex networks: we propose a preferential attachment model with degree bound for the first time. In the standard preferential attachment model, the degree distribution follows a power law, with many nodes of low degree and a few nodes of high degree. In our scheme, nodes can have a maximum degree @math , where @math is an integer chosen according to the application. The second is a contribution to the security of wireless sensor networks: we propose a new key predistribution scheme based on the above model. The important features of this model are that the network is fully connected, requires fewer keys, has a larger giant component and a lower average path length than traditional key predistribution schemes, and offers comparable resilience to random node attacks. We argue that in many networks, such as key predistribution networks and the Internet of Things, nodes of very high degree would be a bottleneck in communication. Thus, studying the preferential attachment model with degree bound will open up new directions in the study of complex networks and will have many applications in real-world scenarios.
Blackburn and Gerke @cite_1 studied uniform random intersection graphs. The problem is to attach a list of @math colors to each of @math nodes, the colors being chosen uniformly at random from a set of @math colors, such that two nodes are connected if they have a common color in their lists. The EG key graph is an example of this type of graph. They analyzed the connectivity of such graphs.
{ "cite_N": [ "@cite_1" ], "mid": [ "2008111483" ], "abstract": [ "A uniform random intersection graphG(n,m,k) is a random graph constructed as follows. Label each of n nodes by a randomly chosen set of k distinct colours taken from some finite set of possible colours of size m. Nodes are joined by an edge if and only if some colour appears in both their labels. These graphs arise in the study of the security of wireless sensor networks, in particular when modelling the network graph of the well-known key predistribution technique due to Eschenauer and Gligor. The paper determines the threshold for connectivity of the graph G(n,m,k) when n-> in many situations. For example, when k is a function of n such that k>=2 and [email protected]?n^@[email protected]? for some fixed positive real number @a then G(n,m,k) is almost surely connected when lim infk^2n mlogn>1, and G(n,m,k) is almost surely disconnected when lim supk^2n mlogn<1." ] }
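The construction of the uniform random intersection graph G(n, m, k) from @cite_1 is simple enough to sketch directly (a minimal sketch; the function name and seeding are ours). Each of n nodes draws k distinct colours from a palette of m, and two nodes are adjacent iff their colour sets intersect; this is exactly the key graph of the EG scheme, whose connectivity threshold @cite_1 locates around k²n/(m log n) ≈ 1 for a wide parameter range.

```python
import random

def random_intersection_graph(n, m, k, seed=0):
    """G(n, m, k): label each of n nodes with k distinct colours drawn
    uniformly from m; two nodes are adjacent iff their labels intersect."""
    rng = random.Random(seed)
    labels = [frozenset(rng.sample(range(m), k)) for _ in range(n)]
    edges = {(i, j) for i in range(n) for j in range(i + 1, n)
             if labels[i] & labels[j]}
    return labels, edges

# extreme sanity check: with k = m every pair of nodes shares a colour,
# so the graph is complete
labels, edges = random_intersection_graph(5, 3, 3)
```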
1604.00590
2340516326
Preferential attachment models have been widely studied in complex networks, because they can explain the formation of many networks like social networks, citation networks, power grids, and biological networks, to name a few. Motivated by the application of key predistribution in wireless sensor networks (WSN), we initiate the study of preferential attachment with degree bound. Our paper makes two important contributions to two different areas. The first is a contribution to the study of complex networks: we propose a preferential attachment model with degree bound for the first time. In the standard preferential attachment model, the degree distribution follows a power law, with many nodes of low degree and a few nodes of high degree. In our scheme, nodes can have a maximum degree @math , where @math is an integer chosen according to the application. The second is a contribution to the security of wireless sensor networks: we propose a new key predistribution scheme based on the above model. The important features of this model are that the network is fully connected, requires fewer keys, has a larger giant component and a lower average path length than traditional key predistribution schemes, and offers comparable resilience to random node attacks. We argue that in many networks, such as key predistribution networks and the Internet of Things, nodes of very high degree would be a bottleneck in communication. Thus, studying the preferential attachment model with degree bound will open up new directions in the study of complex networks and will have many applications in real-world scenarios.
Yagan @cite_14 @cite_20 @cite_2 studied the key graphs formed by the schemes of @cite_25 and @cite_18 under full and partial visibility. They showed that the graphs defined in @cite_18 are connected with high probability. They also studied security under an ON-OFF secure channel model, in which the links between nodes may or may not be active at a given instant.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_2", "@cite_25", "@cite_20" ], "mid": [ "", "2155003029", "2165958223", "2116269350", "2121278461" ], "abstract": [ "", "We consider the random graph induced by the random key predistribution scheme of Eschenauer and Gligor under the assumption of full visibility. We show the existence of a zero-one law for the absence of isolated nodes, and complement it by a Poisson convergence for the number of isolated nodes. Leveraging earlier results and analogies with Erdos-Renyi graphs, we explore similar results for the property of graph connectivity.", "We investigate the secure connectivity of wireless sensor networks under the random key distribution scheme of Eschenauer and Gligor. Unlike recent work which was carried out under the assumption of full visibility, here we assume a (simplified) communication model where unreliable wireless links are represented as on off channels. We present conditions on how to scale the model parameters so that the network: 1) has no secure node which is isolated and 2) is securely connected, both with high probability when the number of sensor nodes becomes large. The results are given in the form of full zero-one laws, and constitute the first complete analysis of the EG scheme under non-full visibility. Through simulations, these zero-one laws are shown to be valid also under a more realistic communication model (i.e., the disk model). The relations to the Gupta and Kumar's conjecture on the connectivity of geometric random graphs with randomly deleted edges are also discussed.", "Distributed Sensor Networks (DSNs) are ad-hoc mobile networks that include sensor nodes with limited computation and communication capabilities. DSNs are dynamic in the sense that they allow addition and deletion of sensor nodes after deployment to grow the network or replace failing and unreliable nodes. DSNs may be deployed in hostile areas where communication is monitored and nodes are subject to capture and surreptitious use by an adversary. Hence DSNs require cryptographic protection of communications, sensor-capture detection, key revocation and sensor disabling. In this paper, we present a key-management scheme designed to satisfy both operational and security requirements of DSNs. The scheme includes selective distribution and revocation of keys to sensor nodes as well as node re-keying without substantial computation and communication capabilities. It relies on probabilistic key sharing among the nodes of a random graph and uses simple protocols for shared-key discovery and path-key establishment, and for key revocation, re-keying, and incremental addition of nodes. The security and network connectivity characteristics supported by the key-management scheme are discussed and simulation experiments presented.", "We investigate the secure connectivity of wireless sensor networks under the random pairwise key predistribution scheme of Chan, Perrig, and Song. Unlike recent work carried out under the assumption of full visibility, here we assume a (simplified) communication model where unreliable wireless links are represented as independent on off channels. We present conditions on how to scale the model parameters so that the network 1) has no secure node that is isolated and 2) is securely connected, both with high probability, when the number of sensor nodes becomes large. The results are given in the form of zero-one laws, and exhibit significant differences with corresponding results in the full-visibility case. Through simulations, these zero-one laws are shown to also hold under a more realistic communication model, namely the disk model." ] }
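The ON-OFF channel model studied by Yagan composes the EG key graph with independent link states: a pair of nodes can communicate securely only if they share a key and the channel between them happens to be on. A minimal simulation of one realization (parameter names and the simple Bernoulli channel are our simplifying assumptions):

```python
import random

def eg_onoff_edges(n, pool, k, alpha, seed=1):
    """EG key graph intersected with an on/off channel model: an edge
    exists iff the endpoints share at least one key AND the channel
    between them is 'on', independently with probability alpha."""
    rng = random.Random(seed)
    rings = [set(rng.sample(range(pool), k)) for _ in range(n)]
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if rings[i] & rings[j] and rng.random() < alpha}

sample = eg_onoff_edges(30, 100, 10, alpha=0.5)   # one random realization
```

Sweeping alpha in such a simulation is a quick way to see the connectivity degradation that the zero-one laws of @cite_2 @cite_20 quantify asymptotically.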
1604.00942
2949438658
Air travel is one of the most frequently used means of transportation in our everyday life. Thus, it is not surprising that an increasing number of travelers share their experiences with airlines and airports in the form of online reviews on the Web. In this work, we strive to explain and uncover the features of airline reviews that contribute most to traveler satisfaction. To that end, we examine reviews crawled from the Skytrax air travel review portal. Skytrax provides four review categories to review airports, lounges, airlines and seats. Each review category consists of several five-star ratings as well as free-text review content. In this paper, we conduct a comprehensive feature study and find that not only five-star rating information such as airport queuing time and lounge comfort highly correlates with traveler satisfaction but also textual features in the form of the inferred review text sentiment. Based on our findings, we created classifiers to predict traveler satisfaction using the best performing rating features. Our results reveal that given our methodology, traveler satisfaction can be predicted with high accuracy. Additionally, we find that training a model on the sentiment of the review text provides a competitive alternative when no five-star rating information is available. We believe that our work is of interest for researchers in the area of modeling and predicting user satisfaction based on available review data on the Web.
Since @cite_18 established a relationship between traveler satisfaction and profitability, research on airline service quality has become an important issue for the airline industry. As a consequence, the authors of @cite_3 claim that it is crucial to continuously collect and evaluate data about traveler satisfaction and how it relates to the provided service quality in order to be competitive in the airline industry. However, most works on airline service quality rely on offline data gathered from on-site questionnaires @cite_14 @cite_17 , airline submissions @cite_13 or in-depth interviews @cite_15 .
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_3", "@cite_15", "@cite_13", "@cite_17" ], "mid": [ "1574655732", "2036032986", "1568893380", "2045183447", "", "" ], "abstract": [ "Outstanding service organizations focus on customers and frontline workers. The service-profit chain puts hard values on soft measures, helping managers quantify their investments in people and then integrate those measures into a comprehensive service picture.", "This study aimed to examine the effects of aspects of airline service quality, such as airline tangibles, terminal tangibles, and empathy on levels of customer satisfaction. The relationship between these levels of satisfaction and the general perceptions about service quality were also investigated. An airline passenger survey was conducted among the population of the Federal Territory of Labuan, Malaysia. A total of 300 respondents who had regularly patronized either Malaysia Airlines or AirAsia over the last six months were selected via convenience sampling method. Empirical results via structural equation modeling (SEM) approach revealed that the relationship between customer satisfaction with airline service quality and ‘word-of-mouth’ recommendations is a consistent one. Furthermore, customer satisfaction is widely influenced by empathy, which is why flight punctuality and good transportation links between city venues and airports are prioritized by providers. Direction for future research is presented.", "Purpose – The purpose of this paper is to examine consumer perceptions of airline quality indicators and compare them to actual data reported by the Department of Transportation, in the USA and the Association of European Airlines (AEA) in the EU. The objective is to determine whether there is a discrepancy between reported performance metrics of service quality and consumer perception.Design methodology approach – This paper compares actual reported data on service quality with results of an exploratory questionnaire on the perceived frequency of service failures in three key areas of airline service quality; on time flight arrivals, baggage reports and flight cancellations. Similarities and differences both within and between the USA and EU markets are discussed.Findings – Preliminary findings indicate that actual consumer perceptions of airline performance on key areas of airline service quality are in fact far worse than the data reported in the US Air Travel Consumer Report or AEA Consumer Report. Co...", "This paper examines the key factors that determine business traveler loyalty toward full-service airlines in China. Based on literature review and panel interview, ten airline attributes under three categories were derived: (a) operational factors: safety, punctuality, and aircraft; (b) competitive factors: frequency of flights, schedule, frequent flyer program, ticket price, and reputation; and (c) attractive factors: in flight food & drinks and in flight staff service. We surveyed 2000 Chinese business travelers on domestic flights, obtaining 462 usable questionnaires. Hierarchical regression analysis reveals that reputation, in-flight service, frequent flyer program, and aircraft have the greatest influence in driving airline loyalty.", "", "" ] }
1604.00942
2949438658
Air travel is one of the most frequently used means of transportation in our everyday life. Thus, it is not surprising that an increasing number of travelers share their experiences with airlines and airports in the form of online reviews on the Web. In this work, we strive to explain and uncover the features of airline reviews that contribute most to traveler satisfaction. To that end, we examine reviews crawled from the Skytrax air travel review portal. Skytrax provides four review categories to review airports, lounges, airlines and seats. Each review category consists of several five-star ratings as well as free-text review content. In this paper, we conduct a comprehensive feature study and find that not only five-star rating information such as airport queuing time and lounge comfort highly correlates with traveler satisfaction but also textual features in the form of the inferred review text sentiment. Based on our findings, we created classifiers to predict traveler satisfaction using the best performing rating features. Our results reveal that given our methodology, traveler satisfaction can be predicted with high accuracy. Additionally, we find that training a model on the sentiment of the review text provides a competitive alternative when no five-star rating information is available. We believe that our work is of interest for researchers in the area of modeling and predicting user satisfaction based on available review data on the Web.
One recent work in that direction is @cite_2 , in which the authors mined review data about airlines' in-flight services from the Skytrax portal. By grouping travelers via feature-based and clustering-based modeling, the authors showed that inferences can be captured that explain how travelers evaluate in-flight services. Another recent work, @cite_1 , presented a research framework to extract and explore information on a user's opinion about airline service features from a large static corpus of online review texts.
{ "cite_N": [ "@cite_1", "@cite_2" ], "mid": [ "1566516566", "2068714596" ], "abstract": [ "In this work, we propose a research framework to exploring the useful information about airline service from massive online reviews, especially, the airline service features from the customer perspective. The experimental results indicate that the proposed methods can extract information about customers' opinion about the airline service features. The common concerns as well as special features for different airline company can also be extracted efficiently from the massive online review data.", "Many organizations today have more than very large databases; they have databases that grow without limit at a rate of several million records per day. Mining these continuous data streams brings unique opportunities, but also new challenges. This paper describes and evaluates VFDT, an anytime system that builds decision trees using constant memory and constant time per example. VFDT can incorporate tens of thousands of examples per second using off-the-shelf hardware. It uses Hoeffding bounds to guarantee that its output is asymptotically nearly identical to that of a conventional learner. We study VFDT's properties and demonstrate its utility through an extensive set of experiments on synthetic data. We apply VFDT to mining the continuous stream of Web access data from the whole University of Washington main campus." ] }
1604.00942
2949438658
Air travel is one of the most frequently used means of transportation in our everyday life. Thus, it is not surprising that an increasing number of travelers share their experiences with airlines and airports in the form of online reviews on the Web. In this work, we strive to explain and uncover the features of airline reviews that contribute most to traveler satisfaction. To that end, we examine reviews crawled from the Skytrax air travel review portal. Skytrax provides four review categories to review airports, lounges, airlines and seats. Each review category consists of several five-star ratings as well as free-text review content. In this paper, we conduct a comprehensive feature study and find that not only five-star rating information such as airport queuing time and lounge comfort highly correlates with traveler satisfaction but also textual features in the form of the inferred review text sentiment. Based on our findings, we created classifiers to predict traveler satisfaction using the best performing rating features. Our results reveal that given our methodology, traveler satisfaction can be predicted with high accuracy. Additionally, we find that training a model on the sentiment of the review text provides a competitive alternative when no five-star rating information is available. We believe that our work is of interest for researchers in the area of modeling and predicting user satisfaction based on available review data on the Web.
In our work, we perform a comprehensive feature analysis using rating and textual features from airport, lounge, airline and seat reviews in order to explain which features actually contribute to traveler satisfaction. Moreover, we show how the different rating and textual features can be utilized to predict traveler satisfaction. Our methods and results provide practical insights into how to build upon works such as @cite_1 in order to predict traveler satisfaction using online airline reviews.
{ "cite_N": [ "@cite_1" ], "mid": [ "1566516566" ], "abstract": [ "In this work, we propose a research framework to exploring the useful information about airline service from massive online reviews, especially, the airline service features from the customer perspective. The experimental results indicate that the proposed methods can extract information about customers' opinion about the airline service features. The common concerns as well as special features for different airline company can also be extracted efficiently from the massive online review data." ] }
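The kind of feature study described above can be illustrated with a toy computation: measure how strongly individual five-star ratings correlate with a binary "recommended" label. The review records and feature names below are invented for illustration (they are not Skytrax data); the Pearson coefficient is computed from scratch.

```python
from statistics import mean

def pearson(xs, ys):
    """Sample Pearson correlation coefficient between two sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# toy reviews: (queuing-time rating, seat-comfort rating, recommended?)
reviews = [(1, 2, 0), (2, 1, 0), (4, 4, 1), (5, 3, 1), (3, 5, 1), (1, 1, 0)]
queue, seat, label = zip(*reviews)
print(pearson(queue, label), pearson(seat, label))
```

Ranking rating features by such a correlation (or by a proper feature-importance measure) is one simple way to decide which features to feed into a satisfaction classifier.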
1604.00736
2318133070
This paper presents a data compression algorithm with error bound guarantee for wireless sensor networks (WSNs) using compressing neural networks. The proposed algorithm minimizes data congestion and reduces energy consumption by exploring spatio-temporal correlations among data samples. The adaptive rate-distortion feature balances the compressed data size (data rate) with the required error bound guarantee (distortion level). This compression relieves the strain on energy and bandwidth resources while collecting WSN data within tolerable error margins, thereby increasing the scale of WSNs. The algorithm is evaluated using real-world data sets and compared with conventional methods for temporal and spatial data compression. The experimental validation reveals that the proposed algorithm outperforms several existing WSN data compression methods in terms of compression efficiency and signal reconstruction. Moreover, an energy analysis shows that compressing the data can reduce the energy expenditure and, hence, expand the service lifespan severalfold.
We identify a wide variety of coding schemes in the literature (e.g., @cite_32 @cite_13 @cite_34 ) and discuss some important solutions for signal compression in WSNs in the following.
{ "cite_N": [ "@cite_34", "@cite_13", "@cite_32" ], "mid": [ "2100766019", "1669419348", "2059793602" ], "abstract": [ "Wireless sensor networks (WSNs) are highly resource constrained in terms of power supply, memory capacity, communication bandwidth, and processor performance. Compression of sampling, sensor data, and communications can significantly improve the efficiency of utilization of three of these resources, namely, power supply, memory and bandwidth. Recently, there have been a large number of proposals describing compression algorithms for WSNs. These proposals are diverse and involve different compression approaches. It is high time that these individual efforts are put into perspective and a more holistic view taken. In this article, we take a step in that direction by presenting a survey of the literature in the area of compression and compression frameworks in WSNs. A comparative study of the various approaches is also provided. In addition, open research issues, challenges and future research directions are highlighted.", "In the past few years, lossy compression has been widely applied in the field of wireless sensor networks (WSN), where energy efficiency is a crucial concern due to the constrained nature of the transmission devices. Often, the common thinking among researchers and implementers is that compression is always a good choice, because the major source of energy consumption in a sensor node comes from the transmission of the data. Lossy compression is deemed a viable solution as the imperfect reconstruction of the signal is often acceptable in WSN, subject to some application dependent maximum error tolerance. Nevertheless, this is seldom supported by quantitative evidence. In this paper, we thoroughly review a number of lossy compression methods from the literature, and analyze their performance in terms of compression efficiency, computational complexity and energy consumption. We consider two different scenarios, namely, wireless and underwater communications, and show that signal compression may or may not help in the reduction of the overall energy consumption, depending on factors such as the compression algorithm, the signal statistics and the hardware characteristics, i.e., micro-controller and transmission technology. The lesson that we have learned, is that signal compression may in fact provide some energy savings. However, its usage should be carefully evaluated, as in quite a few cases processing and transmission costs are of the same order of magnitude, whereas, in some other cases, the former may even dominate the latter. In this paper, we show quantitative comparisons to assess these tradeoffs in the above mentioned scenarios (i.e., wireless versus underwater). In addition, we consider recently proposed and lightweight algorithms such as Lightweight Temporal Compression (LTC) as well as more sophisticated FFT- or DCT-based schemes and show that the former are the best option in wireless settings, whereas the latter solutions are preferable for underwater networks. Finally, we provide formulas, obtained through numerical fittings, to gauge the computational complexity, the overall energy consumption and the signal representation accuracy of the best performing algorithms as a function of the most relevant system parameters.", "Power consumption is a critical problem affecting the lifetime of wireless sensor networks. A number of techniques have been proposed to solve this issue, such as energy-efficient medium access control or routing protocols. Among those proposed techniques, the data compression scheme is one that can be used to reduce transmitted data over wireless channels. This technique leads to a reduction in the required inter-node communication, which is the main power consumer in wireless sensor networks. In this article, a comprehensive review of existing data compression approaches in wireless sensor networks is provided. First, suitable sets of criteria are defined to classify existing techniques as well as to determine what practical data compression in wireless sensor networks should be. Next, the details of each classified compression category are described. Finally, their performance, open issues, limitations and suitable applications are analyzed and compared based on the criteria of practical data compression in wireless sensor networks." ] }
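The simplest error-bounded compressors in the surveyed WSN literature are piecewise-constant. The sketch below implements one classic such scheme, PMC-MR (Poor Man's Compression with MidRange), not the neural compressor of the paper above: a run of samples is replaced by its midrange for as long as the run's spread fits within 2·eps, which guarantees a per-sample reconstruction error of at most eps.

```python
def pmc_mr(samples, eps):
    """PMC-MR: greedily extend a run while max - min <= 2*eps, then emit
    (end_index, midrange); every reconstructed sample is within eps."""
    segments, lo, hi = [], samples[0], samples[0]
    for i, x in enumerate(samples[1:], 1):
        lo2, hi2 = min(lo, x), max(hi, x)
        if hi2 - lo2 <= 2 * eps:
            lo, hi = lo2, hi2
        else:
            segments.append((i - 1, (lo + hi) / 2))
            lo = hi = x
    segments.append((len(samples) - 1, (lo + hi) / 2))
    return segments

def decompress(segments):
    out, start = [], 0
    for end, value in segments:
        out.extend([value] * (end - start + 1))
        start = end + 1
    return out

data = [20.0, 20.3, 20.1, 22.5, 22.4, 22.6, 19.0]
segs = pmc_mr(data, eps=0.5)
print(segs)        # three (end_index, value) pairs replace seven samples
```

Trading eps against the number of emitted segments is exactly the rate-distortion trade-off that the schemes above tune, here in its most elementary form.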
1604.00723
2292217501
This paper investigates the asymptotic and non-asymptotic behavior of the quantized primal-dual algorithm in network utility maximization problems, in which a group of agents maximize the sum of their individual concave objective functions under linear constraints. In the asymptotic scenario, we use the information-theoretic notion of differential entropy power to establish universal lower bounds on the exponential convergence rates of joint primal-dual, primal and dual variables under optimum-achieving quantization schemes. These results provide trade-offs between the speed of exponential convergence, the agents' objective functions, the communication bit rates, and the number of agents and constraints. In the non-asymptotic scenario, we obtain lower bounds on the mean square distance of joint primal-dual, primal and dual variables from the optimal solution for any finite time instance. These bounds hold regardless of the quantization scheme used.
Although the performance of distributed optimization algorithms, and in particular NUM algorithms, under perfect communication networks is well understood, the investigation of the impact of imperfect communication on these optimization algorithms is a relatively new research area that has attracted much interest in recent years; see @cite_12 - @cite_8 .
{ "cite_N": [ "@cite_12", "@cite_8" ], "mid": [ "2107664305", "2091863325" ], "abstract": [ "We consider a convex unconstrained optimization problem that arises in a network of agents whose goal is to cooperatively optimize the sum of the individual agent objective functions through local computations and communications. For this problem, we use averaging algorithms to develop distributed subgradient methods that can operate over a time-varying topology. Our focus is on the convergence rate of these methods and the degradation in performance when only quantized information is available. Based on our recent results on the convergence time of distributed averaging algorithms, we derive improved upper bounds on the convergence rate of the unquantized subgradient method. We then propose a distributed subgradient method under the additional constraint that agents can only store and communicate quantized information, and we provide bounds on its convergence rate that highlight the dependence on the number of quantization levels.", "In this paper, we consider quantized distributed optimization problems with limited communication capacity and time-varying communication topology. A distributed quantized subgradient algorithm is presented with quantized information exchange between agents. Based on a proposed encoder-decoder scheme and a zooming-in technique, the optimal solution can be obtained without any quantization errors. Moreover, we explore how to minimize the quantization level number for quantized distributed optimization problems. In fact, the optimization problem can be solved with five-level quantizers in the switching topology case, while it can be solved with three-level quantizers in the fixed topology case." ] }
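A hypothetical toy version of the quantized distributed subgradient setting of @cite_12 can make the effect concrete: agents minimize the sum of local quadratics f_i(x) = (x - a_i)² but may only exchange uniformly quantized copies of their states. The complete-graph topology, step sizes and quantizer below are our simplifying assumptions, not the cited paper's exact scheme.

```python
def quantize(x, step):
    """Uniform quantizer: only q(x) crosses the rate-limited links."""
    return round(x / step) * step

def distributed_subgradient(targets, q_step, lr=0.1, iters=300):
    """Each agent i holds x_i and a private target a_i with local cost
    (x - a_i)^2.  Per round: average the neighbours' quantized states
    (complete graph), then take a local gradient step."""
    xs = [0.0] * len(targets)
    for _ in range(iters):
        avg = sum(quantize(x, q_step) for x in xs) / len(xs)
        xs = [avg - lr * 2 * (avg - a) for a in targets]
    return xs

xs = distributed_subgradient([1.0, 2.0, 6.0], q_step=0.01)
# the network average settles near the optimum mean(a_i) = 3, up to an
# error floor set by the quantization step and the constant step size
```

Shrinking q_step lowers the error floor at the cost of more bits per round, which is the storage/rate versus accuracy trade-off the cited bounds formalize.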
1604.00723
2292217501
This paper investigates the asymptotic and non-asymptotic behavior of the quantized primal-dual algorithm in network utility maximization problems, in which a group of agents maximize the sum of their individual concave objective functions under linear constraints. In the asymptotic scenario, we use the information-theoretic notion of differential entropy power to establish universal lower bounds on the exponential convergence rates of joint primal-dual, primal, and dual variables under optimum-achieving quantization schemes. These results provide trade-offs between the speed of exponential convergence, the agents' objective functions, the communication bit rates, and the number of agents and constraints. In the non-asymptotic scenario, we obtain lower bounds on the mean square distance of joint primal-dual, primal, and dual variables from the optimal solution for any finite time instance. These bounds hold regardless of the quantization scheme used.
In @cite_0 , we studied the convergence behavior of the PD algorithm in a quadratic NUM problem under quantized communications. In the current paper, the objective functions of the agents belong to the class of concave and twice continuously differentiable functions. This complicates our analysis, as the PD update rule becomes non-linear in the primal variables. Here, we study the impact of quantized communications on the convergence behavior of the variables in both the asymptotic and non-asymptotic regimes.
{ "cite_N": [ "@cite_0" ], "mid": [ "2962776448" ], "abstract": [ "This paper examines the effect of quantized communications on the convergence behavior of the primal-dual algorithm in quadratic network utility maximization problems with linear equality constraints. In our set-up, it is assumed that the primal variables are updated by individual agents, whereas the dual variables are updated by a central entity, called system, which has access to the parameters quantifying the system-wide constraints. The notion of differential entropy power is used to establish a universal lower bound on the rate of exponential mean square convergence of the primal-dual algorithm under quantized message passing between agents and the system. The lower bound is controlled by the average aggregate data rate under the quantization, the curvature of the utility functions of agents, the number of agents and the number of constraints. An adaptive quantization scheme is proposed under which the primal-dual algorithm converges to the optimal solution despite quantized communications between agents and the system. Finally, the rate of exponential convergence of the primal-dual algorithm under the proposed quantization scheme is numerically studied." ] }
1604.00830
2950122030
Knowledge about the roles developers play in a software project is crucial to understanding the project's collaborative dynamics. Developers are often classified according to the dichotomy of core and peripheral roles. Typically, operationalizations based on simple counts of developer activities (e.g., number of commits) are used for this purpose, but there is concern regarding their validity and ability to elicit meaningful insights. To shed light on this issue, we investigate whether commonly used operationalizations of core--peripheral roles produce consistent results, and we validate them with respect to developers' perceptions by surveying 166 developers. Improving over the state of the art, we propose a relational perspective on developer roles, using developer networks to model the organizational structure, and by examining core--peripheral roles in terms of developers' positions and stability within the organizational structure. In a study of 10 substantial open-source projects, we found that the existing and our proposed core--peripheral operationalizations are largely consistent and valid. Furthermore, we demonstrate that a relational perspective can reveal further meaningful insights, such as that core developers exhibit high positional stability, upper positions in the hierarchy, and high levels of coordination with other core developers.
Many of the aforementioned studies applied empirical methods based on interviews, questionnaires, personal experience reports, and manual inspections of data archives to identify characteristics of core and peripheral developers. An alternative line of research has attempted to operationalize core and peripheral developers using data available in software repositories, such as version-control systems @cite_13 @cite_12 @cite_24 @cite_21 @cite_18 @cite_10 , bug trackers @cite_19 , and mailing lists @cite_18 @cite_29 . By operationalizing the notion of core and peripheral developers, these studies have taken important steps towards gaining insight that is not attainable with (more) manual approaches, including evaluating and basing conclusions on results from hundreds of projects @cite_7 .
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_10", "@cite_29", "@cite_21", "@cite_24", "@cite_19", "@cite_13", "@cite_12" ], "mid": [ "33618601", "1965110090", "1998407227", "2112357885", "1924865473", "2110504075", "2099162911", "2107294940", "2031640601" ], "abstract": [ "The software architecture of a software system and the coordination efforts necessary to create such system are intrinsically related. Making changes to components that a large number of other components rely on, the technical core, is usually difficult due to the complexity of the coordination of all involved developers. However, a distinct group of developers effectively help evolving the technical core of software projects. This group of developers is called key developers. In this paper we describe a case study involving the Apache Ant project aimed to identify and characterize key developers in terms of their volume of contribution and social participation. Our results indicated that only 25 of the developers may be considered as key developers. Results also showed that key developers are often active in the developers' mailing list and often fulfilled the coordination requirements that emerged from their development tasks. Finally, we observed that the set of key developers was indistinguishable from the set of top contributors. We expect that this characterization enables further exploration over contribution patterns and the establishment of profiles of FLOSS key developers.", "Metaphors, such as the Cathedral and Bazaar, used to describe the organization of FLOSS projects typically place them in sharp contrast to proprietary development by emphasizing FLOSS’s distinctive social and communications structures. But what do we really know about the communication patterns of FLOSS projects? How generalizable are the projects that have been studied? Is there consistency across FLOSS projects? 
Questioning the assumption of distinctiveness is important because practitioner–advocates from within the FLOSS community rely on features of social structure to describe and account for some of the advantages of FLOSS production.
To address this question, we examined 120 project teams from SourceForge, representing a wide range of FLOSS project types, for their communications centralization as revealed in the interactions in the bug tracking system. We found that FLOSS development teams vary widely in their communications centralization, from projects completely centered on one developer to projects that are highly decentralized and exhibit a distributed pattern of conversation between developers and active users.
We suggest, therefore, that it is wrong to assume that FLOSS projects are distinguished by a particular social structure merely because they are FLOSS. Our findings suggest that FLOSS projects might have to work hard to achieve the expected development advantages which have been assumed to flow from \"going open.\" In addition, the variation in communications structure across projects means that communications centralization is useful for comparisons between FLOSS teams. We found that larger FLOSS teams tend to have more decentralized communication patterns, a finding that suggests interesting avenues for further research examining, for example, the relationship between communications structure and code modularity.", "In distributed software development, two sorts of dependencies can arise. The structure of the software system itself can create dependencies between software elements, while the structure of the development process can create dependencies between software developers. Each of these both shapes and reflects the development process.
Our research concerns the extent to which, by looking uniformly at artifacts and activities, we can uncover the structures of software projects, and the ways in which development processes are inscribed into software artifacts. We show how a range of organizational processes and arrangements can be uncovered in software repositories, with implications for collaborative work in large distributed groups such as open source communities.", "Open source software is built by teams of volunteers. Each project has a core team of developers, who have the authority to commit changes to the repository; this team is the elite, committed foundation of the project, selected through a meritocratic process from a larger number of people who participate on the mailing list. Most projects carefully regulate admission of outsiders to full developer privileges; some projects even have formal descriptions of this process. Understanding the factors that influence the \"who, how and when\" of this process is critical, both for the sustainability of FLOSS projects, and for outside stakeholders who want to gain entry and succeed. In this paper we mount a quantitative case study of the process by which people join FLOSS projects, using data mined from the Apache web server, Postgres, and Python. We develop a theory of open source project joining, and evaluate this theory based on our data.", "A common problem that management faces in software companies is the high instability of their staff. In libre (free, open source) software projects, the permanence of developers is also an open issue, with the potential of causing problems amplified by the self-organizing nature that most of them exhibit. Hence, human resources in libre software projects are even more difficult to manage: developers are in most cases not bound by a contract and, in addition, there is not a real management structure concerned about this problem. 
This raises some interesting questions with respect to the composition of development teams in libre software projects, and how they evolve over time. There are projects lead by their original founders (some sort of “code gods”), while others are driven by several different developer groups over time (i.e. the project “regenerates” itself). In this paper, we propose a quantitative methodology, based on the analysis of the activity in the source code management repositories, to study how these processes (developers leaving, developers joining) affect libre software projects. The basis of it is the analysis of the composition of the core group, the group of developers most active in a project, for several time lapses. We will apply this methodology to several large, well-known libre software projects, and show how it can be used to characterize them. In addition, we will discuss the lessons that can be learned, and the validity of our proposal.", "In many libre (free, open source) software projects, most of the development is performed by a relatively small number of persons, the “core team”. The stability and permanence of this group of most active developers is of great importance for the evolution and sustainability of the project. In this position paper we propose a quantitative methodology to study the evolution of core teams by analyzing information from source code management repositories. The most active developers in different periods are identified, and their activity is calculated over time, looking for core team evolution patterns.", "The concept of the core group of developers is important and often discussed in empirical studies of FLOSS projects. 
This paper examines the question, \"how does one empirically distinguish the core?\" Being able to identify the core members of a FLOSS development project is important because many of the processes necessary for successful projects likely involve core members differently than peripheral members, so analyses that mix the two groups will likely yield invalid results. We compare 3 analysis approaches to identify the core: the named list of developers, a Bradford’s law analysis that takes as the core the most frequent contributors and a social network analysis of the interaction pattern that identifies the core in a core-and-periphery structure. We apply these measures to the interactions around bug fixing for 116 SourceForge projects. The 3 techniques identify different individuals as core members; examination of which individuals are identified leads to suggestions for refining the measures. All 3 measures though suggest that the core of FLOSS projects is a small fraction of the total number of contributors.", "According to its proponents, open source style software development has the capacity to compete successfully, and perhaps in many cases displace, traditional commercial development methods. In order to begin investigating such claims, we examine data from two major open source projects, the Apache web server and the Mozilla browser. By using email archives of source code change history and problem reports we quantify aspects of developer participation, core team size, code ownership, productivity, defect density, and problem resolution intervals for these OSS projects. We develop several hypotheses by comparing the Apache project with several commercial projects. We then test and refine several of these hypotheses, based on an analysis of Mozilla data. 
We conclude with thoughts about the prospects for high-performance commercial open source process hybrids.", "Background: Several factors may impact the process of software maintenance and evolution of free software projects, including structural complexity and lack of control over its contributors. Structural complexity, an architectural concern, makes software projects more difficult to understand, and consequently more difficult to maintain and evolve. The contributors in a free software project exhibit different levels of participation in the project, and can be categorized as core and peripheral developers. Research aim: This research aims at characterising the changes made to the source code of 7 web server projects written in C with respect to the amount of structural complexity added or removed and the developer level of participation. Method: We performed a observational study with historical data collected from the version control repositories of those projects, recording structural complexity information for each change as well as identifying each change as performed by a core or a peripheral developer. Results and conclusions: We have found that core developers introduce less structural complexity than peripheral developers in general, and that in the case of complexity-reducing activities, core developers remove more structural complexity than peripheral developers. These results demonstrate the importance of having a stable and healthy core team to the sustainability of free software projects." ] }
1604.00036
2321826354
In this paper we present a hierarchical method to discover mid-level elements with the objective of modeling visual compatibility between objects. At the base-level, our method identifies patterns of CNN activations with the aim of modeling the different variations (styles) in which objects of the classes of interest may occur. At the top-level, the proposed method discovers patterns of co-occurring activations of base-level elements that define visual compatibility between pairs of object classes. Experiments on the massive Amazon dataset show the strength of our method at describing object classes and the characteristics that drive the compatibility between them.
In recent years, mid-level visual elements have been proposed as a bottom-up means to represent visual information. These mid-level elements are both representative, i.e., they can be adapted to describe different images, and discriminative, i.e., they can be detected with high precision and recall. Given these properties, mid-level visual elements have proven to be very effective for several computer vision tasks, such as image classification @cite_28 @cite_8 @cite_3 , action recognition @cite_24 @cite_26 , and geometry estimation @cite_7 . More recently, mid-level visual elements were successfully applied to summarize large sets of images @cite_18 . In addition, their performance was further improved by exploiting more advanced features at the lower level, namely combinations of activations of Convolutional Neural Networks, together with association rule mining @cite_10 . In this work we propose a hierarchical method based on mid-level elements to model visual compatibility between objects depicted in images. At the base level, our method is closest to existing work in that it extracts mid-level elements in a class-specific fashion. At the top level, in contrast to existing work, our method exploits co-occurrences of base-level elements between images of compatible objects in order to discover a set of patterns or rules that link such objects.
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_7", "@cite_8", "@cite_28", "@cite_3", "@cite_24", "@cite_10" ], "mid": [ "", "2139594308", "2146814781", "2107698128", "12634471", "1590510366", "1998605630", "2952540894" ], "abstract": [ "", "This paper proposes motionlet, a mid-level and spatiotemporal part, for human motion recognition. Motion let can be seen as a tight cluster in motion and appearance space, corresponding to the moving process of different body parts. We postulate three key properties of motion let for action recognition: high motion saliency, multiple scale representation, and representative-discriminative ability. Towards this goal, we develop a data-driven approach to learn motion lets from training videos. First, we extract 3D regions with high motion saliency. Then we cluster these regions and preserve the centers as candidate templates for motion let. Finally, we examine the representative and discriminative power of the candidates, and introduce a greedy method to select effective candidates. With motion lets, we present a mid-level representation for video, called motionlet activation vector. We conduct experiments on three datasets, KTH, HMDB51, and UCF50. The results show that the proposed methods significantly outperform state-of-the-art methods.", "What primitives should we use to infer the rich 3D world behind an image? We argue that these primitives should be both visually discriminative and geometrically informative and we present a technique for discovering such primitives. We demonstrate the utility of our primitives by using them to infer 3D surface normals given a single image. Our technique substantially outperforms the state-of-the-art and shows improved cross-dataset performance.", "Obtaining effective mid-level representations has become an increasingly important task in computer vision. 
In this paper, we propose a fully automatic algorithm which harvests visual concepts from a large number of Internet images (more than a quarter of a million) using text-based queries. Existing approaches to visual concept learning from Internet images either rely on strong supervision with detailed manual annotations or learn image-level classifiers only. Here, we take advantage of having massive, well-organized Google and Bing image data; visual concepts (around 14,000) are automatically exploited from images using word-based queries. Using the learned visual concepts, we show state-of-the-art performances on a variety of benchmark datasets, which demonstrate the effectiveness of the learned mid-level representations: being able to generalize well to general natural images. Our method shows significant improvement over the competing systems in image classification, including those with strong supervision.", "In this paper we address the problem of automatically recognizing pictured dishes. To this end, we introduce a novel method to mine discriminative parts using Random Forests (rf), which allows us to mine for parts simultaneously for all classes and to share knowledge among them. To improve efficiency of mining and classification, we only consider patches that are aligned with image superpixels, which we call components. To measure the performance of our rf component mining for food recognition, we introduce a novel and challenging dataset of 101 food categories, with 101’000 images. With an average accuracy of 50.76%, our model outperforms alternative classification methods except for cnn, including svm classification on Improved Fisher Vectors and existing discriminative part-mining algorithms by 11.88% and 8.13%, respectively.
On the challenging mit-Indoor dataset, our method compares nicely to other s-o-a component-based classification methods.", "The goal of this paper is to discover a set of discriminative patches which can serve as a fully unsupervised mid-level visual representation. The desired patches need to satisfy two requirements: 1) to be representative, they need to occur frequently enough in the visual world; 2) to be discriminative, they need to be different enough from the rest of the visual world. The patches could correspond to parts, objects, \"visual phrases\", etc. but are not restricted to be any one of them. We pose this as an unsupervised discriminative clustering problem on a huge dataset of image patches. We use an iterative procedure which alternates between clustering and training discriminative classifiers, while applying careful cross-validation at each step to prevent overfitting. The paper experimentally demonstrates the effectiveness of discriminative patches as an unsupervised mid-level visual representation, suggesting that it could be used in place of visual words for many tasks. Furthermore, discriminative patches can also be used in a supervised regime, such as scene classification, where they demonstrate state-of-the-art performance on the MIT Indoor-67 dataset.", "How should a video be represented? We propose a new representation for videos based on mid-level discriminative spatio-temporal patches. These spatio-temporal patches might correspond to a primitive human action, a semantic object, or perhaps a random but informative spatio-temporal patch in the video. What defines these spatio-temporal patches is their discriminative and representative properties. We automatically mine these patches from hundreds of training videos and experimentally demonstrate that these patches establish correspondence across videos and align the videos for label transfer techniques. 
Furthermore, these patches can be used as a discriminative vocabulary for action classification where they demonstrate state-of-the-art performance on UCF50 and Olympics datasets.", "The purpose of mid-level visual element discovery is to find clusters of image patches that are both representative and discriminative. Here we study this problem from the prospective of pattern mining while relying on the recently popularized Convolutional Neural Networks (CNNs). We observe that a fully-connected CNN activation extracted from an image patch typically possesses two appealing properties that enable its seamless integration with pattern mining techniques. The marriage between CNN activations and association rule mining, a well-known pattern mining technique in the literature, leads to fast and effective discovery of representative and discriminative patterns from a huge number of image patches. When we retrieve and visualize image patches with the same pattern, surprisingly, they are not only visually similar but also semantically consistent, and thus give rise to a mid-level visual element in our work. Given the patterns and retrieved mid-level visual elements, we propose two methods to generate image feature representations for each. The first method is to use the patterns as codewords in a dictionary, similar to the Bag-of-Visual-Words model, we compute a Bag-of-Patterns representation. The second one relies on the retrieved mid-level visual elements to construct a Bag-of-Elements representation. We evaluate the two encoding methods on scene and object classification tasks, and demonstrate that our approach outperforms or matches recent works using CNN activations for these tasks." ] }
1604.00147
2315856743
This paper presents a novel method for learning a pose lexicon comprising semantic poses defined by textual instructions and their associated visual poses defined by visual features. The proposed method simultaneously takes two input streams, semantic poses and visual pose candidates, and statistically learns a mapping between them to construct the lexicon. With the learned lexicon, action recognition can be cast as the problem of finding the maximum translation probability of a sequence of semantic poses given a stream of visual pose candidates. Experiments evaluating pre-trained and zero-shot action recognition conducted on MSRC-12 gesture and WorkoutSu-10 exercise datasets were used to verify the efficacy of the proposed method.
Despite the good progress made in action recognition over the past decade, few studies have reported methods based on semantic learning. Earlier methods bridged the semantic gap using mid-level features (e.g., visual keywords) @cite_17 obtained by quantizing low-level spatio-temporal features which form a visual vocabulary. However, mid-level features are not sufficiently robust to obtain good performance on relatively large action datasets. This problem has been addressed by proposing high-level latent semantic features to represent semantically similar mid-level features. Unsupervised methods @cite_24 @cite_16 were previously applied for learning latent semantics based on topic models; examples include probabilistic latent semantic analysis @cite_27 and latent Dirichlet allocation (LDA) @cite_6 . Recently, a multi-layer model @cite_14 based on LDA was proposed for learning local and global action semantics. The intuitive basis of using mid- and high-level latent semantic features is that frequently co-occurring low-level features are correlated at some conceptual level. It is noteworthy that these two kinds of semantic features have no explicit semantic relationship to the problem, a situation different from the proposed semantic poses.
{ "cite_N": [ "@cite_14", "@cite_6", "@cite_24", "@cite_27", "@cite_16", "@cite_17" ], "mid": [ "1967033562", "1880262756", "2158169396", "2134731454", "2130843763", "2142194269" ], "abstract": [ "Inspired by the recent success of hierarchical representation, we propose a new hierarchical variant of latent Dirichlet allocation (h-LDA) for action recognition. The model consists of an appearance group and a motion group, and we introduce a new hierarchical structure including two-layer topics in each group to learn the spatial temporal patterns (STPs) of human actions. The basic idea is that the two-layer topics are used to model the global STPs and the local STPs of the actions respectively. Two groups of discrete words are generated from two complementary kinds of features for each group. Each topic learned in these two groups is used to describe a particular aspect of the actions. Specifically, the mid-level topics are learned to describe the local STPs by including the geometric structure information in the lower-level words. The top-level topics are learned from the mid-level topics and are the mixture distribution of the local STPs, which makes the top-level topics appropriate to represent the global STPs. In addition, we give the learning and inference process by Gibbs sampling with reasonable assumptions. Finally, each sample is discriminatively represented as the probabilistic distribution over the global STPs learned by the proposed h-LDA. Experimental results on two datasets demonstrate the effectiveness of our approach for action recognition.", "We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. 
In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.", "We present a novel unsupervised learning method for human action categories. A video sequence is represented as a collection of spatial-temporal words by extracting space-time interest points. The algorithm automatically learns the probability distributions of the spatial-temporal words and the intermediate topics corresponding to human action categories. This is achieved by using latent topic models such as the probabilistic Latent Semantic Analysis (pLSA) model and Latent Dirichlet Allocation (LDA). Our approach can handle noisy feature points arisen from dynamic background and moving cameras due to the application of the probabilistic models. Given a novel video sequence, the algorithm can categorize and localize the human action(s) contained in the video. We test our algorithm on three challenging datasets: the KTH human motion dataset, the Weizmann human action dataset, and a recent dataset of figure skating actions. Our results reflect the promise of such a simple approach. In addition, our algorithm can recognize and localize multiple actions in long and complex video sequences containing multiple motions.", "This paper presents a novel statistical method for factor analysis of binary and count data which is closely related to a technique known as Latent Semantic Analysis. In contrast to the latter method which stems from linear algebra and performs a Singular Value Decomposition of co-occurrence tables, the proposed technique uses a generative latent class model to perform a probabilistic mixture decomposition. 
This results in a more principled approach with a solid foundation in statistical inference. More precisely, we propose to make use of a temperature controlled version of the Expectation Maximization algorithm for model fitting, which has shown excellent performance in practice. Probabilistic Latent Semantic Analysis has many applications, most prominently in information retrieval, natural language processing, machine learning from text, and in related areas. The paper presents perplexity results for different types of text and linguistic data collections and discusses an application in automated document indexing. The experiments indicate substantial and consistent improvements of the probabilistic method over standard Latent Semantic Analysis.", "We propose two new models for human action recognition from video sequences using topic models. Video sequences are represented by a novel \"bag-of-words\" representation, where each frame corresponds to a \"word\". Our models differ from previous latent topic models for visual recognition in two major aspects: first of all, the latent topics in our models directly correspond to class labels; second, some of the latent variables in previous topic models become observed in our case. Our models have several advantages over other latent topic models used in visual recognition. First of all, the training is much easier due to the decoupling of the model parameters. Second, it alleviates the issue of how to choose the appropriate number of latent topics. Third, it achieves much better performance by utilizing the information provided by the class labels in the training set. We present action classification results on five different data sets. Our results are either comparable to, or significantly better than previously published results on these data sets.", "The aim of this paper is to address recognition of natural human actions in diverse and realistic video settings. 
This challenging but important subject has mostly been ignored in the past due to several problems one of which is the lack of realistic and annotated video datasets. Our first contribution is to address this limitation and to investigate the use of movie scripts for automatic annotation of human actions in videos. We evaluate alternative methods for action retrieval from scripts and show benefits of a text-based classifier. Using the retrieved action samples for visual learning, we next turn to the problem of action classification in video. We present a new method for video classification that builds upon and extends several recent ideas including local space-time features, space-time pyramids and multi-channel non-linear SVMs. The method is shown to improve state-of-the-art results on the standard KTH action dataset by achieving 91.8 accuracy. Given the inherent problem of noisy labels in automatic annotation, we particularly investigate and show high tolerance of our method to annotation errors in the training set. We finally apply the method to learning and classifying challenging action classes in movies and show promising results." ] }
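The aspect-model abstracts above (pLSA fitted with EM) can be condensed into a minimal sketch. Everything below is illustrative, not the papers' implementation: the toy corpus, the topic count, and the use of plain rather than tempered EM are all simplifying assumptions.

```python
import numpy as np

def plsa(counts, n_topics=2, n_iter=50, seed=0):
    """Tiny pLSA aspect model fitted by EM: counts is a document-by-word
    count matrix; returns P(z|d) and P(w|z). Plain EM, no tempering."""
    rng = np.random.default_rng(seed)
    D, W = counts.shape
    p_z_d = rng.random((D, n_topics)); p_z_d /= p_z_d.sum(1, keepdims=True)
    p_w_z = rng.random((n_topics, W)); p_w_z /= p_w_z.sum(1, keepdims=True)
    for _ in range(n_iter):
        # E-step: responsibilities P(z|d,w), shape D x W x Z
        joint = p_z_d[:, None, :] * p_w_z.T[None, :, :]
        resp = joint / (joint.sum(-1, keepdims=True) + 1e-12)
        # M-step: re-estimate both multinomials from expected counts
        nz = counts[:, :, None] * resp                        # D x W x Z
        p_z_d = nz.sum(1) / (counts.sum(1, keepdims=True) + 1e-12)
        p_w_z = nz.sum(0).T
        p_w_z /= p_w_z.sum(1, keepdims=True) + 1e-12
    return p_z_d, p_w_z

# toy corpus with two clearly separated "topics": words 0-1 vs words 2-3
docs = np.array([[5, 5, 0, 0], [4, 6, 0, 0], [0, 0, 5, 5], [0, 0, 6, 4]], float)
theta, phi = plsa(docs)
```

In the bag-of-words action-recognition setting described above, `docs` would be per-video histograms of spatial-temporal words rather than text word counts; the fitting procedure is the same.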
1604.00133
2319888919
The objective of this paper is the effective transfer of the Convolutional Neural Network (CNN) feature in image search and classification. Systematically, we study three facts in CNN transfer. 1) We demonstrate the advantage of using images with a properly large size as input to CNN instead of the conventionally resized one. 2) We benchmark the performance of different CNN layers improved by average/max pooling on the feature maps. Our observation suggests that the Conv5 feature yields very competitive accuracy under such a pooling step. 3) We find that the simple combination of pooled features extracted across various CNN layers is effective in collecting evidence from both low- and high-level descriptors. Following these good practices, we are capable of improving the state of the art on a number of benchmarks by a large margin.
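The average/max pooling of convolutional feature maps mentioned in the abstract can be sketched as follows; the channels-first tensor layout and the concatenate-then-normalize step are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def pool_conv_maps(feature_maps):
    """Pool a stack of convolutional feature maps (C x H x W) into a
    global image descriptor: average- and max-pool each channel, then
    concatenate the two C-dim vectors and L2-normalize."""
    c = feature_maps.shape[0]
    flat = feature_maps.reshape(c, -1)            # C x (H*W)
    avg = flat.mean(axis=1)                       # average pooling per channel
    mx = flat.max(axis=1)                         # max pooling per channel
    desc = np.concatenate([avg, mx])              # 2C-dim descriptor
    return desc / (np.linalg.norm(desc) + 1e-12)  # L2-normalize

# toy Conv5-like tensor: 4 channels on a 3x3 spatial grid
maps = np.arange(36, dtype=np.float64).reshape(4, 3, 3)
d = pool_conv_maps(maps)
```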
In most cases, features from the fully connected (FC) layers are preferred, with only sporadic reports on intermediate features. For the former, good examples include "Regions with Convolutional Neural Network Features" (R-CNN) @cite_32 , CNN baselines for recognition @cite_5 , and Neural Codes @cite_41 . The prevalent usage of FC features is mainly attributed to their strong generalization and semantics-descriptive ability. Regarding intermediate features, on the other hand, the results of He et al. @cite_40 on the Caltech-101 dataset @cite_34 suggest that the Conv5 feature is superior if Spatial Pyramid Pooling (SPP) is used, and inferior to FC6 if no pooling step is taken. Xu et al. @cite_30 find that the VLAD @cite_51 encoded Conv5 features produce higher accuracy on the MEDTest14 dataset in event detection. In @cite_36 , Ng et al. observe that better performance in image search appears with intermediate layers of GoogLeNet @cite_53 and VGGNet @cite_19 when VLAD encoding is used. This paper demonstrates the competitiveness of Conv5 with FC features using simple pooling techniques. In a contemporary work, Mousavian et al. @cite_35 draw similar insights in image search. Our work is carried out independently and provides a comprehensive evaluation of intermediate features on both image search and classification.
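Since several of the works above rely on VLAD encoding of intermediate activations, a minimal VLAD sketch may help. The codebook size, descriptor dimension, and the absence of PCA/whitening or intra-normalization are simplifying assumptions.

```python
import numpy as np

def vlad_encode(descriptors, codebook):
    """Minimal VLAD: assign each local descriptor (N x D) to its nearest
    codeword (K x D), accumulate residuals per codeword, then
    L2-normalize the flattened K*D vector."""
    # squared distances, shape N x K
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    assign = d2.argmin(axis=1)
    K, D = codebook.shape
    v = np.zeros((K, D))
    for i, k in enumerate(assign):
        v[k] += descriptors[i] - codebook[k]   # residual accumulation
    v = v.ravel()
    return v / (np.linalg.norm(v) + 1e-12)

rng = np.random.default_rng(0)
local = rng.normal(size=(50, 8))   # e.g. Conv5 activations at 50 positions
book = rng.normal(size=(4, 8))     # 4 visual words (would come from k-means)
code = vlad_encode(local, book)
```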
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_41", "@cite_36", "@cite_53", "@cite_32", "@cite_19", "@cite_40", "@cite_5", "@cite_34", "@cite_51" ], "mid": [ "1950136256", "2181199739", "204268067", "2949266290", "2950179405", "2102605133", "1686810756", "2179352600", "2953391683", "2115733720", "2124509324" ], "abstract": [ "In this paper, we propose a discriminative video representation for event detection over a large scale video dataset when only limited hardware resources are available. The focus of this paper is to effectively leverage deep Convolutional Neural Networks (CNNs) to advance event detection, where only frame level static descriptors can be extracted by the existing CNN toolkits. This paper makes two contributions to the inference of CNN video representation. First, while average pooling and max pooling have long been the standard approaches to aggregating frame level static features, we show that performance can be significantly improved by taking advantage of an appropriate encoding method. Second, we propose using a set of latent concept descriptors as the frame descriptor, which enriches visual information while keeping it computationally affordable. The integration of the two contributions results in a new state-of-the-art performance in event detection over the largest video datasets. Compared to improved Dense Trajectories, which has been recognized as the best video representation for event detection, our new representation improves the Mean Average Precision (mAP) from 27.6 to 36.8 for the TRECVID MEDTest 14 dataset and from 34.0 to 44.6 for the TRECVID MEDTest 13 dataset.", "Several recent approaches showed how the representations learned by Convolutional Neural Networks can be repurposed for novel tasks. Most commonly it has been shown that the activation features of the last fully connected layers (fc7 or fc6) of the network, followed by a linear classifier outperform the state-of-the-art on several recognition challenge datasets. 
Instead of recognition, this paper focuses on the image retrieval problem and proposes and examines alternative pooling strategies derived for CNN features. The presented scheme uses the feature maps from an earlier layer 5 of the CNN architecture, which has been shown to preserve coarse spatial information and is semantically meaningful. We examine several pooling strategies and demonstrate superior performance on the image retrieval task (INRIA Holidays) at a fraction of the computational cost, while using relatively small memory requirements. In addition to retrieval, we see similar efficiency gains on the SUN397 scene categorization dataset, demonstrating wide applicability of this simple strategy. We also introduce and evaluate a novel GeoPlaces5K dataset from different geographical locations in the world for image retrieval that stresses more dramatic changes in appearance and viewpoint.", "It has been shown that the activations invoked by an image within the top layers of a large convolutional neural network provide a high-level descriptor of the visual content of the image. In this paper, we investigate the use of such descriptors (neural codes) within the image retrieval application. In the experiments with several standard retrieval benchmarks, we establish that neural codes perform competitively even when the convolutional neural network has been trained for an unrelated classification task (e.g. Image-Net). We also evaluate the improvement in the retrieval performance of neural codes, when the network is retrained on a dataset of images that are similar to images encountered at test time.", "Deep convolutional neural networks have been successfully applied to image classification tasks. When these same networks have been applied to image retrieval, the assumption has been made that the last layers would give the best performance, as they do in classification. 
We show that for instance-level image retrieval, lower layers often perform better than the last layers in convolutional neural networks. We present an approach for extracting convolutional features from different layers of the networks, and adopt VLAD encoding to encode features into a single vector for each image. We investigate the effect of different layers and scales of input images on the performance of convolutional features using the recent deep networks OxfordNet and GoogLeNet. Experiments demonstrate that intermediate layers or higher layers with finer scales produce better results for image retrieval, compared to the last layer. When using compressed 128-D VLAD descriptors, our method obtains state-of-the-art results and outperforms other VLAD and CNN based approaches on two out of three test datasets. Our work provides guidance for transferring deep networks trained on image classification to image retrieval tasks.", "We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. 
The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g. 
224×224) input image. This requirement is “artificial” and may hurt the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with a more principled pooling strategy, “spatial pyramid pooling”, to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size/scale. By removing the fixed-size limitation, we can improve all CNN-based image classification methods in general. Our SPP-net achieves state-of-the-art accuracy on the datasets of ImageNet 2012, Pascal VOC 2007, and Caltech101.", "Recent results indicate that the generic descriptors extracted from the convolutional neural networks are very powerful. This paper adds to the mounting evidence that this is indeed the case. We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the network which was trained to perform object classification on ILSVRC13. We use features extracted from the network as a generic image representation to tackle the diverse range of recognition tasks of object image classification, scene recognition, fine grained recognition, attribute detection and image retrieval applied to a diverse set of datasets. We selected these tasks and datasets as they gradually move further away from the original task and data the network was trained to solve. Astonishingly, we report consistent superior results compared to the highly tuned state-of-the-art systems in all the visual classification tasks on various datasets. For instance retrieval it consistently outperforms low memory footprint methods except for sculptures dataset. The results are achieved using a linear SVM classifier (or @math distance in case of retrieval) applied to a feature representation of size 4096 extracted from a layer in the net. The representations are further modified using simple augmentation techniques e.g. jittering. 
The results strongly suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks.", "Learning visual models of object categories notoriously requires hundreds or thousands of training examples. We show that it is possible to learn much information about a category from just one, or a handful, of images. The key insight is that, rather than learning from scratch, one can take advantage of knowledge coming from previously learned categories, no matter how different these categories might be. We explore a Bayesian implementation of this idea. Object categories are represented by probabilistic models. Prior knowledge is represented as a probability density function on the parameters of these models. The posterior model for an object category is obtained by updating the prior in the light of one or more observations. We test a simple implementation of our algorithm on a database of 101 diverse object categories. We compare category models learned by an implementation of our Bayesian approach to models learned from by maximum likelihood (ML) and maximum a posteriori (MAP) methods. We find that on a database of more than 100 categories, the Bayesian approach produces informative models when the number of training examples is too small for other methods to operate successfully.", "This paper introduces a product quantization-based approach for approximate nearest neighbor search. The idea is to decompose the space into a Cartesian product of low-dimensional subspaces and to quantize each subspace separately. A vector is represented by a short code composed of its subspace quantization indices. The euclidean distance between two vectors can be efficiently estimated from their codes. An asymmetric version increases precision, as it computes the approximate distance between a vector and a code. 
Experimental results show that our approach searches for nearest neighbors efficiently, in particular in combination with an inverted file system. Results for SIFT and GIST image descriptors show excellent search accuracy, outperforming three state-of-the-art approaches. The scalability of our approach is validated on a data set of two billion vectors." ] }
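The product-quantization abstract above can be condensed into a toy sketch. The codebook sizes are illustrative, and the centroids are random here rather than trained with k-means as in a real system.

```python
import numpy as np

def pq_encode(x, codebooks):
    """Encode a vector as per-subspace codeword indices: split x into
    equal subvectors and quantize each against its own codebook
    (codebooks[m] holds K centroids for the m-th subspace)."""
    M = len(codebooks)
    sub = np.split(x, M)
    return [int(((cb - s) ** 2).sum(1).argmin()) for s, cb in zip(sub, codebooks)]

def pq_asymmetric_distance(query, code, codebooks):
    """Asymmetric distance: exact query subvectors vs. the database
    vector's reconstructed centroids, summed over subspaces."""
    sub = np.split(query, len(codebooks))
    return sum(((s - cb[k]) ** 2).sum() for s, cb, k in zip(sub, codebooks, code))

rng = np.random.default_rng(1)
books = [rng.normal(size=(4, 2)) for _ in range(2)]  # 2 subspaces, 4 centroids
x = np.concatenate([books[0][2], books[1][1]])       # lies exactly on centroids
code = pq_encode(x, books)
d = pq_asymmetric_distance(x, code, books)
```

In a real index the per-subspace distances would be precomputed into lookup tables once per query, which is what makes the scheme fast at scale.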
1604.00133
2319888919
The objective of this paper is the effective transfer of the Convolutional Neural Network (CNN) feature in image search and classification. Systematically, we study three facts in CNN transfer. 1) We demonstrate the advantage of using images with a properly large size as input to CNN instead of the conventionally resized one. 2) We benchmark the performance of different CNN layers improved by average max pooling on the feature maps. Our observation suggests that the Conv5 feature yields very competitive accuracy under such pooling step. 3) We find that the simple combination of pooled features extracted across various CNN layers is effective in collecting evidences from both low and high level descriptors. Following these good practices, we are capable of improving the state of the art on a number of benchmarks to a large margin.
The integration of multi-scale cues has been proven to bring decent improvements. A well-known example is Spatial Pyramid Matching (SPM) @cite_26 , which pools Bag-of-Words vectors from multiple image scales to form a long vector. In works associated with CNN, Gong et al. @cite_28 pool CNN features extracted from multi-scale image patches for image search and classification. Yoo et al. @cite_46 propose to encode dense activations of the FC layers with the Fisher Vector @cite_37 from multiple image scales for image classification. In image segmentation, Farabet et al. @cite_52 concatenate CNN features from the same CNN layer but from multiple scales of the image, so that the features have similar invariance. The closest work to ours is the "Hypercolumn" @cite_57 , in which Hariharan et al. address object segmentation and pose estimation by resizing the convolutional maps in each layer and concatenating them at each pixel position. This paper instead focuses on holistic image recognition by fusing pooled vectors from various layers in the CNN structure, and demonstrates consistent improvements on the benchmarks.
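The layer-fusion idea discussed above (pooling vectors from several layers and concatenating them) can be sketched as follows; the per-layer L2 normalization before concatenation and the layer names/shapes are assumptions for illustration.

```python
import numpy as np

def fuse_layers(layer_maps):
    """Average-pool each layer's feature maps (C_l x H_l x W_l) into a
    C_l-dim vector, L2-normalize it so no layer dominates, and
    concatenate across layers into one holistic descriptor."""
    parts = []
    for maps in layer_maps:
        v = maps.reshape(maps.shape[0], -1).mean(axis=1)  # per-channel mean
        parts.append(v / (np.linalg.norm(v) + 1e-12))     # per-layer L2 norm
    return np.concatenate(parts)

# toy maps standing in for two layers with different channel counts
conv4 = np.ones((3, 4, 4))
conv5 = np.ones((5, 2, 2))
fused = fuse_layers([conv4, conv5])
```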
{ "cite_N": [ "@cite_37", "@cite_26", "@cite_28", "@cite_52", "@cite_57", "@cite_46" ], "mid": [ "2147238549", "", "1524680991", "2022508996", "2953040121", "1932481952" ], "abstract": [ "Within the field of pattern classification, the Fisher kernel is a powerful framework which combines the strengths of generative and discriminative approaches. The idea is to characterize a signal with a gradient vector derived from a generative probability model and to subsequently feed this representation to a discriminative classifier. We propose to apply this framework to image categorization where the input signals are images and where the underlying generative model is a visual vocabulary: a Gaussian mixture model which approximates the distribution of low-level features in images. We show that Fisher kernels can actually be understood as an extension of the popular bag-of-visterms. Our approach demonstrates excellent performance on two challenging databases: an in-house database of 19 object scene categories and the recently released VOC 2006 database. It is also very practical: it has low computational needs both at training and test time and vocabularies trained on one set of categories can be applied to another set without any significant loss in performance.", "", "Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations lack geometric invariance, which limits their robustness for classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (MOP-CNN). This scheme extracts CNN activations for local patches at multiple scale levels, performs orderless VLAD pooling of these activations at each level separately, and concatenates the result. 
The resulting MOP-CNN representation can be used as a generic feature for either supervised or unsupervised recognition tasks, from image classification to instance-level retrieval; it consistently outperforms global CNN activations without requiring any joint training of prediction layers for a particular target dataset. In absolute terms, it achieves state-of-the-art results on the challenging SUN397 and MIT Indoor Scenes classification datasets, and competitive results on ILSVRC2012/2013 classification and INRIA Holidays retrieval datasets.", "Scene labeling consists of labeling each pixel in an image with the category of the object it belongs to. We propose a method that uses a multiscale convolutional network trained from raw pixels to extract dense feature vectors that encode regions of multiple sizes centered on each pixel. The method alleviates the need for engineered features, and produces a powerful representation that captures texture, shape, and contextual information. We report results using multiple postprocessing methods to produce the final labeling. Among those, we propose a technique to automatically retrieve, from a pool of segmentation components, an optimal set of components that best explain the scene; these components are arbitrary, for example, they can be taken from a segmentation tree or from any family of oversegmentations. The system yields record accuracies on the SIFT Flow dataset (33 classes) and the Barcelona dataset (170 classes) and near-record accuracy on Stanford background dataset (eight classes), while being an order of magnitude faster than competing approaches, producing a 320×240 image labeling in less than a second, including feature extraction.", "Recognition algorithms based on convolutional networks (CNNs) typically use the output of the last layer as feature representation. However, the information in this layer may be too coarse to allow precise localization. 
On the contrary, earlier layers may be precise in localization but will not capture semantics. To get the best of both worlds, we define the hypercolumn at a pixel as the vector of activations of all CNN units above that pixel. Using hypercolumns as pixel descriptors, we show results on three fine-grained localization tasks: simultaneous detection and segmentation [22], where we improve the state-of-the-art from 49.7 [22] mean AP^r to 60.0; keypoint localization, where we get a 3.3 point boost over [20]; and part labeling, where we show a 6.6 point gain over a strong baseline.", "Compared to image representation based on low-level local descriptors, deep neural activations of Convolutional Neural Networks (CNNs) are richer in mid-level representation, but poorer in geometric invariance properties. In this paper, we present a straightforward framework for better image representation by combining the two approaches. To take advantage of both representations, we extract a fair amount of multi-scale dense local activations from a pre-trained CNN. We then aggregate the activations by the Fisher kernel framework, which has been modified with a simple scale-wise normalization essential to make it suitable for CNN activations. Our representation demonstrates new state-of-the-art performances on three public datasets: 80.78 (Acc.) on MIT Indoor 67, 83.20 (mAP) on PASCAL VOC 2007 and 91.28 (Acc.) on Oxford 102 Flowers. The results suggest that our proposal can be used as a primary image representation for better performances in wide visual recognition tasks." ] }
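The hypercolumn construction described in the abstract above can be sketched by resizing every layer's maps to a common grid and stacking them per pixel; nearest-neighbour upsampling is a simplification here (the original work uses smoother resizing).

```python
import numpy as np

def hypercolumn(layer_maps, out_hw):
    """Resize each layer's maps (C_l x H_l x W_l) to a common H x W grid
    via nearest-neighbour indexing and stack along channels, so each
    pixel carries the activations of all layers above it."""
    H, W = out_hw
    stacked = []
    for maps in layer_maps:
        c, h, w = maps.shape
        rows = np.arange(H) * h // H          # nearest source row per output row
        cols = np.arange(W) * w // W          # nearest source col per output col
        stacked.append(maps[:, rows][:, :, cols])
    return np.concatenate(stacked, axis=0)    # (sum of C_l) x H x W

# toy layers: a coarse 2-channel map and a finer 3-channel map
l1 = np.zeros((2, 2, 2)); l1[0] = 1.0
l2 = np.ones((3, 4, 4))
hc = hypercolumn([l1, l2], (4, 4))
```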
1604.00133
2319888919
The objective of this paper is the effective transfer of the Convolutional Neural Network (CNN) feature in image search and classification. Systematically, we study three facts in CNN transfer. 1) We demonstrate the advantage of using images with a properly large size as input to CNN instead of the conventionally resized one. 2) We benchmark the performance of different CNN layers improved by average max pooling on the feature maps. Our observation suggests that the Conv5 feature yields very competitive accuracy under such pooling step. 3) We find that the simple combination of pooled features extracted across various CNN layers is effective in collecting evidences from both low and high level descriptors. Following these good practices, we are capable of improving the state of the art on a number of benchmarks to a large margin.
In image search, CNN can be used as global @cite_41 @cite_24 @cite_6 , regional @cite_28 @cite_8 , or local features @cite_36 . Basically, CNN features used in image search are required to be memory efficient, so encoding or hashing schemes are beneficial. Compared with the traditional search framework @cite_42 based on the SIFT descriptor @cite_56 and the inverted index, the CNN feature is more flexible and yields superior results @cite_22 . One problem of the CNN feature is its lack of invariance to rotation, occlusion, and truncation, so its usage as a local feature @cite_36 and the inclusion of rotated patches @cite_12 are good choices against these limitations. On the other hand, image classification has been greatly advanced by the introduction of CNN @cite_1 @cite_6 . CNN features are mainly used as global @cite_1 or part @cite_14 descriptors, and are shown to outperform classic hand-crafted ones in both transfer and non-transfer classification tasks. While baseline results with FC features have been reported on small- and medium-sized datasets @cite_5 , a detailed evaluation of different layers as well as their combination is still lacking. This paper aims at filling this gap by conducting comprehensive empirical studies.
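A memory-light global-descriptor search of the kind discussed above reduces, after L2 normalization, to an inner product. The toy ranking function below makes that concrete; the 2-D descriptors are purely illustrative stand-ins for pooled CNN features.

```python
import numpy as np

def rank_by_cosine(query, database):
    """Rank database descriptors against a query: with unit-norm
    vectors the inner product equals cosine similarity, so ranking
    is a single matrix-vector product."""
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    sims = db @ q
    return np.argsort(-sims)   # indices, best match first

db = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
order = rank_by_cosine(np.array([2.0, 1.9]), db)
```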
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_8", "@cite_41", "@cite_28", "@cite_36", "@cite_42", "@cite_1", "@cite_6", "@cite_56", "@cite_24", "@cite_5", "@cite_12" ], "mid": [ "", "2164022341", "", "204268067", "1524680991", "2949266290", "", "", "", "2151103935", "", "2953391683", "" ], "abstract": [ "", "", "", "It has been shown that the activations invoked by an image within the top layers of a large convolutional neural network provide a high-level descriptor of the visual content of the image. In this paper, we investigate the use of such descriptors (neural codes) within the image retrieval application. In the experiments with several standard retrieval benchmarks, we establish that neural codes perform competitively even when the convolutional neural network has been trained for an unrelated classification task (e.g. Image-Net). We also evaluate the improvement in the retrieval performance of neural codes, when the network is retrained on a dataset of images that are similar to images encountered at test time.", "Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations lack geometric invariance, which limits their robustness for classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (MOP-CNN). This scheme extracts CNN activations for local patches at multiple scale levels, performs orderless VLAD pooling of these activations at each level separately, and concatenates the result. 
The resulting MOP-CNN representation can be used as a generic feature for either supervised or unsupervised recognition tasks, from image classification to instance-level retrieval; it consistently outperforms global CNN activations without requiring any joint training of prediction layers for a particular target dataset. In absolute terms, it achieves state-of-the-art results on the challenging SUN397 and MIT Indoor Scenes classification datasets, and competitive results on ILSVRC2012/2013 classification and INRIA Holidays retrieval datasets.", "Deep convolutional neural networks have been successfully applied to image classification tasks. When these same networks have been applied to image retrieval, the assumption has been made that the last layers would give the best performance, as they do in classification. We show that for instance-level image retrieval, lower layers often perform better than the last layers in convolutional neural networks. We present an approach for extracting convolutional features from different layers of the networks, and adopt VLAD encoding to encode features into a single vector for each image. We investigate the effect of different layers and scales of input images on the performance of convolutional features using the recent deep networks OxfordNet and GoogLeNet. Experiments demonstrate that intermediate layers or higher layers with finer scales produce better results for image retrieval, compared to the last layer. When using compressed 128-D VLAD descriptors, our method obtains state-of-the-art results and outperforms other VLAD and CNN based approaches on two out of three test datasets. Our work provides guidance for transferring deep networks trained on image classification to image retrieval tasks.", "", "", "", "This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. 
The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.", "", "Recent results indicate that the generic descriptors extracted from the convolutional neural networks are very powerful. This paper adds to the mounting evidence that this is indeed the case. We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the network which was trained to perform object classification on ILSVRC13. We use features extracted from the network as a generic image representation to tackle the diverse range of recognition tasks of object image classification, scene recognition, fine grained recognition, attribute detection and image retrieval applied to a diverse set of datasets. We selected these tasks and datasets as they gradually move further away from the original task and data the network was trained to solve. Astonishingly, we report consistent superior results compared to the highly tuned state-of-the-art systems in all the visual classification tasks on various datasets. 
For instance retrieval it consistently outperforms low memory footprint methods except for sculptures dataset. The results are achieved using a linear SVM classifier (or @math distance in case of retrieval) applied to a feature representation of size 4096 extracted from a layer in the net. The representations are further modified using simple augmentation techniques e.g. jittering. The results strongly suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks.", "" ] }
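The SIFT matching pipeline summarized in the abstract above relies on a nearest-neighbour distance-ratio test before any Hough/geometric verification. A minimal sketch, with 2-D toy descriptors standing in for 128-D SIFT vectors and an assumed ratio threshold:

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.8):
    """Lowe-style matching: accept a putative match only when the
    nearest neighbour in desc_b is clearly closer than the second
    nearest (distance-ratio test)."""
    matches = []
    for i, d in enumerate(desc_a):
        dist = np.linalg.norm(desc_b - d, axis=1)
        j1, j2 = np.argsort(dist)[:2]          # two closest candidates
        if dist[j1] < ratio * dist[j2]:        # unambiguous match only
            matches.append((i, int(j1)))
    return matches

a = np.array([[0.0, 0.0], [5.0, 5.0]])
b = np.array([[0.1, 0.0], [3.0, 0.0], [5.0, 5.1]])
m = ratio_test_match(a, b)
```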
1604.00192
2318192700
This paper presents a new method of singing voice analysis that performs mutually-dependent singing voice separation and vocal fundamental frequency (F0) estimation. Vocal F0 estimation is considered to become easier if singing voices can be separated from a music audio signal, and vocal F0 contours are useful for singing voice separation. This calls for an approach that improves the performance of each of these tasks by using the results of the other. The proposed method first performs robust principal component analysis (RPCA) for roughly extracting singing voices from a target music audio signal. The F0 contour of the main melody is then estimated from the separated singing voices by finding the optimal temporal path over an F0 saliency spectrogram. Finally, the singing voices are separated again more accurately by combining a conventional time-frequency mask given by RPCA with another mask that passes only the harmonic structures of the estimated F0s. Experimental results showed that the proposed method significantly improved the performances of both singing voice separation and vocal F0 estimation. The proposed method also outperformed all the other methods of singing voice separation submitted to an international music analysis competition called MIREX 2014.
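The final mask-combination step of the abstract (intersecting an RPCA-derived mask with an F0-harmonic mask) can be sketched as below. The mask width, harmonic count, and element-wise product are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def harmonic_mask(f0_hz, freqs_hz, width_hz=20.0, n_harmonics=5):
    """Hypothetical helper: binary frequency mask for one frame that
    passes only bins near integer multiples of the estimated F0."""
    mask = np.zeros_like(freqs_hz, dtype=bool)
    for h in range(1, n_harmonics + 1):
        mask |= np.abs(freqs_hz - h * f0_hz) <= width_hz
    return mask.astype(float)

# combine with an RPCA-style soft mask by element-wise product, so only
# sparse components that also sit on the vocal harmonics survive
freqs = np.linspace(0.0, 1000.0, 101)      # 10 Hz bins
rpca_mask = np.full_like(freqs, 0.8)       # toy soft mask for one frame
combined = rpca_mask * harmonic_mask(200.0, freqs)
```

Applied per frame along an F0 contour, `combined` would multiply the mixture spectrogram to re-extract the vocals.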
A typical approach to vocal F0 estimation is to identify F0s that have predominant harmonic structures by using an F0 saliency spectrogram that represents how likely an F0 is to exist in each time-frequency bin. The core of this approach is how the saliency spectrogram is estimated @cite_39 @cite_15 @cite_18 @cite_23 @cite_43 . Goto @cite_39 proposed a statistical multiple-F0 analyzer called PreFEst that approximates an observed spectrum as a superimposition of harmonic structures. Each harmonic structure is represented as a Gaussian mixture model (GMM), and the mixing weights of GMMs corresponding to different F0s can be regarded as a saliency spectrum. Rao @cite_15 tracked multiple vocal F0 candidates, including the F0s of locally predominant non-vocal sounds, and then identified vocal F0s by focusing on the temporal instability of vocal components. Dressler @cite_18 attempted to reduce the number of possible overtones by identifying which overtones are derived from a vocal harmonic structure. Salamon @cite_43 proposed a heuristics-based method called MELODIA that focuses on the characteristics of vocal F0 contours. The contours of F0 candidates are obtained by using a saliency spectrogram based on subharmonic summation. This method achieved state-of-the-art results in vocal F0 estimation.
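The optimal temporal path over an F0 saliency spectrogram mentioned above is typically found with dynamic programming. A minimal sketch, assuming a simple linear jump penalty between F0 bins (actual systems use tuned transition costs):

```python
import numpy as np

def optimal_f0_path(saliency, transition_penalty=1.0):
    """Dynamic-programming path through an F0 saliency spectrogram:
    maximizes summed saliency minus a penalty on F0-bin jumps.

    saliency: (n_bins, n_frames) array; saliency[b, t] scores bin b at frame t.
    Returns: (n_frames,) array of selected bin indices.
    """
    n_bins, n_frames = saliency.shape
    bins = np.arange(n_bins)
    # jump[b, j]: cost of moving from bin j (previous frame) to bin b
    jump = transition_penalty * np.abs(bins[:, None] - bins[None, :])
    cost = saliency[:, 0].astype(float).copy()   # best score ending at each bin
    back = np.zeros((n_bins, n_frames), dtype=int)  # backpointers
    for t in range(1, n_frames):
        scores = cost[None, :] - jump            # scores[b, j]: arrive at b from j
        back[:, t] = np.argmax(scores, axis=1)
        cost = scores[bins, back[:, t]] + saliency[:, t]
    # trace back the best path
    path = np.empty(n_frames, dtype=int)
    path[-1] = int(np.argmax(cost))
    for t in range(n_frames - 1, 0, -1):
        path[t - 1] = back[path[t], t]
    return path
```

The transition penalty is what makes the path favor temporally continuous F0 contours over frame-wise saliency maxima.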
{ "cite_N": [ "@cite_18", "@cite_39", "@cite_43", "@cite_23", "@cite_15" ], "mid": [ "2293399065", "2023800933", "1976069042", "1988844105", "2134039779" ], "abstract": [ "This paper proposes an efficient approach for the identification of the predominant voice from polyphonic musical audio. The algorithm implements an auditory streaming model which builds upon tone objects and salient pitches. The formation of voices is based on the regular update of the frequency and the magnitude of so called streaming agents, which aim at salient tones or pitches close to their preferred frequency range. Streaming agents which succeed to assemble a big magnitude start new voice objects, which in turn add adequate tones. The algorithm was evaluated as part of a melody extraction system during the MIREX audio melody extraction evaluation, where it gained very good results in the voicing detection and overall accuracy.", "Abstract In this paper, we describe the concept of music scene description and address the problem of detecting melody and bass lines in real-world audio signals containing the sounds of various instruments. Most previous pitch-estimation methods have had difficulty dealing with such complex music signals because these methods were designed to deal with mixtures of only a few sounds. To enable estimation of the fundamental frequency (F0) of the melody and bass lines, we propose a predominant-F0 estimation method called PreFEst that does not rely on the unreliable fundamental component and obtains the most predominant F0 supported by harmonics within an intentionally limited frequency range. This method estimates the relative dominance of every possible F0 (represented as a probability density function of the F0) by using MAP (maximum a posteriori probability) estimation and considers the F0’s temporal continuity by using a multiple-agent architecture. 
Experimental results with a set of ten music excerpts from compact-disc recordings showed that a real-time system implementing this method was able to detect melody and bass lines about 80% of the time these existed.", "We present a novel system for the automatic extraction of the main melody from polyphonic music recordings. Our approach is based on the creation and characterization of pitch contours, time continuous sequences of pitch candidates grouped using auditory streaming cues. We define a set of contour characteristics and show that by studying their distributions we can devise rules to distinguish between melodic and non-melodic contours. This leads to the development of new voicing detection, octave error minimization and melody selection techniques. A comparative evaluation of the proposed approach shows that it outperforms current state-of-the-art melody extraction systems in terms of overall accuracy. Further evaluation of the algorithm is provided in the form of a qualitative error analysis and the study of the effect of key parameters and algorithmic components on system performance. Finally, we conduct a glass ceiling analysis to study the current limitations of the method, and possible directions for future work are proposed.", "Extraction of predominant melody from the musical performances containing various instruments is one of the most challenging tasks in the field of music information retrieval and computational musicology. This paper presents a novel framework which estimates predominant vocal melody in real-time by tracking various sources with the help of harmonic clusters (combs) and then determining the predominant vocal source by using the harmonic strength of the source. The novel on-line harmonic comb tracking approach complies with both structural as well as temporal constraints simultaneously.
It relies upon the strong higher harmonics for robustness against distortion of the first harmonic due to low frequency accompaniments, in contrast to the existing methods which track the pitch values. The predominant vocal source identification depends upon the novel idea of source dependant filtering of recognition score, which allows the algorithm to be implemented on-line. The proposed method, although on-line, is shown to significantly outperform our implementation of a state-of-the-art offline method for vocal melody extraction. Evaluations also show the reduction in octave error and the effectiveness of novel score filtering technique in enhancing the performance.", "Melody extraction algorithms for single-channel polyphonic music typically rely on the salience of the lead melodic instrument, considered here to be the singing voice. However the simultaneous presence of one or more pitched instruments in the polyphony can cause such a predominant-F0 tracker to switch between tracking the pitch of the voice and that of an instrument of comparable strength, resulting in reduced voice-pitch detection accuracy. We propose a system that, in addition to biasing the salience measure in favor of singing voice characteristics, acknowledges that the voice may not dominate the polyphony at all instants and therefore tracks an additional pitch to better deal with the potential presence of locally dominant pitched accompaniment. A feature based on the temporal instability of voice harmonics is used to finally identify the voice pitch. The proposed system is evaluated on test data that is representative of polyphonic music with strong pitched accompaniment. Results show that the proposed system is indeed able to recover melodic information lost to its single-pitch tracking counterpart, and also outperforms another state-of-the-art melody extraction system designed for polyphonic music." ] }
1604.00409
2950215975
We describe a special case of structure from motion where the camera rotates on a sphere. The camera's optical axis lies perpendicular to the sphere's surface. In this case, the camera's pose is minimally represented by three rotation parameters. From analysis of the epipolar geometry we derive a novel and efficient solution for the essential matrix relating two images, requiring only three point correspondences in the minimal case. We apply this solver in a structure-from-motion pipeline that aggregates pairwise relations by rotation averaging followed by bundle adjustment with an inverse depth parameterization. Our methods enable scene modeling with an outward-facing camera and object scanning with an inward-facing camera.
A particular problem of interest in geometric computer vision is inferring the essential matrix relating two images from point correspondences, especially from a minimal set of correspondences @cite_14 . Minimal solutions are useful within a random sample consensus (RANSAC) @cite_4 loop to robustly estimate the motion parameters and to separate inliers from outliers. Nistér @cite_26 derived an efficient minimal solution from five point correspondences, and the method of @cite_6 later improved its accuracy. In this work, we derive a solution for the essential matrix from at least three correspondences which applies when the camera undergoes spherical motion.
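As a concrete illustration of the RANSAC loop referenced above, here is a generic sketch (not the spherical-motion solver itself; the function names and the toy line-fitting model are illustrative assumptions):

```python
import numpy as np

def ransac(data, fit_model, residuals, min_samples, threshold,
           n_iters=200, seed=0):
    """Generic RANSAC: repeatedly fit a model to a minimal random sample
    and keep the model with the largest inlier set."""
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, np.zeros(len(data), dtype=bool)
    for _ in range(n_iters):
        sample = data[rng.choice(len(data), size=min_samples, replace=False)]
        model = fit_model(sample)
        if model is None:  # degenerate minimal sample
            continue
        inliers = residuals(model, data) < threshold
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = model, inliers
    return best_model, best_inliers

# Toy usage: robustly fit a line y = a*x + b from 2-point minimal samples.
def fit_line(sample):
    (x1, y1), (x2, y2) = sample
    if x1 == x2:
        return None
    a = (y2 - y1) / (x2 - x1)
    return a, y1 - a * x1

def line_residuals(model, data):
    a, b = model
    return np.abs(data[:, 1] - (a * data[:, 0] + b))
```

In the setting of this paper, `min_samples` would be three point correspondences and `fit_model` the proposed essential-matrix solver.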
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_6", "@cite_26" ], "mid": [ "1598123022", "2085261163", "2150495450", "" ], "abstract": [ "A simple algorithm for computing the three-dimensional structure of a scene from a correlated pair of perspective projections is described here, when the spatial relationship between the two projections is unknown. This problem is relevant not only to photographic surveying1 but also to binocular vision2, where the non-visual information available to the observer about the orientation and focal length of each eye is much less accurate than the optical information supplied by the retinal images themselves. The problem also arises in monocular perception of motion3, where the two projections represent views which are separated in time as well as space. As Marr and Poggio4 have noted, the fusing of two images to produce a three-dimensional percept involves two distinct processes: the establishment of a 1:1 correspondence between image points in the two views—the ‘correspondence problem’—and the use of the associated disparities for determining the distances of visible elements in the scene. I shall assume that the correspondence problem has been solved; the problem of reconstructing the scene then reduces to that of finding the relative orientation of the two viewpoints.", "A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): Given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. 
In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing", "Abstract This paper presents a novel version of the five-point relative orientation algorithm given in Nister [Nister, D., 2004. An efficient solution to the five-point relative pose problem, IEEE Transactions on Pattern Analysis and Machine Intelligence, 26 (6), 756–770]. The name of the algorithm arises from the fact that it can operate even on the minimal five-point correspondences required for a finite number of solutions to relative orientation. For the minimal five correspondences, the algorithm returns up to 10 real solutions. The algorithm can also operate on many points. Like the previous version of the five-point algorithm, our method can operate correctly even in the face of critical surfaces, including planar and ruled quadric scenes. The paper presents comparisons with other direct methods, including the previously developed five-point method, two different six-point methods, the seven-point method, and the eight-point method. It is shown that the five-point method is superior in most cases among the direct methods. The new version of the algorithm was developed from the perspective of algebraic geometry and is presented in the context of computing a Grobner basis. The constraints are formulated in terms of polynomial equations in the entries of the fundamental matrix. The polynomial equations generate an algebraic ideal for which a Grobner basis is computed. The Grobner basis is used to compute the action matrix for multiplication by a single variable monomial. The eigenvectors of the action matrix give the solutions for all the variables and thereby also relative orientation. 
Using a Grobner basis makes the solution clear and easy to explain.", "" ] }
1604.00409
2950215975
We describe a special case of structure from motion where the camera rotates on a sphere. The camera's optical axis lies perpendicular to the sphere's surface. In this case, the camera's pose is minimally represented by three rotation parameters. From analysis of the epipolar geometry we derive a novel and efficient solution for the essential matrix relating two images, requiring only three point correspondences in the minimal case. We apply this solver in a structure-from-motion pipeline that aggregates pairwise relations by rotation averaging followed by bundle adjustment with an inverse depth parameterization. Our methods enable scene modeling with an outward-facing camera and object scanning with an inward-facing camera.
Also closely related are the works by Peleg and Ben-Ezra @cite_28 and Shum and Szeliski @cite_11 on stereo or multi-perspective panoramas. In these works, an outward-facing camera is spun on a circular path and images are captured at regular intervals. They demonstrated that by careful sampling of the images, stereo cylindrical panoramas can be created and used for either surround-view stereo viewing or 3D stereo reconstruction. These works use either controlled capture on a turntable @cite_11 or manifold mosaicing @cite_12 to obtain the positions of the images in the sequence, whereas we develop an automatic, accurate structure-from-motion pipeline which applies to both circular and spherical motion image sequences.
{ "cite_N": [ "@cite_28", "@cite_12", "@cite_11" ], "mid": [ "2128808494", "2112398571", "2175306621" ], "abstract": [ "Full panoramic images, covering 360 degrees, can be created either by using panoramic cameras or by mosaicing together many regular images. Creating panoramic views in stereo, where one panorama is generated for the left eye, and another panorama is generated for the right eye is more problematic. Earlier attempts to mosaic images from a rotating pair of stereo cameras faced severe problems of parallax and of scale changes. A new family of multiple viewpoint image projections, the Circular Projections, is developed. Two panoramic images taken using such projections can serve as a panoramic stereo pair. A system is described to generates a stereo panoramic image using circular projections from images or video taken by a single rotating camera. The system works in real-time on a PC. It should be noted that the stereo images are created without computation of 3D structure, and the depth effects are created only in the viewer's brain.", "As the field of view of a picture is much smaller than our own visual field of view, it is common to paste together several pictures to create a panoramic mosaic having a larger field of view. Images with a wider field of view can be generated by using fish-eye lens, or panoramic mosaics can be created by special devices which rotate around the camera's optical center (Quicktime VR, Surround Video), or by aligning, and pasting, frames in a video sequence to a single reference frame. Existing mosaicing methods have strong limitations on imaging conditions, and distortions are common. Manifold projection enables the creation of panoramic mosaics from video sequences under more general conditions, and in particular the unrestricted motion of a hand-held camera. The panoramic mosaic is a projection of the scene into a virtual manifold whose structure depends on the camera's motion. 
This manifold is more general than the customary projections onto a single image plane or onto a cylinder. In addition to being more general than traditional mosaics, manifold projection is also computationally efficient, as the only image deformations used are image plane translations and rotations. Real-time, software only, implementation on a Pentium-PC, proves the superior quality and speed of this approach.", "A new approach to computing a panoramic (360 degrees) depth map is presented in this paper. Our approach uses a large collection of images taken by a camera whose motion has been constrained to planar concentric circles. We resample regular perspective images to produce a set of multiperspective panoramas and then compute depth maps directly from these resampled panoramas. Our panoramas sample uniformly in three dimensions: rotation angle, inverse radial distance, and vertical elevation. The use of multiperspective panoramas eliminates the limited overlap present in the original input images and, thus, problems as in conventional multibaseline stereo can be avoided. Our approach differs from stereo matching of single-perspective panoramic images taken from different locations, where the epipolar constraints are sine curves. For our multiperspective panoramas, the epipolar geometry, to the first order approximation, consists of horizontal lines. Therefore, any traditional stereo algorithm can be applied to multiperspective panoramas with little modification. In this paper, we describe two reconstruction algorithms. The first is a cylinder sweep algorithm that uses a small number of resampled multiperspective panoramas to obtain dense 3D reconstruction. The second algorithm, in contrast, uses a large number of multiperspective panoramas and takes advantage of the approximate horizontal epipolar geometry inherent in multiperspective panoramas. 
It comprises a novel and efficient 1D multibaseline matching technique, followed by tensor voting to extract the depth surface. Experiments show that our algorithms are capable of producing comparable high quality depth maps which can be used for applications such as view interpolation." ] }
1604.00400
2335873805
Evaluation of text summarization approaches has been mostly based on metrics that measure similarities of system-generated summaries with a set of human-written gold-standard summaries. The most widely used metric in summarization evaluation has been the ROUGE family. ROUGE solely relies on lexical overlaps between the terms and phrases in the sentences; therefore, in cases of terminology variation and paraphrasing, ROUGE is not as effective. Scientific article summarization is one such case that is different from general-domain summarization (e.g. newswire data). We provide an extensive analysis of ROUGE's effectiveness as an evaluation metric for scientific summarization; we show that, contrary to the common belief, ROUGE is not very reliable in evaluating scientific summaries. We furthermore show how different variants of ROUGE result in very different correlations with the manual Pyramid scores. Finally, we propose an alternative metric for summarization evaluation which is based on the content relevance between a system-generated summary and the corresponding human-written summaries. We call our metric SERA (Summarization Evaluation by Relevance Analysis). Unlike ROUGE, SERA consistently achieves high correlations with manual scores, which shows its effectiveness in the evaluation of scientific article summarization.
ROUGE @cite_3 assesses the content quality of a candidate summary with respect to a set of human gold summaries based on their lexical overlaps, and it consists of several variants. Since its introduction, ROUGE has been one of the most widely reported metrics in the summarization literature; its wide adoption is due to its high correlation with human assessment scores on the DUC datasets @cite_3 . However, later research has cast doubt on the accuracy of ROUGE against manual evaluations. Analyses of the DUC 2005 to 2007 data showed that while some systems achieve high ROUGE scores with respect to human summaries, the linguistic and responsiveness scores of those systems do not correspond to those high ROUGE scores.
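A minimal sketch of the lexical-overlap computation behind the ROUGE-N variant (simplified: whitespace tokenization and reference-pooled recall rather than the official jackknifing procedure):

```python
from collections import Counter

def ngram_counts(tokens, n):
    """Count the n-grams in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(candidate, references, n=1):
    """ROUGE-N recall: clipped n-gram matches between candidate and
    references, divided by the total number of reference n-grams."""
    cand = ngram_counts(candidate.lower().split(), n)
    matches = total = 0
    for ref in references:
        ref_counts = ngram_counts(ref.lower().split(), n)
        total += sum(ref_counts.values())
        matches += sum(min(count, cand[gram])
                       for gram, count in ref_counts.items())
    return matches / total if total else 0.0
```

Because the score counts only exact surface matches, a paraphrase that shares no n-grams with the references scores zero, which is exactly the weakness discussed above for scientific summaries.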
{ "cite_N": [ "@cite_3" ], "mid": [ "2154652894" ], "abstract": [ "ROUGE stands for Recall-Oriented Understudy for Gisting Evaluation. It includes measures to automatically determine the quality of a summary by comparing it to other (ideal) summaries created by humans. The measures count the number of overlapping units such as n-gram, word sequences, and word pairs between the computer-generated summary to be evaluated and the ideal summaries created by humans. This paper introduces four different ROUGE measures: ROUGE-N, ROUGE-L, ROUGE-W, and ROUGE-S included in the ROUGE summarization evaluation package and their evaluations. Three of them have been used in the Document Understanding Conference (DUC) 2004, a large-scale summarization evaluation sponsored by NIST." ] }
1604.00400
2335873805
Evaluation of text summarization approaches has been mostly based on metrics that measure similarities of system-generated summaries with a set of human-written gold-standard summaries. The most widely used metric in summarization evaluation has been the ROUGE family. ROUGE solely relies on lexical overlaps between the terms and phrases in the sentences; therefore, in cases of terminology variation and paraphrasing, ROUGE is not as effective. Scientific article summarization is one such case that is different from general-domain summarization (e.g. newswire data). We provide an extensive analysis of ROUGE's effectiveness as an evaluation metric for scientific summarization; we show that, contrary to the common belief, ROUGE is not very reliable in evaluating scientific summaries. We furthermore show how different variants of ROUGE result in very different correlations with the manual Pyramid scores. Finally, we propose an alternative metric for summarization evaluation which is based on the content relevance between a system-generated summary and the corresponding human-written summaries. We call our metric SERA (Summarization Evaluation by Relevance Analysis). Unlike ROUGE, SERA consistently achieves high correlations with manual scores, which shows its effectiveness in the evaluation of scientific article summarization.
Apart from content, other aspects of summarization, such as linguistic quality, have also been studied. One line of work evaluated a set of models based on syntactic features, language models, and entity coherence for assessing the linguistic quality of summaries. Machine translation evaluation metrics such as BLEU have also been compared and contrasted against ROUGE @cite_19 . Despite these works, when gold-standard summaries are available, ROUGE is still the most common evaluation metric used in published summarization research. Apart from ROUGE's initial good results on newswire data, the availability of its software and its efficient performance have further contributed to its popularity.
{ "cite_N": [ "@cite_19" ], "mid": [ "2251023345" ], "abstract": [ "We provide an analysis of current evaluation methodologies applied to summarization metrics and identify the following areas of concern: (1) movement away from evaluation by correlation with human assessment; (2) omission of important components of human assessment from evaluations, in addition to large numbers of metric variants; (3) absence of methods of significance testing improvements over a baseline. We outline an evaluation methodology that overcomes all such challenges, providing the first method of significance testing suitable for evaluation of summarization metrics. Our evaluation reveals for the first time which metric variants significantly outperform others, optimal metric variants distinct from current recommended best variants, as well as machine translation metric BLEU to have performance on-par with ROUGE for the purpose of evaluation of summarization systems. We subsequently replicate a recent large-scale evaluation that relied on, what we now know to be, suboptimal ROUGE variants revealing distinct conclusions about the relative performance of state-of-the-art summarization systems." ] }
1604.00052
2316966455
The tensor rank decomposition problem consists of recovering the unique set of parameters representing a robustly identifiable low-rank tensor when the coordinate representation of the tensor is presented as input. A condition number for this problem measuring the sensitivity of the parameters to an infinitesimal change to the tensor is introduced and analyzed. It is demonstrated that the absolute condition number coincides with the inverse of the least singular value of Terracini's matrix. Several basic properties of this condition number are investigated.
Robust identifiability of the parameters of a tensor rank decomposition was investigated by Bhaskara, Charikar, and Vijayaraghavan @cite_54 . It is a necessary technical requirement for the definition of the condition number. In addition, it is shown in @cite_54 how these robustness results relate to the accuracy with which the parameters of a tensor rank decomposition, i.e., the factor matrices, can be identified.
{ "cite_N": [ "@cite_54" ], "mid": [ "1782132064" ], "abstract": [ "We give a robust version of the celebrated result of Kruskal on the uniqueness of tensor decompositions: we prove that given a tensor whose decomposition satisfies a robust form of Kruskal's rank condition, it is possible to approximately recover the decomposition if the tensor is known up to a sufficiently small (inverse polynomial) error. Kruskal's theorem has found many applications in proving the identifiability of parameters for various latent variable models and mixture models such as Hidden Markov models, topic models etc. Our robust version immediately implies identifiability using only polynomially many samples in many of these settings. This polynomial identifiability is an essential first step towards efficient learning algorithms for these models. Recently, algorithms based on tensor decompositions have been used to estimate the parameters of various hidden variable models efficiently in special cases as long as they satisfy certain \"non-degeneracy\" properties. Our methods give a way to go beyond this non-degeneracy barrier, and establish polynomial identifiability of the parameters under much milder conditions. Given the importance of Kruskal's theorem in the tensor literature, we expect that this robust version will have several applications beyond the settings we explore in this work." ] }
1604.00052
2316966455
The tensor rank decomposition problem consists of recovering the unique set of parameters representing a robustly identifiable low-rank tensor when the coordinate representation of the tensor is presented as input. A condition number for this problem measuring the sensitivity of the parameters to an infinitesimal change to the tensor is introduced and analyzed. It is demonstrated that the absolute condition number coincides with the inverse of the least singular value of Terracini's matrix. Several basic properties of this condition number are investigated.
Perhaps the results closest in spirit to this work are the Cramér-Rao bounds (CRB) for the tensor rank decomposition problem that have been investigated in @cite_78 @cite_2 @cite_67 @cite_75 . This CRB measures the stability of a tensor rank decomposition in a statistical framework, wherein additive Gaussian noise is assumed to corrupt the factor matrices. The quantity of interest is the squared angular error between the true parameters and the estimated parameters @cite_75 . The Cramér-Rao lower bound (CRLB) for estimating the parameters @math of the tensor rank decomposition of @math is then defined @cite_75 as the inverse of the Fisher information matrix @math , where @math is as in thm_informal_theorem and @math is the variance of the Gaussian noise. However, this matrix is not invertible, so in @cite_75 it is suggested to alter the tensor rank decomposition problem by removing some of the parameters. This implies that the CRLB depends on the particular elimination of the variables that is chosen; hence, it is not an intrinsic measure of stability of the tensor rank decomposition problem as defined in this paper.
{ "cite_N": [ "@cite_67", "@cite_75", "@cite_78", "@cite_2" ], "mid": [ "2032665768", "", "2138996475", "2125305748" ], "abstract": [ "In this paper, a novel algorithm to blindly separate an instantaneous linear underdetermined mixture of nonstationary sources is proposed. It means that the number of sources exceeds the number of channels of the available data. The separation is based on the working assumption that the sources are piecewise stationary with a different variance in each block. It proceeds in two steps: 1) estimating the mixing matrix, and 2) computing the optimum beamformer in each block to maximize the signal-to-interference ratio of each separated signal with respect to the remaining signals. Estimating the mixing matrix is accomplished through a specialized tensor decomposition of the set of sample covariance matrices of the received mixture in each block. It utilizes optimum weighting, which allows statistically efficient (CRB attaining) estimation provided that the data obey the assumed Gaussian piecewise stationary model. In simulations, performance of the algorithm is successfully tested on blind separation of 16 speech signals from nine linear instantaneous mixtures of these signals.", "", "Unlike low-rank matrix decomposition, which is generically nonunique for rank greater than one, low-rank three-and higher dimensional array decomposition is unique, provided that the array rank is lower than a certain bound, and the correct number of components (equal to array rank) is sought in the decomposition. Parallel factor (PARAFAC) analysis is a common name for low-rank decomposition of higher dimensional arrays. This paper develops Cramer-Rao bound (CRB) results for low-rank decomposition of three- and four-dimensional (3-D and 4-D) arrays, illustrates the behavior of the resulting bounds, and compares alternating least squares algorithms that are commonly used to compute such decompositions with the respective CRBs. 
Simple-to-check necessary conditions for a unique low-rank decomposition are also provided.", "INDSCAL is a special case of the CANDECOMP-PARAFAC (CP) decomposition of three or more-way tensors, where two factor matrices are equal. This paper provides a stability analysis of INDSCAL that is done by deriving the Cramer-Rao lower bound (CRLB) on variance of an unbiased estimate of the tensor parameters from its noisy observation (the tensor plus an i.i.d. Gaussian random tensor). The existence of the bound reveals necessary conditions for the essential uniqueness of the INDSCAL decomposition. This is compared with previous results on CP. Next, analytical expressions for the inverse of the Hessian matrix, which is needed to compute the CRLB, are used in a damped Gaussian (Levenberg-Marquardt) algorithm, which gives a novel method for INDSCAL having a lower computational complexity." ] }
1604.00273
2213889503
In network management, when it comes to security breaches, human error constitutes a dominant factor. We present our tool topoS which automatically synthesizes low-level network configurations from high-level security goals. The automation and a feedback loop help to prevent human errors. Except for a last serialization step, topoS is formally verified with Isabelle HOL, which prevents implementation errors. In a case study, we demonstrate topoS by example. For the first time, the complete transition from high-level security goals to both firewall and SDN configurations is presented.
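A toy sketch of the synthesis idea behind a tool like topoS: turning a high-level policy of allowed communications into an ordered, default-deny rule list (an illustrative simplification, not topoS itself, which is formally verified in Isabelle/HOL):

```python
def synthesize_firewall(hosts, allowed):
    """Turn a high-level policy, given as a set of allowed
    (sender, receiver) pairs, into an ordered default-deny rule list."""
    rules = [f"allow {src} -> {dst}"
             for src in hosts for dst in hosts
             if src != dst and (src, dst) in allowed]
    rules.append("deny all")  # everything not explicitly allowed is dropped
    return rules
```

The default-deny final rule is what removes the human-error class of "forgot to block" mistakes: only explicitly stated security goals become allow rules.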
Fireman @cite_20 | Fireman @cite_20; HSA @cite_8; Anteater @cite_13; ConfigChecker @cite_19; VeriFlow @cite_28 | Xie @cite_16; Lopes @cite_21 | HSA @cite_8; Anteater @cite_13; ConfigChecker @cite_19 | one big switch @cite_23; Firmato @cite_5; FLIP @cite_2; FortNOX @cite_22; Merlin @cite_24; Kinetic @cite_25 | step | Firmato @cite_5; FLIP @cite_2; NetKAT @cite_12; step + | rcp @cite_29; OpenFlow @cite_17; Merlin @cite_24; optimized one big switch @cite_6; NetKAT @cite_12; VeriFlow @cite_28 | Iptables Semantics @cite_18
{ "cite_N": [ "@cite_12", "@cite_18", "@cite_22", "@cite_8", "@cite_28", "@cite_29", "@cite_21", "@cite_6", "@cite_24", "@cite_19", "@cite_23", "@cite_2", "@cite_5", "@cite_16", "@cite_13", "@cite_25", "@cite_20", "@cite_17" ], "mid": [ "2067738619", "614847882", "2137845741", "1882012874", "2122695394", "2095234341", "2472616939", "2106863923", "2021234005", "2138556012", "", "2157690960", "1860215963", "2140069682", "2115526539", "2163593754", "2120255160", "2147118406" ], "abstract": [ "High-level programming languages play a key role in a growing number of networking platforms, streamlining application development and enabling precise formal reasoning about network behavior. Unfortunately, current compilers only handle \"local\" programs that specify behavior in terms of hop-by-hop forwarding behavior, or modest extensions such as simple paths. To encode richer \"global\" behaviors, programmers must add extra state -- something that is tricky to get right and makes programs harder to write and maintain. Making matters worse, existing compilers can take tens of minutes to generate the forwarding state for the network, even on relatively small inputs. This forces programmers to waste time working around performance issues or even revert to using hardware-level APIs. This paper presents a new compiler for the NetKAT language that handles rich features including regular paths and virtual networks, and yet is several orders of magnitude faster than previous compilers. The compiler uses symbolic automata to calculate the extra state needed to implement \"global\" programs, and an intermediate representation based on binary decision diagrams to dramatically improve performance. 
We describe the design and implementation of three essential compiler stages: from virtual programs (which specify behavior in terms of virtual topologies) to global programs (which specify network-wide behavior in terms of physical topologies), from global programs to local programs (which specify behavior in terms of single-switch behavior), and from local programs to hardware-level forwarding tables. We present results from experiments on real-world benchmarks that quantify performance in terms of compilation time and forwarding table size.", "The security provided by a firewall for a computer network almost completely depends on the rules it enforces. For over a decade, it has been a well-known and unsolved problem that the quality of many firewall rule sets is insufficient. Therefore, there are many tools to analyze them. However, we found that none of the available tools could handle typical, real-world iptables rulesets. This is due to the complex chain model used by iptables, but also to the vast amount of possible match conditions that occur in real-world firewalls, many of which are not understood by academic and open source tools.", "Software-defined networks facilitate rapid and open innovation at the network control layer by providing a programmable network infrastructure for computing flow policies on demand. However, the dynamism of programmable networks also introduces new security challenges that demand innovative solutions. A critical challenge is efficient detection and reconciliation of potentially conflicting flow rules imposed by dynamic OpenFlow (OF) applications. To that end, we introduce FortNOX, a software extension that provides role-based authorization and security constraint enforcement for the NOX OpenFlow controller. 
FortNOX enables NOX to check flow rule contradictions in real time, and implements a novel analysis algorithm that is robust even in cases where an adversarial OF application attempts to strategically insert flow rules that would otherwise circumvent flow rules imposed by OF security applications. We demonstrate the utility of FortNOX through a prototype implementation and use it to examine performance and efficiency aspects of the proposed framework.", "Today's networks typically carry or deploy dozens of protocols and mechanisms simultaneously such as MPLS, NAT, ACLs and route redistribution. Even when individual protocols function correctly, failures can arise from the complex interactions of their aggregate, requiring network administrators to be masters of detail. Our goal is to automatically find an important class of failures, regardless of the protocols running, for both operational and experimental networks. To this end we developed a general and protocol-agnostic framework, called Header Space Analysis (HSA). Our formalism allows us to statically check network specifications and configurations to identify an important class of failures such as Reachability Failures, Forwarding Loops and Traffic Isolation and Leakage problems. In HSA, protocol header fields are not first class entities; instead we look at the entire packet header as a concatenation of bits without any associated meaning. Each packet is a point in the 0,1 L space where L is the maximum length of a packet header, and networking boxes transform packets from one point in the space to another point or set of points (multicast). We created a library of tools, called Hassel, to implement our framework, and used it to analyze a variety of networks and protocols. Hassel was used to analyze the Stanford University backbone network, and found all the forwarding loops in less than 10 minutes, and verified reachability constraints between two subnets in 13 seconds. 
It also found a large and complex loop in an experimental loose source routing protocol in 4 minutes.", "Networks are complex and prone to bugs. Existing tools that check configuration files and data-plane state operate offline at timescales of seconds to hours, and cannot detect or prevent bugs as they arise. Is it possible to check network-wide invariants in real time, as the network state evolves? The key challenge here is to achieve extremely low latency during the checks so that network performance is not affected. In this paper, we present a preliminary design, VeriFlow, which suggests that this goal is achievable. VeriFlow is a layer between a software-defined networking controller and network devices that checks for network-wide invariant violations dynamically as each forwarding rule is inserted. Based on an implementation using a Mininet OpenFlow network and Route Views trace data, we find that VeriFlow can perform rigorous checking within hundreds of microseconds per rule insertion.", "The routers in an Autonomous System (AS) must distribute the information they learn about how to reach external destinations. Unfortunately, today's internal Border Gateway Protocol (iBGP) architectures have serious problems: a \"full mesh\" iBGP configuration does not scale to large networks and \"route reflection\" can introduce problems such as protocol oscillations and persistent loops. Instead, we argue that a Routing Control Platform (RCP) should collect information about external destinations and internal topology and select the BGP routes for each router in an AS. RCP is a logically-centralized platform, separate from the IP forwarding plane, that performs route selection on behalf of routers and communicates selected routes to the routers using the unmodified iBGP protocol. RCP provides scalability without sacrificing correctness. In this paper, we present the design and implementation of an RCP prototype on commodity hardware. 
Using traces of BGP and internal routing data from a Tier-1 backbone, we demonstrate that RCP is fast and reliable enough to drive the BGP routing decisions for a large network. We show that RCP assigns routes correctly, even when the functionality is replicated and distributed, and that networks using RCP can expect comparable convergence delays to those using today's iBGP architectures.", "The fastest tools for network reachability queries use adhoc algorithms to compute all packets from a source S that can reach a destination D. This paper examines whether network reachability can be solved efficiently using existing verification tools. While most verification tools only compute reachability (“Can S reach D?”), we efficiently generalize them to compute all reachable packets. Using new and old benchmarks, we compare model checkers, SAT solvers and various Datalog implementations. The only existing verification method that worked competitively on all benchmarks in seconds was Datalog with a new composite Filter-Project operator and a Difference of Cubes representation. While Datalog is slightly slower than the Hassel C tool, it is far more flexible. We also present new results that more precisely characterize the computational complexity of network verification. This paper also provides a gentle introduction to program verification for the networking community.", "Software Defined Networks (SDNs) support diverse network policies by offering direct, network-wide control over how switches handle traffic. Unfortunately, many controller platforms force applications to grapple simultaneously with end-to-end connectivity constraints, routing policy, switch memory limits, and the hop-by-hop interactions between forwarding rules. 
We believe solutions to this complex problem should be factored in to three distinct parts: (1) high-level SDN applications should define their end-point connectivity policy on top of a \"one big switch\" abstraction; (2) a mid-level SDN infrastructure layer should decide on the hop-by-hop routing policy; and (3) a compiler should synthesize an effective set of forwarding rules that obey the user-defined policies and adhere to the resource constraints of the underlying hardware. In this paper, we define and implement our proposed architecture, present efficient rule-placement algorithms that distribute forwarding policies across general SDN networks while managing rule-space constraints, and show how to support dynamic, incremental update of policies. We evaluate the effectiveness of our algorithms analytically by providing complexity bounds on their running time and rule space, as well as empirically, using both synthetic benchmarks, and real-world firewall and routing policies.", "This paper presents Merlin, a new framework for managing resources in software-defined networks. With Merlin, administrators express high-level policies using programs in a declarative language. The language includes logical predicates to identify sets of packets, regular expressions to encode forwarding paths, and arithmetic formulas to specify bandwidth constraints. The Merlin compiler maps these policies into a constraint problem that determines bandwidth allocations using parameterizable heuristics. It then generates code that can be executed on the network elements to enforce the policies. To allow network tenants to dynamically adapt policies to their needs, Merlin provides mechanisms for delegating control of sub-policies and for verifying that modifications made to sub-policies do not violate global constraints. Experiments demonstrate the expressiveness and effectiveness of Merlin on real-world topologies and applications. 
Overall, Merlin simplifies network administration by providing high-level abstractions for specifying network policies that provision network resources.", "Recent studies show that configurations of network access control is one of the most complex and error prone network management tasks. For this reason, network misconfiguration becomes the main source for network unreachablility and vulnerability problems. In this paper, we present a novel approach that models the global end-to-end behavior of access control configurations of the entire network including routers, IPSec, firewalls, and NAT for unicast and multicast packets. Our model represents the network as a state machine where the packet header and location determines the state. The transitions in this model are determined by packet header information, packet location, and policy semantics for the devices being modeled. We encode the semantics of access control policies with Boolean functions using binary decision diagrams (BDDs). We then use computation tree logic (CTL) and symbolic model checking to investigate all future and past states of this packet in the network and verify network reachability and security requirements. Thus, our contributions in this work is the global encoding for network configurations that allows for general reachability and security property-based verification using CTL model checking. We have implemented our approach in a tool called ConfigChecker. While evaluating ConfigChecker, we modeled and verified network configurations with thousands of devices and millions of configuration rules, thus demonstrating the scalability of this approach.", "", "Multiple firewalls typically cooperate to provide security properties for a network, despite the fact that these firewalls are often spatially distributed and configured in isolation. Without a global view of the network configuration, such a system is ripe for misconfiguration, causing conflicts and major security vulnerabilities. 
We propose FLIP, a high-level firewall configuration policy language for traffic access control, to enforce security and ensure seamless configuration management. In FLIP, firewall security policies are defined as high-level service-oriented goals, which can be translated automatically into access control rules to be distributed to appropriate enforcement devices. FLIP guarantees that the rules generated will be conflict-free, both on individual firewall and between firewalls. We prove that the translation algorithm is both sound and complete. FLIP supports policy inheritance and customization features that enable defining a global firewall policy for large-scale enterprise network quickly and accurately. Through a case study, we argue that firewall policy management for large-scale networks is efficient and accurate using FLIP.", "In recent years, packet filtering firewalls have seen some impressive technological advances (e.g., stateful inspection, transparency, performance, etc.) and widespread deployment. In contrast, firewall and security management technology is lacking. We present Firmato, a firewall management toolkit, with the following distinguishing properties and components: (1) an entity relationship model containing, in a unified form, global knowledge of the security policy and of the network topology; (2) a model definition language, which we use as an interface to define an instance of the entity relationship model; (3) a model compiler translating the global knowledge of the model into firewall-specific configuration files; and (4) a graphical firewall rule illustrator. We demonstrate Firmato's capabilities on a realistic example, thus showing that firewall management can be done successfully at an appropriate level of abstraction. We implemented our toolkit to work with a commercially available firewall product. 
We believe that our approach is an important step towards streamlining the process of configuring and managing firewalls, especially in complex, multi firewall installations.", "The primary purpose of a network is to provide reachability between applications running on end hosts. In this paper, we describe how to compute the reachability a network provides from a snapshot of the configuration state from each of the routers. Our primary contribution is the precise definition of the potential reachability of a network and a substantial simplification of the problem through a unified modeling of packet filters and routing protocols. In the end, we reduce a complex, important practical problem to computing the transitive closure to set union and intersection operations on reachability set representations. We then extend our algorithm to model the influence of packet transformations (e.g., by NATs or ToS remapping) along the path. Our technique for static analysis of network reachability is valuable for verifying the intent of the network designer, troubleshooting reachability problems, and performing \"what-if\" analysis of failure scenarios.", "Diagnosing problems in networks is a time-consuming and error-prone process. Existing tools to assist operators primarily focus on analyzing control plane configuration. Configuration analysis is limited in that it cannot find bugs in router software, and is harder to generalize across protocols since it must model complex configuration languages and dynamic protocol behavior. This paper studies an alternate approach: diagnosing problems through static analysis of the data plane. This approach can catch bugs that are invisible at the level of configuration files, and simplifies unified analysis of a network across many protocols and implementations. We present Anteater, a tool for checking invariants in the data plane. 
Anteater translates high-level network invariants into boolean satisfiability problems (SAT), checks them against network state using a SAT solver, and reports counterexamples if violations have been found. Applied to a large university network, Anteater revealed 23 bugs, including forwarding loops and stale ACL rules, with only five false positives. Nine of these faults are being fixed by campus network operators.", "Network conditions are dynamic; unfortunately, current approaches to configuring networks. Network operators need tools to express how a network's data-plane behavior should respond to a wide range of events and changing conditions, ranging from unexpected failures to shifting traffic patterns to planned maintenance. Yet, to update the network configuration today, operators typically rely on a combination of manual intervention and ad hoc scripts. In this paper, we present Kinetic, a domain specific language and network control system that enables operators to control their networks dynamically in a concise, intuitive way. Kinetic also automatically verifies the correctness of these control programs with respect to user-specified temporal properties. Our user study of Kinetic with several hundred network operators demonstrates that Kinetic is intuitive and usable, and our performance evaluation shows that realistic Kinetic programs scale well with the number of policies and the size of the network.", "Security concerns are becoming increasingly critical in networked systems. Firewalls provide important defense for network security. However, misconfigurations in firewalls are very common and significantly weaken the desired security. This paper introduces FIREMAN, a static analysis toolkit for firewall modeling and analysis. 
By treating firewall configurations as specialized programs, FIREMAN applies static analysis techniques to check misconfigurations, such as policy violations, inconsistencies, and inefficiencies, in individual firewalls as well as among distributed firewalls. FIREMAN performs symbolic model checking of the firewall configurations for all possible IP packets and along all possible data paths. It is both sound and complete because of the finite state nature of firewall configurations. FIREMAN is implemented by modeling firewall rules using binary decision diagrams (BDDs), which have been used successfully in hardware verification and model checking. We have experimented with FIREMAN and used it to uncover several real misconfigurations in enterprise networks, some of which have been subsequently confirmed and corrected by the administrators of these networks.", "This whitepaper proposes OpenFlow: a way for researchers to run experimental protocols in the networks they use every day. OpenFlow is based on an Ethernet switch, with an internal flow-table, and a standardized interface to add and remove flow entries. Our goal is to encourage networking vendors to add OpenFlow to their switch products for deployment in college campus backbones and wiring closets. We believe that OpenFlow is a pragmatic compromise: on one hand, it allows researchers to run experiments on heterogeneous switches in a uniform way at line-rate and with high port-density; while on the other hand, vendors do not need to expose the internal workings of their switches. In addition to allowing researchers to evaluate their ideas in real-world traffic settings, OpenFlow could serve as a useful campus component in proposed large-scale testbeds like GENI. Two buildings at Stanford University will soon run OpenFlow networks, using commercial Ethernet switches and routers. 
We will work to encourage deployment at other schools, and we encourage you to consider deploying OpenFlow in your university network too." ] }
1604.00273
2213889503
In network management, when it comes to security breaches, human error constitutes a dominant factor. We present our tool topoS which automatically synthesizes low-level network configurations from high-level security goals. The automation and a feedback loop help to prevent human errors. Except for a last serialization step, topoS is formally verified with Isabelle/HOL, which prevents implementation errors. In a case study, we demonstrate topoS by example. For the first time, the complete transition from high-level security goals to both firewall and SDN configurations is presented.
Firmato @cite_5 is the work most closely related to topoS. It defines an entity-relationship model to structure network management and compiles firewall rules from it, as illustrated in Fig. . Firmato focuses on roles, which correspond to policy entities in our model. A role has positive capabilities and is related to other roles, which can be used to derive an access control matrix. Zones, gateway interfaces, and gateways define the network topology, which corresponds to the interface abstraction. As illustrated in Fig. , the abstraction layers identified in this work can also be identified in Firmato's model. The Host Groups, Role Groups, and Hosts definitions provide a mapping from policy entities to network entities, which is Firmato's approach to the naming problem. Similar to Firmato, but with more support for negative capabilities, FLIP @cite_2 is a high-level language with a focus on management (e.g., allow/deny HTTP). Essentially, both FLIP and Firmato extend the access control matrix horizontally by including layer-four port management, and traverse it vertically by serializing it to firewall rules.
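The role-to-rules compilation described above can be sketched in a few lines. This is a minimal illustration of the idea, not Firmato's actual model language; all role names, addresses, and the rule syntax below are invented for the example.

```python
# Sketch (not Firmato's model language): roles with positive
# capabilities are compiled into an explicit access control matrix,
# which is then serialized to simple default-deny firewall rules.
# All names and addresses are illustrative assumptions.

roles = {
    "web": {"members": {"10.0.1.1"}, "can_reach": {"db"}},
    "db":  {"members": {"10.0.2.1"}, "can_reach": set()},
    "adm": {"members": {"10.0.3.1"}, "can_reach": {"web", "db"}},
}

def access_matrix(roles):
    """Derive the set of allowed (src_host, dst_host) pairs from roles."""
    allowed = set()
    for role in roles.values():
        for target in role["can_reach"]:
            for s in role["members"]:
                for d in roles[target]["members"]:
                    allowed.add((s, d))
    return allowed

def serialize(allowed):
    """Traverse the matrix 'vertically': emit device-level allow rules."""
    rules = [f"allow {s} -> {d}" for s, d in sorted(allowed)]
    rules.append("deny all")  # default-deny closes the policy
    return rules

if __name__ == "__main__":
    for rule in serialize(access_matrix(roles)):
        print(rule)
```

FLIP's layer-four extension would amount to adding a port dimension to each matrix entry before serializing.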
{ "cite_N": [ "@cite_5", "@cite_2" ], "mid": [ "1860215963", "2157690960" ], "abstract": [ "In recent years, packet filtering firewalls have seen some impressive technological advances (e.g., stateful inspection, transparency, performance, etc.) and widespread deployment. In contrast, firewall and security management technology is lacking. We present Firmato, a firewall management toolkit, with the following distinguishing properties and components: (1) an entity relationship model containing, in a unified form, global knowledge of the security policy and of the network topology; (2) a model definition language, which we use as an interface to define an instance of the entity relationship model; (3) a model compiler translating the global knowledge of the model into firewall-specific configuration files; and (4) a graphical firewall rule illustrator. We demonstrate Firmato's capabilities on a realistic example, thus showing that firewall management can be done successfully at an appropriate level of abstraction. We implemented our toolkit to work with a commercially available firewall product. We believe that our approach is an important step towards streamlining the process of configuring and managing firewalls, especially in complex, multi firewall installations.", "Multiple firewalls typically cooperate to provide security properties for a network, despite the fact that these firewalls are often spatially distributed and configured in isolation. Without a global view of the network configuration, such a system is ripe for misconfiguration, causing conflicts and major security vulnerabilities. We propose FLIP, a high-level firewall configuration policy language for traffic access control, to enforce security and ensure seamless configuration management. In FLIP, firewall security policies are defined as high-level service-oriented goals, which can be translated automatically into access control rules to be distributed to appropriate enforcement devices. 
FLIP guarantees that the rules generated will be conflict-free, both on individual firewall and between firewalls. We prove that the translation algorithm is both sound and complete. FLIP supports policy inheritance and customization features that enable defining a global firewall policy for large-scale enterprise network quickly and accurately. Through a case study, we argue that firewall policy management for large-scale networks is efficient and accurate using FLIP." ] }
1604.00273
2213889503
In network management, when it comes to security breaches, human error constitutes a dominant factor. We present our tool topoS which automatically synthesizes low-level network configurations from high-level security goals. The automation and a feedback loop help to prevent human errors. Except for a last serialization step, topoS is formally verified with Isabelle/HOL, which prevents implementation errors. In a case study, we demonstrate topoS by example. For the first time, the complete transition from high-level security goals to both firewall and SDN configurations is presented.
As illustrated in Fig. , Fireman @cite_20 is a counterpart to Firmato: it verifies firewall rules against a global access policy. In addition, Fireman provides verification within the same horizontal layer (e.g., finding shadowed rules or inter-firewall conflicts, which do not affect the resulting end-to-end connectivity but are still most likely implementation errors). Abstracting from its concrete uses, one may call rcc @cite_3 the Fireman for BGP.
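The shadowed-rule check mentioned above is easy to illustrate. FIREMAN performs it symbolically with BDDs over full packet headers; the sketch below simplifies a rule's match condition to a contiguous address range, an assumption made only to keep the example small.

```python
# Simplified version of one FIREMAN check (FIREMAN itself uses BDDs):
# a rule is shadowed when the union of all earlier rules already covers
# every packet it matches, so it can never fire. Rules here match a
# contiguous integer address range — an illustrative assumption.

from typing import List, Tuple

Rule = Tuple[int, int, str]  # (lo, hi, action) over an address range

def shadowed(rules: List[Rule]) -> List[int]:
    """Return indices of rules fully covered by the rules before them."""
    out = []
    for i, (lo, hi, _) in enumerate(rules):
        remaining = set(range(lo, hi + 1))
        for plo, phi, _ in rules[:i]:
            remaining -= set(range(plo, phi + 1))
        if not remaining:  # nothing left for this rule to match
            out.append(i)
    return out

rules = [
    (0, 9, "deny"),
    (5, 15, "allow"),
    (6, 8, "allow"),  # shadowed: [6,8] is covered by [0,9] and [5,15]
]
print(shadowed(rules))  # → [2]
```

Note that shadowing is exactly the kind of fault that leaves end-to-end connectivity unchanged while signaling a likely misconfiguration.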
{ "cite_N": [ "@cite_3", "@cite_20" ], "mid": [ "1965343327", "2120255160" ], "abstract": [ "The Internet is composed of many independent autonomous systems (ASes) that exchange reachability information to destinations using the Border Gateway Protocol (BGP). Network operators in each AS configure BGP routers to control the routes that are learned, selected, and announced to other routers. Faults in BGP configuration can cause forwarding loops, packet loss, and unintended paths between hosts, each of which constitutes a failure of the Internet routing infrastructure. This paper describes the design and implementation of rcc, the router configuration checker, a tool that finds faults in BGP configurations using static analysis. rcc detects faults by checking constraints that are based on a high-level correctness specification. rcc detects two broad classes of faults: route validity faults, where routers may learn routes that do not correspond to usable paths, and path visibility faults, where routers may fail to learn routes for paths that exist in the network. rcc enables network operators to test and debug configurations before deploying them in an operational network, improving on the status quo where most faults are detected only during operation. rcc has been downloaded by more than sixty-five network operators to date, some of whom have shared their configurations with us. We analyze network-wide configurations from 17 different ASes to detect a wide variety of faults and use these findings to motivate improvements to the Internet routing infrastructure.", "Security concerns are becoming increasingly critical in networked systems. Firewalls provide important defense for network security. However, misconfigurations in firewalls are very common and significantly weaken the desired security. This paper introduces FIREMAN, a static analysis toolkit for firewall modeling and analysis. 
By treating firewall configurations as specialized programs, FIREMAN applies static analysis techniques to check misconfigurations, such as policy violations, inconsistencies, and inefficiencies, in individual firewalls as well as among distributed firewalls. FIREMAN performs symbolic model checking of the firewall configurations for all possible IP packets and along all possible data paths. It is both sound and complete because of the finite state nature of firewall configurations. FIREMAN is implemented by modeling firewall rules using binary decision diagrams (BDDs), which have been used successfully in hardware verification and model checking. We have experimented with FIREMAN and used it to uncover several real misconfigurations in enterprise networks, some of which have been subsequently confirmed and corrected by the administrators of these networks." ] }
1604.00273
2213889503
In network management, when it comes to security breaches, human error constitutes a dominant factor. We present our tool topoS which automatically synthesizes low-level network configurations from high-level security goals. The automation and a feedback loop help to prevent human errors. Except for a last serialization step, topoS is formally verified with Isabelle/HOL, which prevents implementation errors. In a case study, we demonstrate topoS by example. For the first time, the complete transition from high-level security goals to both firewall and SDN configurations is presented.
Header Space Analysis (HSA) @cite_8 , Anteater @cite_13 , and ConfigChecker @cite_19 verify several horizontal safety properties on the interface abstraction, such as the absence of forwarding loops. By analyzing reachability @cite_16 @cite_21 @cite_13 @cite_19 @cite_8 , horizontal consistency of the interface abstraction with an access control matrix can also be verified. Verification of incremental changes to the interface abstraction can be done in real time with VeriFlow @cite_28 , which can also prevent the installation of violating rules. These models of the interface abstraction have many commonalities: the network boxes in all models are stateless, and the network topology is a graph connecting the entities' interfaces. A function models packet traversal at a network box. These models can be considered a giant (extended) finite state machine (FSM), where the state of a packet is an (interface @math packet) pair and the network topology and forwarding function represent the state transition function @cite_21 @cite_30 . Anteater @cite_13 differs in that interface information is implicit and packet modification is represented by relations over packet histories.
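The FSM view described above can be made concrete: a state is an (interface, packet) pair, wires link egress to ingress interfaces, and a per-box forwarding function is the transition function, so reachability (and loop detection, via the visited set) reduces to graph search. The topology, boxes, and packet model below are illustrative assumptions, not the model of any one of the cited tools.

```python
# Sketch of the network-as-FSM model: state = (box, interface, packet).
# Links wire an egress interface to an ingress interface; forward()
# models the stateless per-box behavior. Everything here is an
# illustrative assumption (a host "a", a firewall "fw", a host "b").

from collections import deque

links = {("a", "out"): ("fw", "in"), ("fw", "out"): ("b", "in")}

def forward(box, iface, pkt):
    """Forwarding/filtering function of each stateless box."""
    if box == "fw" and pkt["dport"] != 80:
        return []                  # the firewall drops all but port 80
    return [(box, "out", pkt)]     # otherwise forward in -> out, unmodified

def reachable(src_box, pkt):
    """All (box, iface) locations the packet can reach from src_box."""
    seen = set()
    todo = deque([(src_box, "out", frozenset(pkt.items()))])
    while todo:
        box, iface, fpkt = todo.popleft()
        if (box, iface, fpkt) in seen:
            continue               # revisiting a state = forwarding loop
        seen.add((box, iface, fpkt))
        nxt = links.get((box, iface))
        if nxt is None:
            continue               # dangling interface: packet stops here
        nbox, niface = nxt
        for obox, oiface, opkt in forward(nbox, niface, dict(fpkt)):
            todo.append((obox, oiface, frozenset(opkt.items())))
    return {(b, i) for b, i, _ in seen}

print(("b", "out") in reachable("a", {"dport": 80}))  # → True
print(("b", "out") in reachable("a", {"dport": 22}))  # → False
```

Checking the computed reachability set against an access control matrix is precisely the horizontal consistency check mentioned above.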
{ "cite_N": [ "@cite_30", "@cite_8", "@cite_28", "@cite_21", "@cite_19", "@cite_16", "@cite_13" ], "mid": [ "95959045", "1882012874", "2122695394", "2472616939", "2138556012", "2140069682", "2115526539" ], "abstract": [ "Formal verification has seen much success in several domains of hardware and software design. For example, in hardware verification there has been much work in the verification of microprocessors (e.g. [1]) and memory systems (e.g. [2]). Similarly, software verification has seen success in device-drivers (e.g. [3]) and concurrent software (e.g. [4]). The area of network verification, which consists of both hardware and software components, has received relatively less attention. Traditionally, the focus in this domain has been on performance and security, with less emphasis on functional correctness. However, increasing complexity is resulting in increasing functional failures and thus prompting interest in verification of key correctness properties. This paper reviews the formal verification techniques that have been used here thus far, with the goal of understanding the characteristics of the problem domain that are helpful for each of the techniques, as well as those that pose specific challenges. Finally, it highlights some interesting research challenges that need to be addressed in this important emerging domain.", "Today's networks typically carry or deploy dozens of protocols and mechanisms simultaneously such as MPLS, NAT, ACLs and route redistribution. Even when individual protocols function correctly, failures can arise from the complex interactions of their aggregate, requiring network administrators to be masters of detail. Our goal is to automatically find an important class of failures, regardless of the protocols running, for both operational and experimental networks. To this end we developed a general and protocol-agnostic framework, called Header Space Analysis (HSA). Our formalism allows us to statically check network specifications and configurations to identify an important class of failures such as Reachability Failures, Forwarding Loops and Traffic Isolation and Leakage problems. In HSA, protocol header fields are not first class entities; instead we look at the entire packet header as a concatenation of bits without any associated meaning. Each packet is a point in the 0,1 L space where L is the maximum length of a packet header, and networking boxes transform packets from one point in the space to another point or set of points (multicast). We created a library of tools, called Hassel, to implement our framework, and used it to analyze a variety of networks and protocols. Hassel was used to analyze the Stanford University backbone network, and found all the forwarding loops in less than 10 minutes, and verified reachability constraints between two subnets in 13 seconds. It also found a large and complex loop in an experimental loose source routing protocol in 4 minutes.", "Networks are complex and prone to bugs. Existing tools that check configuration files and data-plane state operate offline at timescales of seconds to hours, and cannot detect or prevent bugs as they arise. Is it possible to check network-wide invariants in real time, as the network state evolves? The key challenge here is to achieve extremely low latency during the checks so that network performance is not affected. In this paper, we present a preliminary design, VeriFlow, which suggests that this goal is achievable. VeriFlow is a layer between a software-defined networking controller and network devices that checks for network-wide invariant violations dynamically as each forwarding rule is inserted. Based on an implementation using a Mininet OpenFlow network and Route Views trace data, we find that VeriFlow can perform rigorous checking within hundreds of microseconds per rule insertion.", "The fastest tools for network reachability queries use adhoc algorithms to compute all packets from a source S that can reach a destination D. This paper examines whether network reachability can be solved efficiently using existing verification tools. While most verification tools only compute reachability (“Can S reach D?”), we efficiently generalize them to compute all reachable packets. Using new and old benchmarks, we compare model checkers, SAT solvers and various Datalog implementations. The only existing verification method that worked competitively on all benchmarks in seconds was Datalog with a new composite Filter-Project operator and a Difference of Cubes representation. While Datalog is slightly slower than the Hassel C tool, it is far more flexible. We also present new results that more precisely characterize the computational complexity of network verification. This paper also provides a gentle introduction to program verification for the networking community.", "Recent studies show that configurations of network access control is one of the most complex and error prone network management tasks. For this reason, network misconfiguration becomes the main source for network unreachablility and vulnerability problems. In this paper, we present a novel approach that models the global end-to-end behavior of access control configurations of the entire network including routers, IPSec, firewalls, and NAT for unicast and multicast packets. Our model represents the network as a state machine where the packet header and location determines the state. The transitions in this model are determined by packet header information, packet location, and policy semantics for the devices being modeled. We encode the semantics of access control policies with Boolean functions using binary decision diagrams (BDDs). We then use computation tree logic (CTL) and symbolic model checking to investigate all future and past states of this packet in the network and verify network reachability and security requirements. Thus, our contributions in this work is the global encoding for network configurations that allows for general reachability and security property-based verification using CTL model checking. We have implemented our approach in a tool called ConfigChecker. While evaluating ConfigChecker, we modeled and verified network configurations with thousands of devices and millions of configuration rules, thus demonstrating the scalability of this approach.", "The primary purpose of a network is to provide reachability between applications running on end hosts. In this paper, we describe how to compute the reachability a network provides from a snapshot of the configuration state from each of the routers. Our primary contribution is the precise definition of the potential reachability of a network and a substantial simplification of the problem through a unified modeling of packet filters and routing protocols. In the end, we reduce a complex, important practical problem to computing the transitive closure to set union and intersection operations on reachability set representations. We then extend our algorithm to model the influence of packet transformations (e.g., by NATs or ToS remapping) along the path. Our technique for static analysis of network reachability is valuable for verifying the intent of the network designer, troubleshooting reachability problems, and performing \"what-if\" analysis of failure scenarios.", "Diagnosing problems in networks is a time-consuming and error-prone process. Existing tools to assist operators primarily focus on analyzing control plane configuration. Configuration analysis is limited in that it cannot find bugs in router software, and is harder to generalize across protocols since it must model complex configuration languages and dynamic protocol behavior. This paper studies an alternate approach: diagnosing problems through static analysis of the data plane. This approach can catch bugs that are invisible at the level of configuration files, and simplifies unified analysis of a network across many protocols and implementations. We present Anteater, a tool for checking invariants in the data plane. Anteater translates high-level network invariants into boolean satisfiability problems (SAT), checks them against network state using a SAT solver, and reports counterexamples if violations have been found. Applied to a large university network, Anteater revealed 23 bugs, including forwarding loops and stale ACL rules, with only five false positives. Nine of these faults are being fixed by campus network operators." ] }
1604.00273
2213889503
In network management, when it comes to security breaches, human error constitutes a dominant factor. We present our tool topoS which automatically synthesizes low-level network configurations from high-level security goals. The automation and a feedback loop help to prevent human errors. Except for a last serialization step, topoS is formally verified with Isabelle/HOL, which prevents implementation errors. In a case study, we demonstrate topoS by example. For the first time, the complete transition from high-level security goals to both firewall and SDN configurations is presented.
Most analysis tools make simplifying assumptions about the underlying network boxes. Diekmann et al. @cite_18 present a simplification of iptables firewalls that makes complex real-world firewalls accessible to tools relying on such simplifying assumptions.
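One simple step in such a simplification can be sketched as follows: dropping rules that are shadowed by an earlier rule matching a superset of their packets. This is only an illustrative fragment under assumed inputs (first-match semantics, source-prefix matches only), not the cited tool's algorithm.

```python
# Illustrative sketch, not the cited tool's algorithm: drop "shadowed"
# rules, i.e. rules whose match set is contained in an earlier rule's
# match set under first-match semantics. Rules here are simplified to
# (source prefix, action) pairs.
import ipaddress

def simplify(rules):
    """Keep only rules that some packet can actually reach."""
    kept = []
    for prefix, action in rules:
        net = ipaddress.ip_network(prefix)
        # A rule is unreachable if an earlier kept rule covers all its packets.
        if not any(net.subnet_of(prev) for prev, _ in kept):
            kept.append((net, action))
    return [(str(n), a) for n, a in kept]

rules = [
    ("10.0.0.0/8", "DROP"),
    ("10.1.0.0/16", "ACCEPT"),   # shadowed by the first rule: never matches
    ("192.168.0.0/16", "ACCEPT"),
]
print(simplify(rules))  # -> [('10.0.0.0/8', 'DROP'), ('192.168.0.0/16', 'ACCEPT')]
```

Real iptables rulesets additionally involve chains, jumps, and many match kinds, which is precisely why such preprocessing is non-trivial in practice.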
{ "cite_N": [ "@cite_18" ], "mid": [ "614847882" ], "abstract": [ "The security provided by a firewall for a computer network almost completely depends on the rules it enforces. For over a decade, it has been a well-known and unsolved problem that the quality of many firewall rule sets is insufficient. Therefore, there are many tools to analyze them. However, we found that none of the available tools could handle typical, real-world iptables rulesets. This is due to the complex chain model used by iptables, but also to the vast amount of possible match conditions that occur in real-world firewalls, many of which are not understood by academic and open source tools." ] }
1604.00273
2213889503
In network management, when it comes to security breaches, human error constitutes a dominant factor. We present our tool topoS which automatically synthesizes low-level network configurations from high-level security goals. The automation and a feedback loop help to prevent human errors. Except for a last serialization step, topoS is formally verified with Isabelle/HOL, which prevents implementation errors. In a case study, we demonstrate topoS by example. For the first time, the complete transition from high-level security goals to both firewall and SDN configurations is presented.
NetKAT @cite_12 is an SDN programming language with well-defined semantics. It features an efficient compiler that translates local, global, and virtual programs to flow table entries.
{ "cite_N": [ "@cite_12" ], "mid": [ "2067738619" ], "abstract": [ "High-level programming languages play a key role in a growing number of networking platforms, streamlining application development and enabling precise formal reasoning about network behavior. Unfortunately, current compilers only handle \"local\" programs that specify behavior in terms of hop-by-hop forwarding behavior, or modest extensions such as simple paths. To encode richer \"global\" behaviors, programmers must add extra state -- something that is tricky to get right and makes programs harder to write and maintain. Making matters worse, existing compilers can take tens of minutes to generate the forwarding state for the network, even on relatively small inputs. This forces programmers to waste time working around performance issues or even revert to using hardware-level APIs. This paper presents a new compiler for the NetKAT language that handles rich features including regular paths and virtual networks, and yet is several orders of magnitude faster than previous compilers. The compiler uses symbolic automata to calculate the extra state needed to implement \"global\" programs, and an intermediate representation based on binary decision diagrams to dramatically improve performance. We describe the design and implementation of three essential compiler stages: from virtual programs (which specify behavior in terms of virtual topologies) to global programs (which specify network-wide behavior in terms of physical topologies), from global programs to local programs (which specify behavior in terms of single-switch behavior), and from local programs to hardware-level forwarding tables. We present results from experiments on real-world benchmarks that quantify performance in terms of compilation time and forwarding table size." ] }
1604.00273
2213889503
In network management, when it comes to security breaches, human error constitutes a dominant factor. We present our tool topoS which automatically synthesizes low-level network configurations from high-level security goals. The automation and a feedback loop help to prevent human errors. Except for a last serialization step, topoS is formally verified with Isabelle/HOL, which prevents implementation errors. In a case study, we demonstrate topoS by example. For the first time, the complete transition from high-level security goals to both firewall and SDN configurations is presented.
Craven et al. @cite_0 present a generalized (not network-specific) process to translate access control policies, enhanced with several aspects, into enforceable device-specific policies; the implementation requires a model repository of box semantics and their interplay. Pahl presents a data-centric, network-specific approach for managing and implementing such a repository, with a further focus on (IoT) things @cite_9 . FortNOX @cite_22 horizontally enhances the access control abstraction by assuring that rules installed by security apps are not overwritten by other apps. Technically, it hooks into the translation of the access control abstraction to the interface abstraction. Kinetic @cite_25 is an SDN language which lifts static policies (as constructed by ) to dynamic policies. To accomplish this, an administrator defines a simple FSM which dynamically (triggered by network events) switches between static policies. In addition, the FSM can be verified with a model checker. Other features are added horizontally to the interface abstraction: a routing policy allows specifying the paths of network traffic @cite_6 . Merlin @cite_24 additionally supports bandwidth assignments and network function chaining. Both translate a global policy into local enforcement, and Merlin provides a feature-rich language for interface abstraction policies.
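The Kinetic idea of an FSM over static policies can be sketched in a few lines: states name static policies, network events drive transitions. All state, event, and policy names below are invented for illustration.

```python
# Toy sketch of a Kinetic-style dynamic policy: an FSM whose states are
# static policies and whose transitions are network events. Names are
# assumptions made for illustration, not taken from Kinetic itself.

class DynamicPolicy:
    def __init__(self):
        self.state = "quarantine"
        # (current state, event) -> next state
        self.transitions = {
            ("quarantine", "authenticated"): "allow",
            ("allow", "infected"): "quarantine",
        }
        # Each FSM state maps to a static policy (here: allowed destinations).
        self.static_policies = {
            "quarantine": {"portal"},
            "allow": {"portal", "internet", "intranet"},
        }

    def on_event(self, event):
        """Switch the active static policy when a known event fires."""
        self.state = self.transitions.get((self.state, event), self.state)

    def permits(self, dst):
        return dst in self.static_policies[self.state]

p = DynamicPolicy()
print(p.permits("internet"))   # -> False (still quarantined)
p.on_event("authenticated")
print(p.permits("internet"))   # -> True
```

Because the FSM is small and explicit, temporal properties (e.g. "an infected host never reaches the intranet") can be handed to a model checker, which is exactly the verification angle Kinetic exploits.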
{ "cite_N": [ "@cite_22", "@cite_9", "@cite_6", "@cite_0", "@cite_24", "@cite_25" ], "mid": [ "2137845741", "1591487020", "2106863923", "2539478360", "2021234005", "2163593754" ], "abstract": [ "Software-defined networks facilitate rapid and open innovation at the network control layer by providing a programmable network infrastructure for computing flow policies on demand. However, the dynamism of programmable networks also introduces new security challenges that demand innovative solutions. A critical challenge is efficient detection and reconciliation of potentially conflicting flow rules imposed by dynamic OpenFlow (OF) applications. To that end, we introduce FortNOX, a software extension that provides role-based authorization and security constraint enforcement for the NOX OpenFlow controller. FortNOX enables NOX to check flow rule contradictions in real time, and implements a novel analysis algorithm that is robust even in cases where an adversarial OF application attempts to strategically insert flow rules that would otherwise circumvent flow rules imposed by OF security applications. We demonstrate the utility of FortNOX through a prototype implementation and use it to examine performance and efficiency aspects of the proposed framework.", "With the Internet of Things, more and more devices become remotely manageable. The amount and heterogeneity of managed devices make the task of implementing management functionality challenging. Future Pervasive Computing scenarios require implementing a plethora of services to provide management functionality. With growing demand on services, reducing the emerging complexity becomes increasingly important. A simple-to-use programming model for implementing complex management scenarios is essential to enable developers to create the growing amount of required management software at high quality. The paper presents how data-centric mechanisms, as known from network management, can be utilized to create a service-oriented architecture (SOA) for management services. The resulting shift of complexity from access functionality towards data structures introduces new flexibility and facilitates the programming of management applications significantly. This is evaluated with a user study on the reference implementation.", "Software Defined Networks (SDNs) support diverse network policies by offering direct, network-wide control over how switches handle traffic. Unfortunately, many controller platforms force applications to grapple simultaneously with end-to-end connectivity constraints, routing policy, switch memory limits, and the hop-by-hop interactions between forwarding rules. We believe solutions to this complex problem should be factored in to three distinct parts: (1) high-level SDN applications should define their end-point connectivity policy on top of a \"one big switch\" abstraction; (2) a mid-level SDN infrastructure layer should decide on the hop-by-hop routing policy; and (3) a compiler should synthesize an effective set of forwarding rules that obey the user-defined policies and adhere to the resource constraints of the underlying hardware. In this paper, we define and implement our proposed architecture, present efficient rule-placement algorithms that distribute forwarding policies across general SDN networks while managing rule-space constraints, and show how to support dynamic, incremental update of policies. We evaluate the effectiveness of our algorithms analytically by providing complexity bounds on their running time and rule space, as well as empirically, using both synthetic benchmarks, and real-world firewall and routing policies.", "", "This paper presents Merlin, a new framework for managing resources in software-defined networks. With Merlin, administrators express high-level policies using programs in a declarative language. The language includes logical predicates to identify sets of packets, regular expressions to encode forwarding paths, and arithmetic formulas to specify bandwidth constraints. The Merlin compiler maps these policies into a constraint problem that determines bandwidth allocations using parameterizable heuristics. It then generates code that can be executed on the network elements to enforce the policies. To allow network tenants to dynamically adapt policies to their needs, Merlin provides mechanisms for delegating control of sub-policies and for verifying that modifications made to sub-policies do not violate global constraints. Experiments demonstrate the expressiveness and effectiveness of Merlin on real-world topologies and applications. Overall, Merlin simplifies network administration by providing high-level abstractions for specifying network policies that provision network resources.", "Network conditions are dynamic; unfortunately, current approaches to configuring networks. Network operators need tools to express how a network's data-plane behavior should respond to a wide range of events and changing conditions, ranging from unexpected failures to shifting traffic patterns to planned maintenance. Yet, to update the network configuration today, operators typically rely on a combination of manual intervention and ad hoc scripts. In this paper, we present Kinetic, a domain specific language and network control system that enables operators to control their networks dynamically in a concise, intuitive way. Kinetic also automatically verifies the correctness of these control programs with respect to user-specified temporal properties. Our user study of Kinetic with several hundred network operators demonstrates that Kinetic is intuitive and usable, and our performance evaluation shows that realistic Kinetic programs scale well with the number of policies and the size of the network." ] }
1603.09540
2951720633
Besides finding trends and unveiling typical patterns, modern information retrieval is increasingly more interested in the discovery of surprising information in textual datasets. In this work we focus on finding "unexpected links" in hyperlinked document corpora when documents are assigned to categories. To achieve this goal, we model the hyperlinks graph through node categories: the presence of an arc is fostered or discouraged by the categories of the head and the tail of the arc. Specifically, we determine a latent category matrix that explains common links. The matrix is built using a margin-based online learning algorithm (Passive-Aggressive), which makes us able to process graphs with @math links in less than @math minutes. We show that our method provides better accuracy than most existing text-based techniques, with higher efficiency and relying on a much smaller amount of information. It also provides higher precision than standard link prediction, especially at low recall levels; the two methods are in fact shown to be orthogonal to each other and can therefore be fruitfully combined.
One of the first papers to consider the problem in the context of text mining was @cite_27 . In that work, two supposedly similar web sites are compared (ideally, the web sites of two competitors). The authors first try to find a match between the pages of the two web sites, and then propose a measure of the unexpectedness of a term when comparing two otherwise similar pages. All measures are based on term (or document) frequencies; unexpected links are also dealt with, but in a rather simplistic manner (a link in one of the two web sites is considered "unexpected" if it is not contained in the other). Note that finding unexpected information is crucial in a number of contexts, because of the fundamental role played by serendipity in data mining (see, e.g., @cite_42 @cite_2 ).
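The idea of frequency-based term unexpectedness can be made concrete with a small sketch. The formula below is one plausible instantiation in the spirit of the approach just described (the cited work's exact measure differs): a term is unexpected on site A with respect to site B when it is relatively much more frequent on A than on B. The toy texts are invented for illustration.

```python
# Hedged sketch of a frequency-based unexpectedness measure; the exact
# formula and preprocessing in the cited work differ. Toy data only.
from collections import Counter

def term_freqs(text):
    """Relative term frequencies of a whitespace-tokenized text."""
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def unexpectedness(term, page_a, page_b):
    """How surprising is `term` on page A, given supposedly similar page B?"""
    fa = term_freqs(page_a).get(term, 0.0)
    fb = term_freqs(page_b).get(term, 0.0)
    if fa == 0.0:
        return 0.0
    return max(0.0, 1.0 - fb / fa)

site_a = "cloud backup cloud storage pricing support"
site_b = "storage pricing support contact"
# "cloud" never occurs on site B, so it is maximally unexpected:
print(unexpectedness("cloud", site_a, site_b))  # -> 1.0
```

The same scheme extends to links by treating the link targets of matched pages as the "terms" being compared, which is essentially the simplistic link variant mentioned above.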
{ "cite_N": [ "@cite_27", "@cite_42", "@cite_2" ], "mid": [ "2088750139", "2000560649", "1547244403" ], "abstract": [ "Ever since the beginning of the Web, finding useful information from the Web has been an important problem. Existing approaches include keyword-based search, wrapper-based information extraction, Web query and user preferences. These approaches essentially find information that matches the user's explicit specifications. This paper argues that this is insufficient. There is another type of information that is also of great interest, i.e., unexpected information, which is unanticipated by the user. Finding unexpected information is useful in many applications. For example, it is useful for a company to find unexpected information bout its competitors, e.g., unexpected services and products that its competitors offer. With this information, the company can learn from its competitors and or design counter measures to improve its competitiveness. Since the number of pages of a typical commercial site is very large and there are also many relevant sites (competitors), it is very difficult for a human user to view each page to discover the unexpected information. Automated assistance is needed. In this paper, we propose a number of methods to help the user find various types of unexpected information from his her competitors' Web sites. Experiment results show that these techniques are very useful in practice and also efficient.", "The idea of unsupervised learning from basic facts (axioms) or from data has fascinated researchers for decades. Knowledge discovery engines try to extract general inferences from facts or training data. Statistical methods take a more structured approach, attempting to quantify data by known and intuitively understood models. The problem of gleaning knowledge from existing data sources poses a significant paradigm shift from these traditional approaches. The size, noise, diversity, dimensionality, and distributed nature of typical data sets make even formal problem specification difficult. Moreover, you typically do not have control over data generation. This lack of control opens up a Pandora's box filled with issues such as overfitting, limited coverage, and missing incorrect data with high dimensionality. Once specified, solution techniques must deal with complexity, scalability (to meaningful data sizes), and presentation. This entire process is where data mining makes its transition from serendipity to science.", "In this paper we propose metrics unexpectedness and unexpectedness_r for measuring the serendipity of recommendation lists produced by recommender systems. Recommender systems have been evaluated in many ways. Although prediction quality is frequently measured by various accuracy metrics, recommender systems must be not only accurate but also useful. A few researchers have argued that the bottom-line measure of the success of a recommender system should be user satisfaction. The basic idea of our metrics is that unexpectedness is the distance between the results produced by the method to be evaluated and those produced by a primitive prediction method. Here, unexpectedness is a metric for a whole recommendation list, while unexpectedness_r is that taking into account the ranking in the list. From the viewpoints of both accuracy and serendipity, we evaluated the results obtained by three prediction methods in experimental studies on television program recommendations." ] }
1603.09540
2951720633
Besides finding trends and unveiling typical patterns, modern information retrieval is increasingly more interested in the discovery of surprising information in textual datasets. In this work we focus on finding "unexpected links" in hyperlinked document corpora when documents are assigned to categories. To achieve this goal, we model the hyperlinks graph through node categories: the presence of an arc is fostered or discouraged by the categories of the head and the tail of the arc. Specifically, we determine a latent category matrix that explains common links. The matrix is built using a margin-based online learning algorithm (Passive-Aggressive), which makes us able to process graphs with @math links in less than @math minutes. We show that our method provides better accuracy than most existing text-based techniques, with higher efficiency and relying on a much smaller amount of information. It also provides higher precision than standard link prediction, especially at low recall levels; the two methods are in fact shown to be orthogonal to each other and can therefore be fruitfully combined.
This unexpectedness measure is taken up in @cite_16 , where the aim is to find documents that are similar to a given set of samples and to order the results by their unexpectedness, also using the document structure to enhance the measures defined in @cite_27 . Finding outliers in web collections is also considered in @cite_35 , where again dissimilarity scores are computed from word and @math -gram frequencies.
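The n-gram-based outlier idea can be illustrated with a minimal sketch: build character n-gram profiles per document and flag the document most dissimilar from the rest. Jaccard distance on n-gram sets is used here for simplicity; the cited systems use richer word/n-gram hybrid scores, and the toy corpus is invented.

```python
# Minimal sketch of n-gram-based outlier detection in a small document
# collection. Distance measure and data are illustrative assumptions,
# not the cited systems' actual scoring.

def ngrams(text, n=3):
    """Set of character n-grams of a lowercased text."""
    text = text.lower()
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def jaccard_distance(a, b):
    union = len(a | b)
    return 1.0 - (len(a & b) / union if union else 0.0)

def outlier(docs, n=3):
    """Index of the document with the highest mean distance to the others."""
    profiles = [ngrams(d, n) for d in docs]
    scores = []
    for i, p in enumerate(profiles):
        others = [jaccard_distance(p, q) for j, q in enumerate(profiles) if j != i]
        scores.append(sum(others) / len(others))
    return max(range(len(docs)), key=scores.__getitem__)

docs = [
    "network routing policies and switches",
    "network switching and routing policy",
    "chocolate cake recipe with vanilla",
]
print(outlier(docs))  # -> 2
```

Character n-grams make the score robust to tokenization and spelling variation, which is one reason hybrid word/n-gram systems perform well on noisy web data.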
{ "cite_N": [ "@cite_35", "@cite_27", "@cite_16" ], "mid": [ "1497928239", "2088750139", "2035725794" ], "abstract": [ "Mining outliers from large datasets is like finding needles in a haystack. Even more challenging is sifting through the dynamic, unstructured, and ever-growing web data for outliers. This paper presents HyCOQ, which is a hybrid algorithm that draws from the power of n-gram-based and word-based systems. Experimental results obtained using embedded motifs without a dictionary show significant improvement over using a domain dictionary irrespective of the type of data used (words, n-grams, or hybrid). Also, there is remarkable improvement in recall with hybrid documents compared to using raw words and n-grams without a domain dictionary.", "Ever since the beginning of the Web, finding useful information from the Web has been an important problem. Existing approaches include keyword-based search, wrapper-based information extraction, Web query and user preferences. These approaches essentially find information that matches the user's explicit specifications. This paper argues that this is insufficient. There is another type of information that is also of great interest, i.e., unexpected information, which is unanticipated by the user. Finding unexpected information is useful in many applications. For example, it is useful for a company to find unexpected information bout its competitors, e.g., unexpected services and products that its competitors offer. With this information, the company can learn from its competitors and or design counter measures to improve its competitiveness. Since the number of pages of a typical commercial site is very large and there are also many relevant sites (competitors), it is very difficult for a human user to view each page to discover the unexpected information. Automated assistance is needed. In this paper, we propose a number of methods to help the user find various types of unexpected information from his her competitors' Web sites. Experiment results show that these techniques are very useful in practice and also efficient.", "Text mining is widely used to discover frequent patterns in large corpora of documents. Hence, many classical data mining techniques, that have been proven fruitful in the context of data stored in relational databases, are now successfully used in the context of textual data. Nevertheless, there are many situations where it is more valuable to discover unexpected information rather than frequent ones. In the context of technology watch for example, we may want to discover new trends in specific markets, or discover what competitors are planning in the near future, etc. This paper is related to that context of research. We have proposed several unexpectedness measures and implemented them in a prototype, called UnexpectedMiner, that can be used by watchers, in order to discover unexpected documents in large corpora of documents (patents, datasheets, advertisements, scientific papers, etc.). UnexpectedMiner is able to take into account the structure of documents during the discovery of unexpected information. Many experiments have been performed in order to validate our measures and show the interest of our system." ] }
1603.09540
2951720633
Besides finding trends and unveiling typical patterns, modern information retrieval is increasingly more interested in the discovery of surprising information in textual datasets. In this work we focus on finding "unexpected links" in hyperlinked document corpora when documents are assigned to categories. To achieve this goal, we model the hyperlinks graph through node categories: the presence of an arc is fostered or discouraged by the categories of the head and the tail of the arc. Specifically, we determine a latent category matrix that explains common links. The matrix is built using a margin-based online learning algorithm (Passive-Aggressive), which makes us able to process graphs with @math links in less than @math minutes. We show that our method provides better accuracy than most existing text-based techniques, with higher efficiency and relying on a much smaller amount of information. It also provides higher precision than standard link prediction, especially at low recall levels; the two methods are in fact shown to be orthogonal to each other and can therefore be fruitfully combined.
Some authors approach the closely related problem of determining lacking content (called content holes in @cite_38 ) rather than unexpected information, using Wikipedia as a knowledge base. A similar task is undertaken by @cite_3 , this time taking the dual approach of finding content holes in Wikipedia using the web as a source of information.
{ "cite_N": [ "@cite_38", "@cite_3" ], "mid": [ "2125129578", "1983361085" ], "abstract": [ "In community-type content such as blogs and SNSs, we call the user's unawareness of information as a \"content hole\" and the search for this information as a \"content hole search.\" A content hole search differs from similarity searching and has a variety of types. In this paper, we propose different types of content holes and define each type. We also propose an analysis of dialogue related to community-type content and introduce content hole search by using Wikipedia as an example.", "With the huge amount of data on the Web, looking for desired information can be a time consuming task. Wikipedia is a very helpful tool as it is the largest most popular general reference site on the internet. Most search engines rank Wikipedia pages among the top listed results. However, because many articles on Wikipedia are manually updated by users, there are several articles that lack information and need to be upgraded. Those lacking information can sometimes be found on the web. Uprooting this information from the web will involve a time consuming process of reading, analyzing and summarizing the information for the user. In order to support the user search process and help Wikipedia contributors in the updating process of articles, we propose a method of finding valuable complementary information on the web. Experiments showed that our method was quite effective in retrieving important complementary information from the web pages." ] }
1603.09540
2951720633
Besides finding trends and unveiling typical patterns, modern information retrieval is increasingly more interested in the discovery of surprising information in textual datasets. In this work we focus on finding "unexpected links" in hyperlinked document corpora when documents are assigned to categories. To achieve this goal, we model the hyperlinks graph through node categories: the presence of an arc is fostered or discouraged by the categories of the head and the tail of the arc. Specifically, we determine a latent category matrix that explains common links. The matrix is built using a margin-based online learning algorithm (Passive-Aggressive), which makes us able to process graphs with @math links in less than @math minutes. We show that our method provides better accuracy than most existing text-based techniques, with higher efficiency and relying on a much smaller amount of information. It also provides higher precision than standard link prediction, especially at low recall levels; the two methods are in fact shown to be orthogonal to each other and can therefore be fruitfully combined.
More recently, @cite_9 considers the problem of finding unexpected related terms using Wikipedia as a source, simultaneously taking into account the relation between terms and their centrality.
{ "cite_N": [ "@cite_9" ], "mid": [ "2066642028" ], "abstract": [ "Although many studies have addressed the problem of finding Web pages seeking relevant and popular information from a query, very few have focused on the discovery of unexpected information. This paper provides and evaluates methods for discovering unexpected information for a keyword query. For example, if the user inputs \"Michael Jackson,\" our system first discovers the unexpected related term \"karate\" and then returns the unexpected information \"Michael Jackson is good at karate.\" Discovering unexpected information is useful in many situations. For example, when a user is browsing a news article on the Web, unexpected information about a person associated with the article can pique the user's interest. If a user is sightseeing or driving, providing unexpected, additional information about a building or the region is also useful. Our approach collects terms related to a keyword query and evaluates the degree of unexpectedness of each related term for the query on the basis of (i) the relationships of coordinate terms of both the keyword query and related terms, and (ii) the degree of popularity of each related term. Experimental results show that considering these two factors are effective for discovering unexpected information." ] }
1603.09540
2951720633
Besides finding trends and unveiling typical patterns, modern information retrieval is increasingly interested in the discovery of surprising information in textual datasets. In this work we focus on finding "unexpected links" in hyperlinked document corpora when documents are assigned to categories. To achieve this goal, we model the hyperlinks graph through node categories: the presence of an arc is fostered or discouraged by the categories of the head and the tail of the arc. Specifically, we determine a latent category matrix that explains common links. The matrix is built using a margin-based online learning algorithm (Passive-Aggressive), which enables us to process graphs with @math links in less than @math minutes. We show that our method provides better accuracy than most existing text-based techniques, with higher efficiency and relying on a much smaller amount of information. It also provides higher precision than standard link prediction, especially at low recall levels; the two methods are in fact shown to be orthogonal to each other and can therefore be fruitfully combined.
An alternative way to approach the problem of finding unexpected links is by using link prediction @cite_34 : the expectedness of a link @math in a network @math is the likelihood of the creation of @math in @math . In fact, we will later show that state-of-the-art link prediction algorithms like @cite_40 are very good at evaluating the (un)expectedness of links. Nonetheless, it turns out that the signal obtained from the latent category matrix is even better and partly orthogonal to the one that comes from the graph alone, and combining the two techniques greatly improves the accuracy of both.
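As an illustration of how a graph-only signal can score link (un)expectedness, here is a minimal sketch using the common-neighbours link-prediction score: a link whose endpoints share many neighbours is "expected", so a low score flags a surprising link. This is an illustrative baseline, not the specific predictor used in the cited works; the toy graph and node names are made up.

```python
# Minimal sketch: scoring the (un)expectedness of a link from graph
# structure alone, via the common-neighbours link-prediction score.
# Illustrative baseline only, not the exact predictor of the cited work.

def common_neighbours(adj, u, v):
    """Number of nodes adjacent to both u and v in the adjacency dict."""
    return len(adj.get(u, set()) & adj.get(v, set()))

def unexpectedness(adj, u, v):
    """Higher when the link (u, v) is poorly explained by shared neighbours."""
    return 1.0 / (1.0 + common_neighbours(adj, u, v))

# Toy undirected graph: a triangle {a, b, c} plus a lone edge a-d.
adj = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b"},
    "d": {"a"},
}

# (a, b) is backed by the common neighbour c; (a, d) is not.
assert common_neighbours(adj, "a", "b") == 1
assert unexpectedness(adj, "a", "d") > unexpectedness(adj, "a", "b")
```

A real system would rank all observed links by this score and inspect the tail of the ranking; the later experiments combine such a graph signal with the category-based one.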
{ "cite_N": [ "@cite_40", "@cite_34" ], "mid": [ "2154454189", "1979104937" ], "abstract": [ "Abstract The Internet has become a rich and large repository of information about us as individuals. Anything from the links and text on a user’s homepage to the mailing lists the user subscribes to are reflections of social interactions a user has in the real world. In this paper we devise techniques and tools to mine this information in order to extract social networks and the exogenous factors underlying the networks’ structure. In an analysis of two data sets, from Stanford University and the Massachusetts Institute of Technology (MIT), we show that some factors are better indicators of social connections than others, and that these indicators vary between user populations. Our techniques provide potential applications in automatically inferring real world connections and discovering, labeling, and characterizing communities.", "Link prediction in complex networks has attracted increasing attention from both physical and computer science communities. The algorithms can be used to extract missing information, identify spurious interactions, evaluate network evolving mechanisms, and so on. This article summaries recent progress about link prediction algorithms, emphasizing on the contributions from physical perspectives and approaches, such as the random-walk-based methods and the maximum likelihood methods. We also introduce three typical applications: reconstruction of networks, evaluation of network evolving mechanism and classification of partially labeled networks. Finally, we introduce some applications and outline future challenges of link prediction algorithms." ] }
1603.09540
2951720633
Besides finding trends and unveiling typical patterns, modern information retrieval is increasingly interested in the discovery of surprising information in textual datasets. In this work we focus on finding "unexpected links" in hyperlinked document corpora when documents are assigned to categories. To achieve this goal, we model the hyperlinks graph through node categories: the presence of an arc is fostered or discouraged by the categories of the head and the tail of the arc. Specifically, we determine a latent category matrix that explains common links. The matrix is built using a margin-based online learning algorithm (Passive-Aggressive), which enables us to process graphs with @math links in less than @math minutes. We show that our method provides better accuracy than most existing text-based techniques, with higher efficiency and relying on a much smaller amount of information. It also provides higher precision than standard link prediction, especially at low recall levels; the two methods are in fact shown to be orthogonal to each other and can therefore be fruitfully combined.
Our basic idea -- explaining a graph according to some features its nodes exhibit -- is not new. It has been proposed as a graph model by several authors, and is often called a latent feature model. Features can be real-valued -- as in @cite_29 -- or binary, as in our case, where the set of nodes exhibiting a feature is sharp, not fuzzy. These models have been studied by different authors (such as @cite_24 ), either with known or unknown features.
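The margin-based learning of a latent category matrix can be sketched with a Passive-Aggressive-style update: the score of an arc u -> v is x_u^T W x_v, where x_u and x_v are binary category-indicator vectors, and W is adjusted only when an example violates the margin. This is a generic PA-I sketch under those assumptions, not the exact training procedure of the paper (negative sampling, regularisation, and hashing details are omitted); all dimensions and vectors below are illustrative.

```python
import numpy as np

# Hedged sketch of a Passive-Aggressive-style update for a latent category
# matrix W: the score of an arc u -> v is x_u^T W x_v, with x_u, x_v binary
# category-indicator vectors. Positive examples are observed arcs (y = +1),
# negatives are absent ones (y = -1). Generic PA-I recipe, not the exact paper.

def pa_update(W, x_u, x_v, y, C=1.0):
    """One Passive-Aggressive-I step on example ((u, v), y), y in {-1, +1}."""
    score = x_u @ W @ x_v
    loss = max(0.0, 1.0 - y * score)              # hinge loss with margin 1
    if loss > 0.0:
        grad = np.outer(x_u, x_v)                 # d(score)/dW
        tau = min(C, loss / (grad ** 2).sum())    # PA-I step size
        W += tau * y * grad
    return W

k = 3                                             # number of categories
W = np.zeros((k, k))
x_u = np.array([1.0, 0.0, 1.0])                   # tail node's categories
x_v = np.array([0.0, 1.0, 0.0])                   # head node's categories

W = pa_update(W, x_u, x_v, +1)                    # observed arc: push score up
assert x_u @ W @ x_v >= 1.0                       # margin satisfied afterwards
```

Because each update touches only the rows and columns of the active categories, a pass over the arcs is linear in the number of links, which is consistent with the throughput figures quoted in the abstract.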
{ "cite_N": [ "@cite_24", "@cite_29" ], "mid": [ "2158535911", "1996919757" ], "abstract": [ "As the availability and importance of relational data—such as the friendships summarized on a social networking website—increases, it becomes increasingly important to have good models for such data. The kinds of latent structure that have been considered for use in predicting links in such networks have been relatively limited. In particular, the machine learning community has focused on latent class models, adapting Bayesian nonparametric methods to jointly infer how many latent classes there are while learning which entities belong to each class. We pursue a similar approach with a richer kind of latent variable—latent features—using a Bayesian nonparametric approach to simultaneously infer the number of features at the same time we learn which entities have each feature. Our model combines these inferred features with known covariates in order to perform link prediction. We demonstrate that the greater expressiveness of this approach allows us to improve performance on three datasets.", "We discuss a statistical model of social network data derived from matrix representations and symmetry considerations. The model can include known predictor information in the form of a regression term, and can represent additional structure via sender-specific and receiver-specific latent factors. This approach allows for the graphical description of a social network via the latent factors of the nodes, and provides a framework for the prediction of missing links in network data." ] }
1603.09540
2951720633
Besides finding trends and unveiling typical patterns, modern information retrieval is increasingly interested in the discovery of surprising information in textual datasets. In this work we focus on finding "unexpected links" in hyperlinked document corpora when documents are assigned to categories. To achieve this goal, we model the hyperlinks graph through node categories: the presence of an arc is fostered or discouraged by the categories of the head and the tail of the arc. Specifically, we determine a latent category matrix that explains common links. The matrix is built using a margin-based online learning algorithm (Passive-Aggressive), which enables us to process graphs with @math links in less than @math minutes. We show that our method provides better accuracy than most existing text-based techniques, with higher efficiency and relying on a much smaller amount of information. It also provides higher precision than standard link prediction, especially at low recall levels; the two methods are in fact shown to be orthogonal to each other and can therefore be fruitfully combined.
The same problem -- explaining links through features of each node -- can be cast as a Latent Dirichlet Allocation (LDA) @cite_8 problem; usually, in this context, the features are the words contained in each document. For example, @cite_6 and @cite_21 build link prediction models based on LDA that consider both the links and the features of each node. However, the largest graphs considered in these works have about @math nodes (with @math possible features), and they do not report running times. @cite_18 developed an LDA approach explicitly tailored for "large graphs" --- but without any external feature information for nodes; the largest graph they considered has about @math nodes and @math links, for which they report a running time of @math minutes. The algorithm we propose, although simpler, requires 9 minutes to run on a graph three orders of magnitude larger (about @math nodes and @math links).
{ "cite_N": [ "@cite_18", "@cite_21", "@cite_6", "@cite_8" ], "mid": [ "2170246630", "2123549998", "2130978632", "1880262756" ], "abstract": [ "This paper introduces LDA-G, a scalable Bayesian approach to finding latent group structures in large real-world graph data. Existing Bayesian approaches for group discovery (such as Infinite Relational Models) have only been applied to small graphs with a couple of hundred nodes. LDA-G (short for Latent Dirichlet Allocation for Graphs) utilizes a well-known topic modeling algorithm to find latent group structure. Specifically, we modify Latent Dirichlet Allocation (LDA) to operate on graph data instead of text corpora. Our modifications reflect the differences between real-world graph data and text corpora (e.g., a node's neighbor count vs. a document's word count). In our empirical study, we apply LDA-G to several large graphs (with thousands of nodes) from PubMed (a scientific publication repository). We compare LDA-G's quantitative performance on link prediction with two existing approaches: one Bayesian (namely, Infinite Relational Model) and one non-Bayesian (namely, Cross-association). On average, LDA-G outperforms IRM by 15 and Cross-association by 25 (in terms of area under the ROC curve). Furthermore, we demonstrate that LDA-G can discover useful qualitative information.", "We develop the relational topic model (RTM), a model of documents and the links between them. For each pair of documents, the RTM models their link as a binary random variable that is conditioned on their contents. The model can be used to summarize a network of documents, predict links between them, and predict words within them. 
We derive efficient inference and learning algorithms based on variational methods and evaluate the predictive performance of the RTM for large networks of scientific abstracts and web documents.", "Given a large-scale linked document collection, such as a collection of blog posts or a research literature archive, there are two fundamental problems that have generated a lot of interest in the research community. One is to identify a set of high-level topics covered by the documents in the collection; the other is to uncover and analyze the social network of the authors of the documents. So far these problems have been viewed as separate problems and considered independently from each other. In this paper we argue that these two problems are in fact inter-dependent and should be addressed together. We develop a Bayesian hierarchical approach that performs topic modeling and author community discovery in one unified framework. The effectiveness of our model is demonstrated on two blog data sets in different domains and one research paper citation data from CiteSeer.", "We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model." ] }
1603.09540
2951720633
Besides finding trends and unveiling typical patterns, modern information retrieval is increasingly interested in the discovery of surprising information in textual datasets. In this work we focus on finding "unexpected links" in hyperlinked document corpora when documents are assigned to categories. To achieve this goal, we model the hyperlinks graph through node categories: the presence of an arc is fostered or discouraged by the categories of the head and the tail of the arc. Specifically, we determine a latent category matrix that explains common links. The matrix is built using a margin-based online learning algorithm (Passive-Aggressive), which enables us to process graphs with @math links in less than @math minutes. We show that our method provides better accuracy than most existing text-based techniques, with higher efficiency and relying on a much smaller amount of information. It also provides higher precision than standard link prediction, especially at low recall levels; the two methods are in fact shown to be orthogonal to each other and can therefore be fruitfully combined.
Interpreting links in a network as the result of features of each node has a solid empirical background. The simple phenomenon of homophily -- i.e., links between nodes sharing the same features -- has been widely studied in social networks @cite_25 and other complex systems @cite_4 . More complex behavior, where nodes with certain features tend to connect to nodes of another type, has also proven to be greatly beneficial in analyzing real social networks. Tendencies of this kind are called mixing patterns and are often described by a category-category matrix @cite_20 . For example, they appeared to be a crucial factor in tracking the spread of sexual diseases @cite_5 as well as in modelling the transmission of respiratory infections @cite_32 . For this reason, such matrices are also called "Who Acquires Infection From Whom" (WAIFW) matrices, and have been empirically assessed in the field through surveys @cite_22 and with wearable sensors @cite_33 .
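The empirically assessed category-category (mixing) matrices mentioned above can be sketched as a simple counting procedure: given a list of observed contacts and the category of each individual, tally contacts per pair of categories. This is a minimal illustration of the idea, assuming undirected contacts; the category names and contact list below are made up.

```python
from collections import defaultdict

# Minimal sketch of a mixing ("WAIFW"-style) category-category matrix:
# count contacts between category pairs from a contact list and a map
# from individuals to categories. All names and values are illustrative.

def mixing_matrix(contacts, category):
    """Return counts[c1][c2]: number of contacts between categories c1, c2."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in contacts:
        ca, cb = category[a], category[b]
        counts[ca][cb] += 1
        counts[cb][ca] += 1        # undirected contacts count both ways
    return counts

category = {"p1": "patient", "p2": "patient", "n1": "nurse", "c1": "caregiver"}
contacts = [("p1", "n1"), ("p2", "n1"), ("p1", "c1")]

m = mixing_matrix(contacts, category)
assert m["patient"]["nurse"] == 2      # patients met nurses twice
assert m["nurse"]["patient"] == 2      # symmetric by construction
```

In epidemiological practice the raw counts are then normalised by group sizes or observation time; the sketch keeps only the counting step.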
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_33", "@cite_32", "@cite_5", "@cite_25", "@cite_20" ], "mid": [ "2065998165", "2056279541", "2158222386", "2159301256", "1985212904", "2130354913", "" ], "abstract": [ "The fact that similarity breeds connections, the principle of homophily, has been well-studied in existing sociology literature. Several studies have observed this phenomenon by conducting surveys on human subjects. These studies have concluded that new ties are formed between similar individuals. This phenomenon has been used to explain several socio-psychological concepts such as, segregation, community development, social mobility, etc. However, due to the nature of these studies and limitations because of involvement of human subjects, conclusions from these studies are not easily extensible in online social media. Social media, which is becoming the infinite space for interactions, has exceeded all the expectations in terms of growth, for reasons beyond human mind. New ties are formed in social media in the same way that they emerge in real-world. However, given the differences between real world and online social media, do the same factors that govern the construction of new ties in real world also govern the construction of new ties in social media? In other words, does homophily exist in social media? In this article, we study this extremely significant question. We propose a systematic approach by studying three online social media sites, BlogCatalog, Last.fm, and LiveJournal and report our findings along with some interesting observations. 
The results indicate that the influence of interest-based homophily is not a very strong leading factor for constructing new ties specifically in the three social media sites with implications to strategic advertising, recommendations, and promoting applications at large.", "Background Until recently, mathematical models of person to person infectious diseases transmission had to make assumptions on transmissions enabled by personal contacts by estimating the so-called WAIFW-matrix. In order to better inform such estimates, a population based contact survey has been carried out in Belgium over the period March-May 2006. In contrast to other European surveys conducted simultaneously, each respondent recorded contacts over two days. Special attention was given to holiday periods, and respondents with large numbers of professional contacts.", "Background Nosocomial infections place a substantial burden on health care systems and represent one of the major issues in current public health, requiring notable efforts for its prevention. Understanding the dynamics of infection transmission in a hospital setting is essential for tailoring interventions and predicting the spread among individuals. Mathematical models need to be informed with accurate data on contacts among individuals. Methods and Findings We used wearable active Radio-Frequency Identification Devices (RFID) to detect face-to-face contacts among individuals with a spatial resolution of about 1.5 meters, and a time resolution of 20 seconds. The study was conducted in a general pediatrics hospital ward, during a one-week period, and included 119 participants, with 51 health care workers, 37 patients, and 31 caregivers. Nearly 16,000 contacts were recorded during the study period, with a median of approximately 20 contacts per participants per day. Overall, 25 of the contacts involved a ward assistant, 23 a nurse, 22 a patient, 22 a caregiver, and 8 a physician. 
The majority of contacts were of brief duration, but long and frequent contacts especially between patients and caregivers were also found. In the setting under study, caregivers do not represent a significant potential for infection spread to a large number of individuals, as their interactions mainly involve the corresponding patient. Nurses would deserve priority in prevention strategies due to their central role in the potential propagation paths of infections. Conclusions Our study shows the feasibility of accurate and reproducible measures of the pattern of contacts in a hospital setting. The obtained results are particularly useful for the study of the spread of respiratory infections, for monitoring critical patterns, and for setting up tailored prevention strategies. Proximity-sensing technology should be considered as a valuable tool for measuring such patterns and evaluating nosocomial prevention strategies in specific settings.", "Background Mathematical modelling of infectious diseases transmitted by the respiratory or close-contact route (e.g., pandemic influenza) is increasingly being used to determine the impact of possible interventions. Although mixing patterns are known to be crucial determinants for model outcome, researchers often rely on a priori contact assumptions with little or no empirical basis. We conducted a population-based prospective survey of mixing patterns in eight European countries using a common paper-diary methodology.", "OBJECTIVES: This study sought to define, among sexually transmitted disease (STD) clinic attendees, (1) patterns of sex partner selection, (2) relative risks for gonococcal or chlamydial infection associated with each mixing pattern, and (3) selected links and potential and actual bridge populations. METHODS: Mixing matrices were computed based on characteristics of the study participants and their partners. 
Risk of infection was determined in study participants with various types of partners, and odds ratios were used to estimate relative risk of infection for discordant vs concordant partnerships. RESULTS: Partnerships discordant in terms of race ethnicity, age, education, and number of partners were associated with significant risk for gonorrhea and chlamydial infection. In low-prevalence subpopulations, within-subpopulation mixing was associated with chlamydial infection, and direct links with high-prevalence subpopulations were associated with gonorrhea. CONCLUSIONS: Mixing patterns influence the ris...", "Similarity breeds connection. This principle—the homophily principle—structures network ties of every type, including marriage, friendship, work, advice, support, information transfer, exchange, comembership, and other types of relationship. The result is that people's personal networks are homogeneous with regard to many sociodemographic, behavioral, and intrapersonal characteristics. Homophily limits people's social worlds in a way that has powerful implications for the information they receive, the attitudes they form, and the interactions they experience. Homophily in race and ethnicity creates the strongest divides in our personal environments, with age, religion, education, occupation, and gender following in roughly that order. Geographic propinquity, families, organizations, and isomorphic positions in social systems all create contexts in which homophilous relations form. Ties between nonsimilar individuals also dissolve at a higher rate, which sets the stage for the formation of niches (localize...", "" ] }
1603.09405
2325716904
Neural network based approaches for sentence relation modeling automatically generate hidden matching features from raw sentence pairs. However, the quality of the matching feature representation may not be satisfactory due to complex semantic relations such as entailment or contradiction. To address this challenge, we propose a new deep neural network architecture that jointly leverages pre-trained word embeddings and auxiliary character embeddings to learn sentence meanings. The two kinds of word sequence representations are fed as inputs into a multi-layer bidirectional LSTM to learn enhanced sentence representations. After that, we construct matching features, followed by another temporal CNN, to learn high-level hidden matching feature representations. Experimental results demonstrate that our approach consistently outperforms the existing methods on standard evaluation datasets.
Existing neural sentence models mainly fall into two groups: convolutional neural networks (CNNs) and recurrent neural networks (RNNs). In regular 1D CNNs @cite_6 @cite_20 @cite_21 , a fixed-size window slides over time (successive words in the sequence) to extract local features of a sentence; these features are then pooled into a vector, usually by taking the maximum value in each dimension, for supervised learning. The convolutional unit, when combined with max-pooling, can act as a compositional operator with a local selection mechanism, as in the recursive autoencoder @cite_11 . However, this shallow architecture cannot effectively capture semantic relations between words that do not fall within one filter window. @cite_15 built deep convolutional models so that local features can mix at higher-level layers. However, deep convolutional models may result in worse performance @cite_21 .
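The sliding-window-plus-max-pooling scheme described above can be sketched as follows: one filter is applied to every window of w successive word vectors, and max-pooling over positions keeps the strongest response. A real model stacks many filters with a nonlinearity; this sketch keeps a single linear filter, and the word vectors are random placeholders.

```python
import numpy as np

# Sketch of the fixed-size-window + max-pooling scheme of 1D sentence CNNs:
# a window of w successive word vectors is mapped through one filter, and
# max-pooling over positions keeps the strongest response. One linear filter
# only; real models use many filters plus a nonlinearity.

def conv1d_maxpool(words, filt, w):
    """words: (n, d) word vectors; filt: (w*d,) filter; returns max response."""
    n, d = words.shape
    responses = [
        float(filt @ words[i : i + w].reshape(-1))   # local window feature
        for i in range(n - w + 1)
    ]
    return max(responses)                            # max-pooling over time

rng = np.random.default_rng(0)
words = rng.standard_normal((6, 4))                  # a 6-word toy "sentence"
filt = rng.standard_normal(8)                        # filter over w=2 words

feat = conv1d_maxpool(words, filt, w=2)
# The pooled feature dominates every single window response.
assert feat >= float(filt @ words[0:2].reshape(-1))
```

The sketch also makes the stated limitation visible: two words more than w positions apart never share a window, so their interaction can only be captured (if at all) by deeper layers.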
{ "cite_N": [ "@cite_21", "@cite_6", "@cite_15", "@cite_20", "@cite_11" ], "mid": [ "1832693441", "2158899491", "2120615054", "1753482797", "71795751" ], "abstract": [ "We report on a series of experiments with convolutional neural networks (CNN) trained on top of pre-trained word vectors for sentence-level classification tasks. We show that a simple CNN with little hyperparameter tuning and static vectors achieves excellent results on multiple benchmarks. Learning task-specific vectors through fine-tuning offers further gains in performance. We additionally propose a simple modification to the architecture to allow for the use of both task-specific and static vectors. The CNN models discussed herein improve upon the state of the art on 4 out of 7 tasks, which include sentiment analysis and question classification.", "We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements.", "The ability to accurately represent sentences is central to language understanding. We describe a convolutional architecture dubbed the Dynamic Convolutional Neural Network (DCNN) that we adopt for the semantic modelling of sentences. The network uses Dynamic k-Max Pooling, a global pooling operation over linear sequences. 
The network handles input sentences of varying length and induces a feature graph over the sentence that is capable of explicitly capturing short and long-range relations. The network does not rely on a parse tree and is easily applicable to any language. We test the DCNN in four experiments: small scale binary and multi-class sentiment prediction, six-way question classification and Twitter sentiment prediction by distant supervision. The network achieves excellent performance in the first three tasks and a greater than 25 error reduction in the last task with respect to the strongest baseline.", "We introduce a class of probabilistic continuous translation models called Recurrent Continuous Translation Models that are purely based on continuous representations for words, phrases and sentences and do not rely on alignments or phrasal translation units. The models have a generation and a conditioning aspect. The generation of the translation is modelled with a target Recurrent Language Model, whereas the conditioning on the source sentence is modelled with a Convolutional Sentence Model. Through various experiments, we show first that our models obtain a perplexity with respect to gold translations that is > 43 lower than that of stateof-the-art alignment-based translation models. Secondly, we show that they are remarkably sensitive to the word order, syntax, and meaning of the source sentence despite lacking alignments. Finally we show that they match a state-of-the-art system when rescoring n-best lists of translations.", "We introduce a novel machine learning framework based on recursive autoencoders for sentence-level prediction of sentiment label distributions. Our method learns vector space representations for multi-word phrases. In sentiment prediction tasks these representations outperform other state-of-the-art approaches on commonly used datasets, such as movie reviews, without using any pre-defined sentiment lexica or polarity shifting rules. 
We also evaluate the model's ability to predict sentiment distributions on a new dataset based on confessions from the experience project. The dataset consists of personal user stories annotated with multiple labels which, when aggregated, form a multinomial distribution that captures emotional reactions. Our algorithm can more accurately predict distributions over such labels compared to several competitive baselines." ] }
1603.09405
2325716904
Neural network based approaches for sentence relation modeling automatically generate hidden matching features from raw sentence pairs. However, the quality of the matching feature representation may not be satisfactory due to complex semantic relations such as entailment or contradiction. To address this challenge, we propose a new deep neural network architecture that jointly leverages pre-trained word embeddings and auxiliary character embeddings to learn sentence meanings. The two kinds of word sequence representations are fed as inputs into a multi-layer bidirectional LSTM to learn enhanced sentence representations. After that, we construct matching features, followed by another temporal CNN, to learn high-level hidden matching feature representations. Experimental results demonstrate that our approach consistently outperforms the existing methods on standard evaluation datasets.
On the other hand, RNNs can take advantage of parse or dependency trees that encode sentence structure @cite_11 @cite_10 . @cite_13 used a dependency-tree recursive neural network to map text descriptions to quiz answers. Each node in the tree is represented as a vector; information is propagated recursively along the tree by an elaborate semantic composition. One major drawback of RNNs is the long propagation path of information from leaf nodes. As gradients may vanish when propagated through a deep path, such long dependencies bury illuminating information under a complicated neural architecture, making training difficult. To address this issue, @cite_1 proposed Tree-Structured Long Short-Term Memory networks (Tree-LSTMs). This motivates us to investigate a multi-layer bidirectional LSTM that directly models sentence meanings without parsing for the RTE task.
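The recursive propagation along a tree can be sketched in a few lines: each internal node combines its children's vectors with a shared composition function, so information from a deep leaf must pass through every ancestor, which is exactly the long propagation path discussed above. The composition (elementwise mean plus tanh), the embeddings, and the tree are purely illustrative, not the composition used in the cited models.

```python
import math

# Tiny sketch of recursive composition along a parse tree, as in recursive
# neural models: each internal node combines its children with a shared
# composition function. The mean + tanh composition here is illustrative.

def compose(left, right):
    return [math.tanh(0.5 * (l + r)) for l, r in zip(left, right)]

def encode(tree, embed):
    """tree: a word string or a pair (left_subtree, right_subtree)."""
    if isinstance(tree, str):
        return embed[tree]               # leaf: look up the word vector
    left, right = tree
    return compose(encode(left, embed), encode(right, embed))

embed = {"neural": [1.0, 0.0], "nets": [0.0, 1.0], "rock": [1.0, 1.0]}
tree = (("neural", "nets"), "rock")      # ((neural nets) rock)

vec = encode(tree, embed)
assert len(vec) == 2 and all(-1.0 < x < 1.0 for x in vec)
```

Each extra tree level adds one tanh to the path from a leaf to the root, which is the squashing that makes gradients vanish on deep paths and motivates the Tree-LSTM's gated cells.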
{ "cite_N": [ "@cite_13", "@cite_1", "@cite_10", "@cite_11" ], "mid": [ "2130237711", "2104246439", "", "71795751" ], "abstract": [ "Text classification methods for tasks like factoid question answering typically use manually defined string matching rules or bag of words representations. These methods are ineective when question text contains very few individual words (e.g., named entities) that are indicative of the answer. We introduce a recursive neural network (rnn) model that can reason over such input by modeling textual compositionality. We apply our model, qanta, to a dataset of questions from a trivia competition called quiz bowl. Unlike previous rnn models, qanta learns word and phrase-level representations that combine across sentences to reason about entities. The model outperforms multiple baselines and, when combined with information retrieval methods, rivals the best human players.", "Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).", "", "We introduce a novel machine learning framework based on recursive autoencoders for sentence-level prediction of sentiment label distributions. Our method learns vector space representations for multi-word phrases. 
In sentiment prediction tasks these representations outperform other state-of-the-art approaches on commonly used datasets, such as movie reviews, without using any pre-defined sentiment lexica or polarity shifting rules. We also evaluate the model's ability to predict sentiment distributions on a new dataset based on confessions from the experience project. The dataset consists of personal user stories annotated with multiple labels which, when aggregated, form a multinomial distribution that captures emotional reactions. Our algorithm can more accurately predict distributions over such labels compared to several competitive baselines." ] }
1603.09386
2036006510
Increasing network lifetime by reducing energy consumption across the network is one of the major concerns while designing routing protocols for Mobile Ad-Hoc Networks. In this paper, we investigate the main reasons that lead to energy depletion and we introduce appropriate routing metrics in the routing decision scheme to mitigate their effect and increase the network lifetime. For our routing scheme, we take into consideration multiple layer parameters, such as MAC queue utilization, node degree and residual energy. We integrate our multi-metric routing scheme into OLSR, a standard MANET proactive routing protocol. We evaluate via simulations in NS3 the protocol modifications under a range of different static and mobile scenarios. The main observations are that in static and low mobility scenarios our modified routing protocol leads to a significant increase (5%-20%) in network lifetime compared to standard OLSR and slightly better performance in terms of Packet Delivery Ratio (PDR).
Many energy-efficient variants of OLSR modify both the MPR selection and the route computation algorithm. For MPR selection, some protocols @cite_3 choose the 1-hop neighbors with the maximum residual energy, while others modify the willingness metric of the 1-hop neighbors ( @cite_11 @cite_6 ) based on their energy level. In addition, an algorithm that takes into account the residual energy of the 1-hop and 2-hop neighbors of each MPR candidate is presented in @cite_15 . For the routing decision, most of the techniques above modify the routing metrics to account for energy consumption. Commonly used metrics for computing path costs, such as the reciprocal of residual energy ( @cite_11 @cite_13 ) and the drain rate of intermediate nodes ( @cite_11 @cite_14 ), are based solely on energy measurements.
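The reciprocal-of-residual-energy metric mentioned above can be sketched in a few lines (node names and energy values are hypothetical): a path's cost is the sum of 1/E over its nodes, so routes through nearly depleted nodes become expensive even when they are shorter.

```python
def reciprocal_energy_cost(path, residual_energy):
    """Path cost under the reciprocal-residual-energy metric:
    nodes with little remaining battery make a path expensive."""
    return sum(1.0 / residual_energy[node] for node in path)

# Illustrative residual energies (arbitrary units):
energy = {"a": 100.0, "b": 10.0, "c": 80.0, "d": 90.0}

via_b = ["a", "b"]        # shorter route, but node b is nearly depleted
via_cd = ["a", "c", "d"]  # longer route through energy-rich nodes

# The metric steers traffic away from the depleted node:
assert reciprocal_energy_cost(via_cd, energy) < reciprocal_energy_cost(via_b, energy)
```

This is exactly the load-balancing effect such metrics aim for: traffic avoids low-energy nodes, delaying the first node death and extending network lifetime.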
{ "cite_N": [ "@cite_14", "@cite_6", "@cite_3", "@cite_15", "@cite_13", "@cite_11" ], "mid": [ "2137501825", "", "2032502086", "2106087555", "", "2124457835" ], "abstract": [ "Mobile ad-hoc networks (MANETs) are wireless networks consisting of a collection of untethered nodes with no fixed infrastructure. An important design criterion for routing protocols in ad hoc networks is power consumption reduction. We describe an energy-efficient mechanism that can be used by a generic MANET routing protocol to prevent nodes from a sharp drop of battery power. We apply the mechanism to the dynamic source routing (DSR) and propose a novel DSR-based energy-efficient routing algorithm referred to as the energy-dependent DSR (EDDSR). We compare the EDDSR algorithm with two of the most recent proposals in this area: the least-energy aware routing (LEAR) and the minimum drain-rate (MDR) mechanism. We show that EDDSR is the best approach to reduce and balance power consumption in a wide spectrum of scenarios.", "", "Nodes in a mobile ad hoc network are typically battery-powered. Energy consumption is therefore an important metric to consider in designing routing protocols for such networks. In this paper, we present our work on integrating energy-efficiency aspects into a standard MANET routing protocol, OLSR. We study the impact of different protocol modifications that aim to increase node lifetime and network performance, and evaluate them under a range of different scenarios. For static networks, little to no performance improvements are achieved, but significant performance gains of 15% to 30% are possible for more dynamic network topologies. At the same time, not all modifications are equally beneficial. In particular, changing the MPR selection criteria, a key OLSR optimization mechanism, is not promising under any of the studied scenarios.", "Energy efficiency is a key issue in wireless ad hoc and sensor networks. 
Several directions have been explored to maximize network lifetime, among them energy efficient routing. In this paper, we show how to extend the standardized OLSR routing protocol, in order to make it energy efficient. To take into account residual node energy, three new multipoint relay selection algorithms, based on minimum residual energy, are evaluated and the best one is chosen. This OLSR extension selects the path minimizing the energy consumed in the end-to-end transmission of a flow packet and avoids nodes with low residual energy. We compare this extension with a two-path source routing strategy (with different links or different nodes). An extensive performance evaluation shows that this energy efficient extension maximizes both network lifetime and user data delivered.", "", "This paper presents two novel mechanisms for the OLSR routing protocol, aiming to improve its energy performance in Mobile ad-hoc Networks. Routing protocols over MANET are an important issue and many proposals have been addressed to efficiently manage topology information, to offer network scalability and to prolong network lifetime. However, few papers consider a proactive protocol (like OLSR) to better manage the energy consumption. OLSR presents the advantage of finding a route between two nodes in the network in a very short time, thanks to its proactive scheme, but it can expend a lot of resources selecting the MultiPoint Relays (MPRs) and exchanging Topology Control information. We propose a modification in the MPR selection mechanism of OLSR protocol, based on the Willingness concept, in order to prolong the network lifetime without losses of performance (in terms of throughput, end-to-end delay or overhead). Additionally, we prove that the exclusion of the energy consumption due to the overhearing can extend the lifetime of the nodes without compromising the OLSR functioning at all. 
A comparison of an Energy-Efficient OLSR (EE-OLSR) and the classical OLSR protocol is performed, testing several well-known energy-aware metrics such as MTPR, CMMBCR and MDR. We notice how EE-OLSR outperforms classical OLSR, and MDR proves to be the best-performing metric to save battery energy in a dense mobile network with high traffic loads." ] }
1603.09386
2036006510
Increasing network lifetime by reducing energy consumption across the network is one of the major concerns while designing routing protocols for Mobile Ad-Hoc Networks. In this paper, we investigate the main reasons that lead to energy depletion and we introduce appropriate routing metrics in the routing decision scheme to mitigate their effect and increase the network lifetime. For our routing scheme, we take into consideration multiple layer parameters, such as MAC queue utilization, node degree and residual energy. We integrate our multi-metric routing scheme into OLSR, a standard MANET proactive routing protocol. We evaluate via simulations in NS3 the protocol modifications under a range of different static and mobile scenarios. The main observations are that in static and low mobility scenarios our modified routing protocol leads to a significant increase (5%-20%) in network lifetime compared to standard OLSR and slightly better performance in terms of Packet Delivery Ratio (PDR).
Multi-metric routing schemes, which are more closely related to our work, have also been proposed. An adaptive multi-metric routing scheme for AODV is introduced in @cite_7 . The authors take into account three routing metrics, namely hop count, traffic load and energy cost, and combine them to evaluate the cost of candidate paths. In addition, a predictive multi-metric routing scheme for proactive protocols, like OLSR, is presented in @cite_12 . The chosen routing metrics are mean queueing delay, energy cost and residual link lifetime. The authors designed a multi-objective routing scheme that evaluates the multiple metrics in a composite way and achieves better performance in terms of Packet Delivery Ratio (PDR) and network lifetime.
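A minimal sketch of the composite-cost idea behind such schemes (the weights and field names are illustrative, not taken from either cited paper): each link's cost blends a hop penalty, the MAC-queue utilization, and the depleted fraction of the node's battery, so congested or low-energy links look expensive.

```python
def composite_cost(link, weights=(0.4, 0.3, 0.3)):
    """Hypothetical composite link cost combining three normalized metrics:
    a constant hop penalty, MAC-queue utilization in [0, 1], and the
    depleted energy fraction (1 - residual/initial)."""
    w_hop, w_queue, w_energy = weights
    return (w_hop * 1.0
            + w_queue * link["queue_util"]
            + w_energy * (1.0 - link["energy_frac"]))

congested = {"queue_util": 0.9, "energy_frac": 0.2}  # busy, nearly drained
healthy = {"queue_util": 0.1, "energy_frac": 0.9}    # idle, full battery

# A shortest-path computation over composite costs prefers healthy links:
assert composite_cost(healthy) < composite_cost(congested)
```

Normalizing each metric to [0, 1] before weighting keeps any single metric from dominating; the weights then become the tuning knobs for trading delivery ratio against lifetime.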
{ "cite_N": [ "@cite_12", "@cite_7" ], "mid": [ "2145947253", "2106034280" ], "abstract": [ "In this paper we present a MANET proactive routing enhancement scheme by comprehensive evaluation of multiple dynamic routing metrics, including delay, energy cost, and link stability. We developed efficient routing metric prediction methods: predicting delay and energy using double exponential smoothing, and predicting link stability using a heuristic based on the normal-like distributions of the link lifetimes in typical MANET mobility scenarios. The routing metrics predictions are incorporated with regular routing information exchanges. On each node, routing table is constructed by a modified version of Dijkstra's algorithm, which evaluates the predicted metrics values compositively. We integrated such a multi-metric prediction evaluation mechanism into OLSR and name it OLSR MC. We show by simulation that OLSR MC is more adaptive to the network dynamics and therefore is able to improve performance significantly on multiple routing objectives, including higher packet delivery ratio, shorter average end-to- end delay, and prolonged network energy lifetime.", "The calculation of path cost is a critical component of route discovery for network routing. The criteria used to represent path cost guides resource consumption in the network. In this paper, we describe our approach, a set of protocols based on our Multiple Metrics Routing Protocol (MMRP) for integrating hop count, energy consumption, and traffic load into the path cost calculation for ad hoc or multihop-cellular networks. Our initial aim is to select among multiple disjoint routes in order to maintain a low path cost, in terms of energy consumption and delay, without depleting resources at popular intermediate nodes. One extension of MMRP removes the constraint that only disjoint paths are considered and enables discovery of more optimal routes. 
A second extension includes adaptive adjustment of cost metrics to support device classification (e.g., energy capacity, bandwidth) in heterogeneous networks. We illustrate our approach with a simple example, followed by extensive simulation analysis. Results indicate that proper combination of multiple metrics for calculating path costs results in improved performance and lower overall system resource consumption as compared to AODV or energy efficient routing protocols." ] }
1603.09290
2950538808
Optimizing floating-point arithmetic is vital because it is ubiquitous, costly, and used in compute-heavy workloads. Implementing precise optimizations correctly, however, is difficult, since developers must account for all the esoteric properties of floating-point arithmetic to ensure that their transformations do not alter the output of a program. Manual reasoning is error prone and stifles incorporation of new optimizations. We present an approach to automate reasoning about floating-point optimizations using satisfiability modulo theories (SMT) solvers. We implement the approach in LifeJacket, a system for automatically verifying precise floating-point optimizations for the LLVM assembly language. We have used LifeJacket to verify 43 LLVM optimizations and to discover eight incorrect ones, including three previously unreported problems. LifeJacket is an open source extension of the Alive system for optimization verification.
Research on compiler correctness has also addressed floating-point arithmetic and floating-point optimizations. CompCert, a formally-verified compiler, supports IEEE 754-2008 floating-point types and implements two floating-point optimizations @cite_10 . In CompCert, developers use Coq to prove optimizations correct, while LifeJacket proves optimization correctness automatically.
{ "cite_N": [ "@cite_10" ], "mid": [ "1988579293" ], "abstract": [ "Floating-point arithmetic is known to be tricky: roundings, formats, exceptional values. The IEEE-754 standard was a push towards straightening the field and made formal reasoning about floating-point computations easier and flourishing. Unfortunately, this is not sufficient to guarantee the final result of a program, as several other actors are involved: programming language, compiler, and architecture. The CompCert formally-verified compiler provides a solution to this problem: this compiler comes with a mathematical specification of the semantics of its source language (a large subset of ISO C99) and target platforms (ARM, PowerPC, x86-SSE2), and with a proof that compilation preserves semantics. In this paper, we report on our recent success in formally specifying and proving correct CompCert's compilation of floating-point arithmetic. Since CompCert is verified using the Coq proof assistant, this effort required a suitable Coq formalization of the IEEE-754 standard; we extended the Flocq library for this purpose. As a result, we obtain the first formally verified compiler that provably preserves the semantics of floating-point programs." ] }
1603.09290
2950538808
Optimizing floating-point arithmetic is vital because it is ubiquitous, costly, and used in compute-heavy workloads. Implementing precise optimizations correctly, however, is difficult, since developers must account for all the esoteric properties of floating-point arithmetic to ensure that their transformations do not alter the output of a program. Manual reasoning is error prone and stifles incorporation of new optimizations. We present an approach to automate reasoning about floating-point optimizations using satisfiability modulo theories (SMT) solvers. We implement the approach in LifeJacket, a system for automatically verifying precise floating-point optimizations for the LLVM assembly language. We have used LifeJacket to verify 43 LLVM optimizations and to discover eight incorrect ones, including three previously unreported problems. LifeJacket is an open source extension of the Alive system for optimization verification.
Regarding optimization correctness, researchers have explored both the consequences of existing optimizations and techniques for generating new ones. Recent work has discussed the consequences of unexpected optimizations @cite_5 . In terms of new optimizations, STOKE @cite_6 is a stochastic optimizer that supports floating-point arithmetic and verifies instances of floating-point optimizations with random testing. Souper @cite_13 discovers new LLVM peephole optimizations using an SMT solver. Similarly, Optgen generates peephole optimizations and verifies them with an SMT solver @cite_12 . All of these approaches are concerned with the correctness of new optimizations, while our work focuses on existing ones. Vellvm, a framework for verifying LLVM optimizations and transformations using Coq, also operates on existing transformations but does not perform automatic reasoning.
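The kind of subtlety these verifiers hunt for can be shown without an SMT solver. Two classic counterexamples (not claimed to be among the specific bugs LifeJacket reported): the rewrite x + 0.0 → x destroys the sign of negative zero, and x - x → 0.0 is wrong for non-finite inputs.

```python
import math
import struct

def bits(x):
    """Raw IEEE 754 double bits, to distinguish +0.0 from -0.0."""
    return struct.unpack("<Q", struct.pack("<d", x))[0]

# Candidate rewrite: x + 0.0  ==>  x.  Looks harmless, but:
x = -0.0
assert bits(x + 0.0) != bits(x)   # -0.0 + 0.0 is +0.0; the sign bit is lost

# Candidate rewrite: x - x  ==>  0.0.  Fails for non-finite inputs:
x = float("inf")
assert math.isnan(x - x)          # inf - inf is NaN, not 0.0
```

An SMT solver with floating-point theory finds such witnesses automatically by checking whether the left- and right-hand sides can ever differ bit-for-bit.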
{ "cite_N": [ "@cite_5", "@cite_13", "@cite_12", "@cite_6" ], "mid": [ "1978364288", "", "364774736", "2121344286" ], "abstract": [ "This paper studies an emerging class of software bugs called optimization-unstable code: code that is unexpectedly discarded by compiler optimizations due to undefined behavior in the program. Unstable code is present in many systems, including the Linux kernel and the Postgres database. The consequences of unstable code range from incorrect functionality to missing security checks. To reason about unstable code, this paper proposes a novel model, which views unstable code in terms of optimizations that leverage undefined behavior. Using this model, we introduce a new static checker called Stack that precisely identifies unstable code. Applying Stack to widely used systems has uncovered 160 new bugs that have been confirmed and fixed by developers.", "", "Every compiler comes with a set of local optimization rules, such as x + 0 → x and x & x → x, that do not require any global analysis. These rules reflect the wisdom of the compiler developers about mathematical identities that hold for the operations of their intermediate representation. Unfortunately, these sets of hand-crafted rules guarantee neither correctness nor completeness. Optgen solves this problem by generating all local optimizations up to a given cost limit. Since Optgen verifies each rule using an SMT solver, it guarantees correctness and completeness of the generated rule set. Using Optgen, we tested the latest versions of gcc, icc and llvm and identified more than 50 missing local optimizations that involve only two operations.", "The aggressive optimization of floating-point computations is an important problem in high-performance computing. Unfortunately, floating-point instruction sets have complicated semantics that often force compilers to preserve programs as written. We present a method that treats floating-point optimization as a stochastic search problem. 
We demonstrate the ability to generate reduced precision implementations of Intel's handwritten C numeric library which are up to 6 times faster than the original code, and achieve end-to-end speedups of over 30% on a direct numeric simulation and a ray tracer by optimizing kernels that can tolerate a loss of precision while still remaining correct. Because these optimizations are mostly not amenable to formal verification using the current state of the art, we present a stochastic search technique for characterizing maximum error. The technique comes with an asymptotic guarantee and provides strong evidence of correctness." ] }
1603.09290
2950538808
Optimizing floating-point arithmetic is vital because it is ubiquitous, costly, and used in compute-heavy workloads. Implementing precise optimizations correctly, however, is difficult, since developers must account for all the esoteric properties of floating-point arithmetic to ensure that their transformations do not alter the output of a program. Manual reasoning is error prone and stifles incorporation of new optimizations. We present an approach to automate reasoning about floating-point optimizations using satisfiability modulo theories (SMT) solvers. We implement the approach in LifeJacket, a system for automatically verifying precise floating-point optimizations for the LLVM assembly language. We have used LifeJacket to verify 43 LLVM optimizations and to discover eight incorrect ones, including three previously unreported problems. LifeJacket is an open source extension of the Alive system for optimization verification.
Researchers have explored debugging floating-point accuracy @cite_8 and improving the accuracy of floating-point expressions @cite_2 . These efforts are more closely related to imprecise optimizations and provide techniques that could be used to analyze them. Z3's support for reasoning about floating-point arithmetic relies on a model construction procedure instead of naive bit-blasting @cite_11 .
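A small example of the accuracy-improving rewrites such tools discover (a textbook case, used here for illustration): sqrt(x+1) - sqrt(x) suffers catastrophic cancellation for large x, while the algebraically equal form 1/(sqrt(x+1) + sqrt(x)) does not.

```python
import math

def naive(x):
    # Direct evaluation: subtracts two nearly equal values for large x.
    return math.sqrt(x + 1) - math.sqrt(x)

def rewritten(x):
    # Algebraically identical form that avoids the cancellation.
    return 1.0 / (math.sqrt(x + 1) + math.sqrt(x))

x = 1e15
# Equal in real arithmetic, but in doubles the naive form keeps only a
# few significant bits at this input:
assert abs(naive(x) - rewritten(x)) / rewritten(x) > 1e-2
assert abs(naive(1.0) - rewritten(1.0)) < 1e-15  # both fine for small x
```

Note that neither form is uniformly better; accuracy tools typically pick different rewrites for different input regions, which is precisely what makes manual reasoning error prone.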
{ "cite_N": [ "@cite_11", "@cite_2", "@cite_8" ], "mid": [ "199478921", "2061091230", "2122738744" ], "abstract": [ "We consider the problem of efficiently computing models for satisfiable constraints, in the presence of complex background theories such as floating-point arithmetic. Model construction has various applications, for instance the automatic generation of test inputs. It is well-known that naive encoding of constraints into simpler theories (for instance, bit-vectors or propositional logic) can lead to a drastic increase in size, and be unsatisfactory in terms of memory and runtime needed for model construction. We define a framework for systematic application of approximations in order to speed up model construction. Our method is more general than previous techniques in the sense that approximations that are neither under- nor over-approximations can be used, and shows promising results in practice.", "Scientific and engineering applications depend on floating point arithmetic to approximate real arithmetic. This approximation introduces rounding error, which can accumulate to produce unacceptable results. While the numerical methods literature provides techniques to mitigate rounding error, applying these techniques requires manually rearranging expressions and understanding the finer details of floating point arithmetic. We introduce Herbie, a tool which automatically discovers the rewrites experts perform to improve accuracy. Herbie's heuristic search estimates and localizes rounding error using sampled points (rather than static error analysis), applies a database of rules to generate improvements, takes series expansions, and combines improvements for different input regions. We evaluated Herbie on examples from a classic numerical methods textbook, and found that Herbie was able to improve accuracy on each example, some by up to 60 bits, while imposing a median performance overhead of 40%. 
Colleagues in machine learning have used Herbie to significantly improve the results of a clustering algorithm, and a mathematical library has accepted two patches generated using Herbie.", "Tools for floating-point error estimation are fundamental to program understanding and optimization. In this paper, we focus on tools for determining the input settings to a floating point routine that maximizes its result error. Such tools can help support activities such as precision allocation, performance optimization, and auto-tuning. We benchmark current abstraction-based precision analysis methods, and show that they often do not work at scale, or generate highly pessimistic error estimates, often caused by non-linear operators or complex input constraints that define the set of legal inputs. We show that while concrete-testing-based error estimation methods based on maintaining shadow values at higher precision can search out higher error-inducing inputs, suitable heuristic search guidance is key to finding higher errors. We develop a heuristic search algorithm called Binary Guided Random Testing (BGRT). In 45 of the 48 total benchmarks, including many real-world routines, BGRT returns higher guaranteed errors. We also evaluate BGRT against two other heuristic search methods called ILS and PSO, obtaining better results." ] }
1603.09029
2552782214
We study the worst-case adaptive optimization problem with budget constraint that is useful for modeling various practical applications in artificial intelligence and machine learning. We investigate the near-optimality of greedy algorithms for this problem with both modular and non-modular cost functions. In both cases, we prove that two simple greedy algorithms are not near-optimal but the best between them is near-optimal if the utility function satisfies pointwise submodularity and pointwise cost-sensitive submodularity respectively. This implies a combined algorithm that is near-optimal with respect to the optimal algorithm that uses half of the budget. We discuss applications of our theoretical results and also report experiments comparing the greedy algorithms on the active learning problem.
Our work is related to @cite_8 @cite_6 @cite_9 @cite_0 , but we consider a more general case than these works. The closest prior work considered a similar worst-case setting to ours, but assumed that the utility is pointwise submodular and the cost function is uniform modular. Our work is more general than theirs in two aspects. First, pointwise cost-sensitive submodularity is a generalization of pointwise submodularity. Second, our cost function is general and may be neither uniform nor modular. These generalizations make the problem more complicated: as we have shown in Section , simple greedy policies, which are near-optimal in the uniform modular cost setting @cite_0 , are no longer near-optimal. Thus, we need to combine two simple greedy policies to obtain a new near-optimal policy.
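The best-of-two-greedy idea can be sketched in a deliberately simplified, non-adaptive setting with additive utilities (the paper's actual setting is adaptive with non-modular utility and cost; everything below is illustrative). Plain utility-greedy and plain cost-effectiveness-greedy can each be arbitrarily bad on their own, but running both and keeping the better result hedges against both failure modes.

```python
def run_greedy(items, budget, score):
    """Greedy selection under a budget. items: {name: (utility, cost)};
    score(u, c) ranks candidates. Returns (chosen_names, total_utility)."""
    remaining, chosen, total, left = dict(items), [], 0.0, budget
    while True:
        feasible = {n: v for n, v in remaining.items() if v[1] <= left}
        if not feasible:
            return chosen, total
        name = max(feasible, key=lambda n: score(*feasible[n]))
        u, c = remaining.pop(name)
        chosen.append(name)
        total += u
        left -= c

def best_of_two(items, budget):
    """Run utility-greedy and ratio-greedy, keep whichever does better."""
    by_utility = run_greedy(items, budget, score=lambda u, c: u)
    by_ratio = run_greedy(items, budget, score=lambda u, c: u / c)
    return max(by_utility, by_ratio, key=lambda result: result[1])

# Instance where ratio-greedy alone fails badly: the cheap item's high
# utility-per-cost ratio crowds out the single high-utility item.
items = {"big": (10.0, 1.0), "cheap": (1.0, 0.01)}
assert run_greedy(items, 1.0, lambda u, c: u / c)[1] == 1.0
assert best_of_two(items, 1.0)[1] == 10.0
```

A symmetric instance (one medium item vs. many cheap ones) defeats utility-greedy instead, which is why the combination, rather than either policy alone, is the object of the near-optimality analysis.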
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_6", "@cite_8" ], "mid": [ "2200983898", "2951839510", "2141403143", "" ], "abstract": [ "We study simultaneous learning and covering problems: submodular set cover problems that depend on the solution to an active (query) learning problem. The goal is to jointly minimize the cost of both learning and covering. We extend recent work in this setting to allow for a limited amount of adversarial noise. Certain noisy query learning problems are a special case of our problem. Crucial to our analysis is a lemma showing the logical OR of two submodular cover constraints can be reduced to a single submodular set cover constraint. Combined with known results, this new lemma allows for arbitrary monotone circuits of submodular cover constraints to be reduced to a single constraint. As an example practical application, we present a movie recommendation website that minimizes the total cost of learning what the user wants to watch and recommending a set of movies.", "We introduce a natural generalization of submodular set cover and exact active learning with a finite hypothesis class (query learning). We call this new problem interactive submodular set cover. Applications include advertising in social networks with hidden information. We give an approximation guarantee for a novel greedy algorithm and give a hardness of approximation result which matches up to constant factors. We also discuss negative results for simpler approaches and present encouraging early experimental results.", "Given a water distribution network, where should we place sensors to quickly detect contaminants? Or, which blogs should we read to avoid missing important stories? These seemingly different problems share common structure: Outbreak detection can be modeled as selecting nodes (sensor locations, blogs) in a network, in order to detect the spreading of a virus or information as quickly as possible. 
We present a general methodology for near optimal sensor placement in these and related problems. We demonstrate that many realistic outbreak detection objectives (e.g., detection likelihood, population affected) exhibit the property of \"submodularity\". We exploit submodularity to develop an efficient algorithm that scales to large problems, achieving near optimal placements, while being 700 times faster than a simple greedy algorithm. We also derive online bounds on the quality of the placements obtained by any algorithm. Our algorithms and bounds also handle cases where nodes (sensor locations, blogs) have different costs. We evaluate our approach on several large real-world problems,including a model of a water distribution network from the EPA, andreal blog data. The obtained sensor placements are provably near optimal, providing a constant fraction of the optimal solution. We show that the approach scales, achieving speedups and savings in storage of several orders of magnitude. We also show how the approach leads to deeper insights in both applications, answering multicriteria trade-off, cost-sensitivity and generalization questions.", "" ] }
1603.09029
2552782214
We study the worst-case adaptive optimization problem with budget constraint that is useful for modeling various practical applications in artificial intelligence and machine learning. We investigate the near-optimality of greedy algorithms for this problem with both modular and non-modular cost functions. In both cases, we prove that two simple greedy algorithms are not near-optimal but the best between them is near-optimal if the utility function satisfies pointwise submodularity and pointwise cost-sensitive submodularity respectively. This implies a combined algorithm that is near-optimal with respect to the optimal algorithm that uses half of the budget. We discuss applications of our theoretical results and also report experiments comparing the greedy algorithms on the active learning problem.
Cost-sensitive submodularity generalizes submodularity @cite_11 to general cost functions. Submodularity has been successfully applied to many applications @cite_1 @cite_10 @cite_12 . There are other ways to extend submodularity to the adaptive setting, e.g., adaptive submodularity @cite_15 and approximately adaptive submodularity @cite_7 . When the utility is adaptive submodular, the authors of @cite_15 proved that the greedy policy that maximizes the average utility gain in each step is near-optimal in both the average and worst cases. However, pointwise submodularity neither implies adaptive submodularity nor vice versa. Thus, our assumptions in this paper, which are more general than the pointwise submodularity assumption, can be applied to a different class of utility functions than those in @cite_15 .
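Submodularity is the diminishing-returns property z(A ∪ {e}) - z(A) ≥ z(B ∪ {e}) - z(B) for A ⊆ B. The standard example (used here purely for illustration) is coverage: the marginal value of an extra set shrinks as the selection grows.

```python
def coverage(sets, selection):
    """Utility of picking the sets indexed by `selection`:
    the number of distinct elements they cover."""
    covered = set()
    for i in selection:
        covered |= sets[i]
    return len(covered)

# Illustrative ground sets:
sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6}}

# Marginal gain of adding set 1 to a small vs. a larger selection:
gain_small = coverage(sets, {0, 1}) - coverage(sets, {0})
gain_large = coverage(sets, {0, 2, 1}) - coverage(sets, {0, 2})
assert gain_large <= gain_small  # diminishing returns: submodularity
```

Adaptive submodularity asks for the analogous inequality in expectation over observations after each pick, which is why neither property subsumes the other.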
{ "cite_N": [ "@cite_7", "@cite_1", "@cite_15", "@cite_10", "@cite_12", "@cite_11" ], "mid": [ "2236345491", "2021774297", "2962795549", "2073110021", "1912128066", "1997783781" ], "abstract": [ "A wide range of AI problems, such as sensor placement, active learning, and network influence maximization, require sequentially selecting elements from a large set with the goal of optimizing the utility of the selected subset. Moreover, each element that is picked may provide stochastic feedback, which can be used to make smarter decisions about future selections. Finding efficient policies for this general class of adaptive optimization problems can be extremely hard. However, when the objective function is adaptive monotone and adaptive submodular, a simple greedy policy attains a 1 - 1/e approximation ratio in terms of expected utility. Unfortunately, many practical objective functions are naturally non-monotone; to our knowledge, no existing policy has provable performance guarantees when the assumption of adaptive monotonicity is lifted. We propose the adaptive random greedy policy for maximizing adaptive submodular functions, and prove that it retains the aforementioned 1 - 1/e approximation ratio for functions that are also adaptive monotone, while it additionally provides a 1/e approximation ratio for nonmonotone adaptive submodular functions. We showcase the benefits of adaptivity on three real-world network data sets using two non-monotone functions, representative of two classes of commonly encountered non-monotone objectives.", "When monitoring spatial phenomena, such as the ecological condition of a river, deciding where to make observations is a challenging task. In these settings, a fundamental question is when an active learning, or sequential design, strategy, where locations are selected based on previous measurements, will perform significantly better than sensing at an a priori specified set of locations. 
For Gaussian Processes (GPs), which often accurately model spatial phenomena, we present an analysis and efficient algorithms that address this question. Central to our analysis is a theoretical bound which quantifies the performance difference between active and a priori design strategies. We consider GPs with unknown kernel parameters and present a nonmyopic approach for trading off exploration, i.e., decreasing uncertainty about the model parameters, and exploitation, i.e., near-optimally selecting observations when the parameters are (approximately) known. We discuss several exploration strategies, and present logarithmic sample complexity bounds for the exploration phase. We then extend our algorithm to handle nonstationary GPs exploiting local structure in the model. We also present extensive empirical evaluation on several real-world problems.", "Many problems in artificial intelligence require adaptively making a sequence of decisions with uncertain outcomes under partial observability. Solving such stochastic optimization problems is a fundamental but notoriously difficult challenge. In this paper, we introduce the concept of adaptive submodularity, generalizing submodular set functions to adaptive policies. We prove that if a problem satisfies this property, a simple adaptive greedy algorithm is guaranteed to be competitive with the optimal policy. In addition to providing performance guarantees for both stochastic maximization and coverage, adaptive submodularity can be exploited to drastically speed up the greedy algorithm by using lazy evaluations. We illustrate the usefulness of the concept by giving several examples of adaptive submodular objectives arising in diverse AI applications including management of sensing resources, viral marketing and active learning. 
Proving adaptive submodularity for these problems allows us to recover existing results in these applications as special cases, improve approximation guarantees and handle natural generalizations.", "Where should we place sensors to efficiently monitor natural drinking water resources for contamination? Which blogs should we read to learn about the biggest stories on the Web? These problems share a fundamental challenge: How can we obtain the most useful information about the state of the world, at minimum cost? Such information gathering, or active learning, problems are typically NP-hard, and were commonly addressed using heuristics without theoretical guarantees about the solution quality. In this article, we describe algorithms which efficiently find provably near-optimal solutions to large, complex information gathering problems. Our algorithms exploit submodularity, an intuitive notion of diminishing returns common to many sensing problems: the more sensors we have already deployed, the less we learn by placing another sensor. In addition to identifying the most informative sensing locations, our algorithms can handle more challenging settings, where sensors need to be able to reliably communicate over lossy links, where mobile robots are used for collecting data, or where solutions need to be robust against adversaries and sensor failures. We also present results applying our algorithms to several real-world sensing tasks, including environmental monitoring using robotic sensors, activity recognition using a built sensing chair, a sensor placement challenge, and deciding which blogs to read on the Web.", "We study the problem of selecting a subset of big data to train a classifier while incurring minimal performance loss. 
We show the connection of submodularity to the data likelihood functions for Naive Bayes (NB) and Nearest Neighbor (NN) classifiers, and formulate the data subset selection problems for these classifiers as constrained submodular maximization. Furthermore, we apply this framework to active learning and propose a novel scheme called filtered active submodular selection (FASS), where we combine the uncertainty sampling method with a submodular data subset selection framework. We extensively evaluate the proposed framework on text categorization and handwritten digit recognition tasks with four different classifiers, including deep neural network (DNN) based classifiers. Empirical results indicate that the proposed framework yields significant improvement over the state-of-the-art algorithms on all classifiers.", "A real-valued function z whose domain is all of the subsets of N = {1, ..., n} is said to be submodular if z(S) + z(T) ≥ z(S ∪ T) + z(S ∩ T), ∀S, T ⊆ N, and nondecreasing if z(S) ≤ z(T), ∀S ⊂ T ⊆ N. We consider the problem max_{S ⊆ N} {z(S) : |S| ≤ K}, with z submodular and nondecreasing and z(∅) = 0. Many combinatorial optimization problems can be posed in this framework. For example, a well-known location problem and the maximization of certain boolean polynomials are in this class. We present a family of algorithms that involve the partial enumeration of all sets of cardinality q and then a greedy selection of the remaining elements, q = 0, ..., K−1. For fixed K, the qth member of this family requires O(n^{q+1}) computations and is guaranteed to achieve at least @math . Our main result is that this is the best performance guarantee that can be obtained by any algorithm whose number of computations does not exceed O(n^{q+1})." ] }
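The classic greedy guarantee described in the last abstract can be made concrete with a minimal sketch. The weighted-coverage utility and the ground sets below are invented purely for illustration; the point is that for any monotone submodular function, picking the element of largest marginal gain K times achieves at least a (1 − 1/e) fraction of the optimum.

```python
# Greedy maximization of a monotone submodular function under a
# cardinality constraint |S| <= K. For such functions the greedy
# solution achieves at least (1 - 1/e) of the optimum.

def coverage_utility(subset, sets):
    """Example monotone submodular function: number of elements covered."""
    covered = set()
    for i in subset:
        covered |= sets[i]
    return len(covered)

def greedy_max(sets, K):
    chosen = []
    for _ in range(K):
        best, best_gain = None, 0
        for i in range(len(sets)):
            if i in chosen:
                continue
            gain = coverage_utility(chosen + [i], sets) - coverage_utility(chosen, sets)
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:  # no element with positive marginal gain left
            break
        chosen.append(best)
    return chosen

# Toy instance: pick 2 of 4 ground sets to cover as much as possible.
sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
solution = greedy_max(sets, 2)
```

Here the greedy picks set 0 (gain 3) and then set 2 (gain 3), covering all six elements, which happens to be optimal for this instance.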
1603.09029
2552782214
We study the worst-case adaptive optimization problem with budget constraint that is useful for modeling various practical applications in artificial intelligence and machine learning. We investigate the near-optimality of greedy algorithms for this problem with both modular and non-modular cost functions. In both cases, we prove that two simple greedy algorithms are not near-optimal but the best between them is near-optimal if the utility function satisfies pointwise submodularity and pointwise cost-sensitive submodularity respectively. This implies a combined algorithm that is near-optimal with respect to the optimal algorithm that uses half of the budget. We discuss applications of our theoretical results and also report experiments comparing the greedy algorithms on the active learning problem.
The adaptive optimization problem in our paper is for the worst-case maximum coverage setting, where we look for a policy that maximizes the utility while maintaining its cost within a certain budget. This problem can also be considered in the worst-case min-cost setting @cite_9 , where we look for a policy that can achieve at least a certain value of utility while minimizing the total cost.
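The "best between two simple greedy algorithms" idea from the abstract above has a natural non-adaptive instantiation: run a pure-gain greedy and a gain-per-cost greedy under the budget and keep whichever achieves higher utility. The sketch below follows that pattern; the coverage utility, costs, and budget are illustrative assumptions, not taken from the cited paper.

```python
# Best-of-two greedy for budgeted (non-adaptive) submodular maximization:
# run a gain-greedy and a gain-per-cost greedy, return the better one.

def run_greedy(utility, costs, budget, key):
    chosen, spent = [], 0.0
    while True:
        candidates = [
            i for i in range(len(costs))
            if i not in chosen and spent + costs[i] <= budget
        ]
        gains = {i: utility(chosen + [i]) - utility(chosen) for i in candidates}
        candidates = [i for i in candidates if gains[i] > 0]
        if not candidates:
            return chosen
        best = max(candidates, key=lambda i: key(gains[i], costs[i]))
        chosen.append(best)
        spent += costs[best]

def best_of_two(utility, costs, budget):
    a = run_greedy(utility, costs, budget, key=lambda g, c: g)      # pure gain
    b = run_greedy(utility, costs, budget, key=lambda g, c: g / c)  # gain per cost
    return a if utility(a) >= utility(b) else b

# Illustrative coverage utility over invented ground sets and costs.
sets = [{1, 2, 3, 4}, {1, 2}, {3, 4}, {5}]
costs = [4.0, 1.0, 1.0, 1.0]
util = lambda S: len(set().union(*(sets[i] for i in S)) if S else set())
picked = best_of_two(util, costs, budget=3.0)
```

With budget 3 the expensive large set is unaffordable, and both greedies settle on the three cheap sets, covering five elements.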
{ "cite_N": [ "@cite_9" ], "mid": [ "2951839510" ], "abstract": [ "We introduce a natural generalization of submodular set cover and exact active learning with a finite hypothesis class (query learning). We call this new problem interactive submodular set cover. Applications include advertising in social networks with hidden information. We give an approximation guarantee for a novel greedy algorithm and give a hardness of approximation result which matches up to constant factors. We also discuss negative results for simpler approaches and present encouraging early experimental results." ] }
1603.09035
2326433089
Latency to end-users and regulatory requirements push large companies to build data centers all around the world. The resulting data is "born" geographically distributed. On the other hand, many machine learning applications require a global view of such data in order to achieve the best results. These types of applications form a new class of learning problems, which we call Geo-Distributed Machine Learning (GDML). Such applications need to cope with: 1) scarce and expensive cross-data center bandwidth, and 2) growing privacy concerns that are pushing for stricter data sovereignty regulations. Current solutions to learning from geo-distributed data sources revolve around the idea of first centralizing the data in one data center, and then training locally. As machine learning algorithms are communication-intensive, the cost of centralizing the data is thought to be offset by the lower cost of intra-data center communication during training. In this work, we show that the current centralized practice can be far from optimal, and propose a system for doing geo-distributed training. Furthermore, we argue that the geo-distributed approach is structurally more amenable to dealing with regulatory constraints, as raw data never leaves the source data center. Our empirical evaluation on three real datasets confirms the general validity of our approach, and shows that GDML is not only possible but also advisable in many scenarios.
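The core geo-distributed principle in the abstract, that model updates rather than raw records cross data-center boundaries, can be sketched minimally. This is not the paper's system; it is a toy synchronous-gradient illustration with invented data, where each "site" contributes only a single float per round.

```python
# Minimal illustration (not the paper's system) of geo-distributed
# training: each data center computes a local gradient on its own data,
# and only small model updates -- never raw records -- cross the WAN.

def local_gradient(w, data):
    """One least-squares gradient, computed entirely inside a data center."""
    g = 0.0
    for x, y in data:
        g += 2 * (w * x - y) * x
    return g / len(data)

def geo_distributed_sgd(partitions, steps=100, lr=0.05):
    w = 0.0
    for _ in range(steps):
        # Each site sends back one float (its gradient), not its data.
        grads = [local_gradient(w, part) for part in partitions]
        w -= lr * sum(grads) / len(grads)
    return w

# Two "data centers" holding points from the line y = 2x.
dc_eu = [(1.0, 2.0), (2.0, 4.0)]
dc_us = [(3.0, 6.0), (4.0, 8.0)]
w = geo_distributed_sgd([dc_eu, dc_us])
```

The fitted slope converges to 2 even though neither site ever sees the other's records, which is the structural property the paper argues makes geo-distributed training amenable to sovereignty constraints.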
Prior work on systems that deal with geographically distributed datasets exists in the literature. The work done by @cite_14 @cite_5 poses the thesis that growing global data volumes and scarce WAN bandwidth, coupled with regulatory concerns, will derail large companies from executing centralized analytics processes. They propose a system that supports SQL queries for doing X-DC analytics. Unlike our work, they do not target iterative machine learning workflows, nor do they focus on job latency. They mainly care about reducing X-DC data transfer volume.
{ "cite_N": [ "@cite_5", "@cite_14" ], "mid": [ "1230574503", "2295862636" ], "abstract": [ "Global-scale organizations produce large volumes of data across geographically distributed data centers. Querying and analyzing such data as a whole introduces new research issues at the intersection of networks and databases. Today systems that compute SQL analytics over geographically distributed data operate by pulling all data to a central location. This is problematic at large data scales due to expensive transoceanic links, and may be rendered impossible by emerging regulatory constraints. The new problem of Wide-Area Big Data (WABD) consists in orchestrating query execution across data centers to minimize bandwidth while respecting regulatory constaints. WABD combines classical query planning with novel network-centric mechanisms designed for a wide-area setting such as pseudodistributed execution, joint query optimization, and deltas on cached subquery results. Our prototype, Geode, builds upon Hive and uses 250× less bandwidth than centralized analytics in a Microsoft production workload and up to 360× less on popular analytics benchmarks including TPC-CH and Berkeley Big Data. Geode supports all SQL operators, including Joins, across global data.", "Large organizations today operate data centers around the globe where massive amounts of data are produced and consumed by local users. Despite their geographically diverse origin, such data must be analyzed mined as a whole. We call the problem of supporting rich DAGs of computation across geographically distributed data Wide-Area Big-Data (WABD). To the best of our knowledge, WABD is not supported by currently deployed systems nor suciently studied in literature; it is addressed today by continuously copying raw data to a central location for analysis. We observe from production workloads that WABD is important for large organizations, and that centralized solutions incur substantial cross-data center network costs. 
We argue that these trends will only worsen as the gap between data volumes and transoceanic bandwidth widens. Further, emerging concerns over data sovereignty and privacy may trigger government regulations that can threaten the very viability of centralized solutions. To address WABD we propose WANalytics, a system that pushes computation to edge data centers, automatically optimizing workflow execution plans and replicating data when needed. Our Hadoop-based prototype delivers a 257× reduction in WAN bandwidth on a production workload from Microsoft. We round out our evaluation by also demonstrating substantial gains for three standard benchmarks: TPC-CH, Berkeley Big Data, and BigBench." ] }
1603.09035
2326433089
Latency to end-users and regulatory requirements push large companies to build data centers all around the world. The resulting data is "born" geographically distributed. On the other hand, many machine learning applications require a global view of such data in order to achieve the best results. These types of applications form a new class of learning problems, which we call Geo-Distributed Machine Learning (GDML). Such applications need to cope with: 1) scarce and expensive cross-data center bandwidth, and 2) growing privacy concerns that are pushing for stricter data sovereignty regulations. Current solutions to learning from geo-distributed data sources revolve around the idea of first centralizing the data in one data center, and then training locally. As machine learning algorithms are communication-intensive, the cost of centralizing the data is thought to be offset by the lower cost of intra-data center communication during training. In this work, we show that the current centralized practice can be far from optimal, and propose a system for doing geo-distributed training. Furthermore, we argue that the geo-distributed approach is structurally more amenable to dealing with regulatory constraints, as raw data never leaves the source data center. Our empirical evaluation on three real datasets confirms the general validity of our approach, and shows that GDML is not only possible but also advisable in many scenarios.
@cite_37 proposes a low-latency distributed analytics system called Iridium. Like the systems above, Iridium focuses on pure data analytics and not on machine learning tasks. Another key difference is that Iridium optimizes task and data placement across sites to minimize query response time, while our system respects stricter sovereignty constraints and does not move raw data around.
{ "cite_N": [ "@cite_37" ], "mid": [ "1969299781" ], "abstract": [ "Low latency analytics on geographically distributed datasets (across datacenters, edge clusters) is an upcoming and increasingly important challenge. The dominant approach of aggregating all the data to a single datacenter significantly inflates the timeliness of analytics. At the same time, running queries over geo-distributed inputs using the current intra-DC analytics frameworks also leads to high query response times because these frameworks cannot cope with the relatively low and variable capacity of WAN links. We present Iridium, a system for low latency geo-distributed analytics. Iridium achieves low query response times by optimizing placement of both data and tasks of the queries. The joint data and task placement optimization, however, is intractable. Therefore, Iridium uses an online heuristic to redistribute datasets among the sites prior to queries' arrivals, and places the tasks to reduce network bottlenecks during the query's execution. Finally, it also contains a knob to budget WAN usage. Evaluation across eight worldwide EC2 regions using production queries show that Iridium speeds up queries by 3× -- 19× and lowers WAN usage by 15 -- 64 compared to existing baselines." ] }
1603.09240
2327816982
Multi-object tracking has been studied for decades. However, when it comes to tracking pedestrians in extremely crowded scenes, we are limited to only a few works. This is an important problem which gives rise to several challenges. Pre-trained object detectors fail to localize targets in crowded sequences. This consequently limits the use of data-association based multi-target tracking methods which rely on the outcome of an object detector. Additionally, the small apparent target size makes it challenging to extract features to discriminate targets from their surroundings. Finally, the large number of targets greatly increases computational complexity which in turn makes it hard to extend existing multi-target tracking approaches to high-density crowd scenarios. In this paper, we propose a tracker that addresses the aforementioned problems and is capable of tracking hundreds of people efficiently. We formulate online crowd tracking as Binary Quadratic Programming. Our formulation employs each target's individual information in the form of appearance and motion as well as contextual cues in the form of neighborhood motion, spatial proximity and grouping, and solves detection and data association simultaneously. In order to solve the proposed quadratic optimization efficiently, where state-of-the-art commercial quadratic programming solvers fail to find the solution in a reasonable amount of time, we propose to use the most recent version of the Modified Frank-Wolfe algorithm, which takes advantage of SWAP-steps to speed up the optimization. We show that the proposed formulation can track hundreds of targets efficiently and improves state-of-the-art results by significant margins on eleven challenging high-density crowd sequences.
Multiple target tracking is one of the fundamental problems in computer vision. Most prior works have focused on low- and medium-density crowd sequences @cite_47 @cite_31 @cite_43 @cite_22 @cite_32 @cite_5 @cite_50 , where the goal is to design a better data association technique. Authors in @cite_22 formulate data association as a maximum-weight independent set problem. Many successful data-association based trackers utilize network flow to formulate tracking @cite_33 @cite_19 @cite_52 . The solution to network flow can be found efficiently using linear programming @cite_52 or dynamic programming @cite_19 . Authors in @cite_11 @cite_50 @cite_16 formulate data association as a maximum clique problem. All of these methods assume that the detections in each frame are already given. This requires having a good pre-trained object detector @cite_4 @cite_1 that works reasonably well.
{ "cite_N": [ "@cite_31", "@cite_4", "@cite_22", "@cite_33", "@cite_1", "@cite_32", "@cite_52", "@cite_43", "@cite_19", "@cite_50", "@cite_5", "@cite_47", "@cite_16", "@cite_11" ], "mid": [ "2055022211", "2168356304", "1966136723", "2148442626", "", "2115734113", "2171243491", "2163937424", "2016135469", "", "", "2116227586", "", "1932380673" ], "abstract": [ "Online multi-object tracking aims at producing complete tracks of multiple objects using the information accumulated up to the present moment. It still remains a difficult problem in complex scenes, because of frequent occlusion by clutter or other objects, similar appearances of different objects, and other factors. In this paper, we propose a robust online multi-object tracking method that can handle these difficulties effectively. We first propose the tracklet confidence using the detectability and continuity of a tracklet, and formulate a multi-object tracking problem based on the tracklet confidence. The multi-object tracking problem is then solved by associating tracklets in different ways according to their confidence values. Based on this strategy, tracklets sequentially grow with online-provided detections, and fragmented tracklets are linked up with others without any iterative and expensive associations. Here, for reliable association between tracklets and detections, we also propose a novel online learning method using an incremental linear discriminant analysis for discriminating the appearances of objects. By exploiting the proposed learning method, tracklet association can be successfully achieved even under severe occlusion. Experiments with challenging public datasets show distinct performance improvement over other batch and online tracking methods.", "We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. 
While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.", "This paper addresses the problem of simultaneous tracking of multiple targets in a video. We first apply object detectors to every video frame. Pairs of detection responses from every two consecutive frames are then used to build a graph of tracklets. The graph helps transitively link the best matching tracklets that do not violate hard and soft contextual constraints between the resulting tracks. We prove that this data association problem can be formulated as finding the maximum-weight independent set (MWIS) of the graph. We present a new, polynomial-time MWIS algorithm, and prove that it converges to an optimum. Similarity and contextual constraints between object detections, used for data association, are learned online from object appearance and motion properties. Long-term occlusions are addressed by iteratively repeating MWIS to hierarchically merge smaller tracks into longer ones. Our results demonstrate advantages of simultaneously accounting for soft and hard contextual constraints in multitarget tracking. We outperform the state of the art on the benchmark datasets.", "We present a novel approach for multi-object tracking which considers object detection and spacetime trajectory estimation as a coupled optimization problem. 
It is formulated in a hypothesis selection framework and builds upon a state-of-the-art pedestrian detector. At each time instant, it searches for the globally optimal set of spacetime trajectories which provides the best explanation for the current image and for all evidence collected so far, while satisfying the constraints that no two objects may occupy the same physical space, nor explain the same image pixels at any point in time. Successful trajectory hypotheses are fed back to guide object detection in future frames. The optimization procedure is kept efficient through incremental computation and conservative hypothesis pruning. The resulting approach can initialize automatically and track a large and varying number of persons over long periods and through complex scenes with clutter, occlusions, and large-scale background changes. Also, the global optimization framework allows our system to recover from mismatches and temporarily lost tracks. We demonstrate the feasibility of the proposed approach on several challenging video sequences.", "", "We present an iterative approximate solution to the multidimensional assignment problem under general cost functions. The method maintains a feasible solution at every step, and is guaranteed to converge. It is similar to the iterated conditional modes (ICM) algorithm, but applied at each step to a block of variables representing correspondences between two adjacent frames, with the optimal conditional mode being calculated exactly as the solution to a two-frame linear assignment problem. Experiments with ground-truthed trajectory data show that the method outperforms both network-flow data association and greedy recursive filtering using a constant velocity motion model.", "Multi-object tracking can be achieved by detecting objects in individual frames and then linking detections across frames. 
Such an approach can be made very robust to the occasional detection failure: If an object is not detected in a frame but is in previous and following ones, a correct trajectory will nevertheless be produced. By contrast, a false-positive detection in a few frames will be ignored. However, when dealing with a multiple target problem, the linking step results in a difficult optimization problem in the space of all possible families of trajectories. This is usually dealt with by sampling or greedy search based on variants of Dynamic Programming which can easily miss the global optimum. In this paper, we show that reformulating that step as a constrained flow optimization results in a convex problem. We take advantage of its particular structure to solve it using the k-shortest paths algorithm, which is very fast. This new approach is far simpler formally and algorithmically than existing techniques and lets us demonstrate excellent performance in two very different contexts.", "Multi-target tracking is an interesting but challenging task in computer vision field. Most previous data association based methods merely consider the relationships (e.g. appearance and motion pattern similarities) between detections in local limited temporal domain, leading to their difficulties in handling long-term occlusion and distinguishing the spatially close targets with similar appearance in crowded scenes. In this paper, a novel data association approach based on undirected hierarchical relation hypergraph is proposed, which formulates the tracking task as a hierarchical dense neighborhoods searching problem on the dynamically constructed undirected affinity graph. The relationships between different detections across the spatiotemporal domain are considered in a high-order way, which makes the tracker robust to the spatially close targets with similar appearance. 
Meanwhile, the hierarchical design of the optimization process makes our tracker more robust to long-term occlusion. Extensive experiments on various challenging datasets (i.e. PETS2009 dataset, ParkingLot), including both low and high density sequences, demonstrate that the proposed method performs favorably against the state-of-the-art methods.", "We analyze the computational problem of multi-object tracking in video sequences. We formulate the problem using a cost function that requires estimating the number of tracks, as well as their birth and death states. We show that the global solution can be obtained with a greedy algorithm that sequentially instantiates tracks using shortest path computations on a flow network. Greedy algorithms allow one to embed pre-processing steps, such as nonmax suppression, within the tracking algorithm. Furthermore, we give a near-optimal algorithm based on dynamic programming which runs in time linear in the number of objects and linear in the sequence length. Our algorithms are fast, simple, and scalable, allowing us to process dense input data. This results in state-of-the-art performance.", "", "", "Given a set of plausible detections, detected at each time instant independently, we investigate how to associate them across time. This is done by propagating labels on a set of graphs that capture how the spatio-temporal and the appearance cues promote the assignment of identical or distinct labels to a pair of nodes. The graph construction is driven by the locally linear embedding (LLE) of either the spatio-temporal or the appearance features associated to the detections. Interestingly, the neighborhood of a node in each appearance graph is defined to include all nodes for which the appearance feature is available (except the ones that coexist at the same time). 
This allows us to connect the nodes that share the same appearance even if they are temporally distant, which gives our framework the uncommon ability to exploit appearance features that are available only sporadically along the sequence of detections. Once the graphs have been defined, the multi-object tracking is formulated as the problem of finding a label assignment that is consistent with the constraints captured by each of the graphs. This results in a difference-of-convex program that can be efficiently solved. Experiments are performed on a basketball dataset and several well-known pedestrian datasets in order to validate the effectiveness of the proposed solution.", "", "Data association is the backbone of many multiple object tracking (MOT) methods. In this paper we formulate data association as a Generalized Maximum Multi Clique problem (GMMCP). We show that this is the ideal case of modeling tracking in a real-world scenario where all the pairwise relationships between targets in a batch of frames are taken into account. Previous works assume a simplified version of our tracker either in problem formulation or problem optimization. However, we propose a solution using GMMCP where no simplification is assumed in either step. We show that the NP-hard problem of GMMCP can be formulated through a Binary-Integer Program where for small and medium size MOT problems the solution can be found efficiently. We further propose a speed-up method, employing Aggregated Dummy Nodes for modeling occlusion and miss-detection, which reduces the size of the input graph without using any heuristics. We show that, using the speed-up method, our tracker lends itself to real-time implementation, which is plausible in many applications. We evaluated our tracker on six challenging sequences of Town Center, TUD-Crossing, TUD-Stadtmitte, Parking-lot 1, Parking-lot 2 and Parking-lot pizza and show favorable improvement against the state of the art." ] }
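In their simplest form, the detection-linking steps surveyed in these abstracts reduce to a per-frame assignment between existing tracks and new detections. The greedy nearest-neighbor sketch below is only a stand-in for the network-flow and clique formulations cited above; the positions and the gating threshold are invented for illustration.

```python
# Greedy frame-to-frame data association: match each track to the
# closest unclaimed detection, subject to a gating threshold. Real
# trackers replace this with network-flow or clique optimization.

def associate(tracks, detections, gate=2.0):
    """tracks, detections: lists of (x, y) positions. Returns {track_idx: det_idx}."""
    pairs = []
    for t, (tx, ty) in enumerate(tracks):
        for d, (dx, dy) in enumerate(detections):
            dist = ((tx - dx) ** 2 + (ty - dy) ** 2) ** 0.5
            if dist <= gate:  # gating: ignore implausibly distant pairs
                pairs.append((dist, t, d))
    pairs.sort()  # cheapest associations claimed first
    matches, used_t, used_d = {}, set(), set()
    for dist, t, d in pairs:
        if t not in used_t and d not in used_d:
            matches[t] = d
            used_t.add(t)
            used_d.add(d)
    return matches

# Two tracks, three detections; the third detection starts a new track.
tracks = [(0.0, 0.0), (5.0, 5.0)]
detections = [(4.8, 5.1), (0.2, 0.1), (9.0, 9.0)]
matches = associate(tracks, detections)
```

Unmatched detections (here, the one at (9, 9)) would spawn new tracks, and unmatched tracks would be marked occluded; those bookkeeping steps are omitted.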
1603.09240
2327816982
Multi-object tracking has been studied for decades. However, when it comes to tracking pedestrians in extremely crowded scenes, we are limited to only few works. This is an important problem which gives rise to several challenges. Pre-trained object detectors fail to localize targets in crowded sequences. This consequently limits the use of data-association based multi-target tracking methods which rely on the outcome of an object detector. Additionally, the small apparent target size makes it challenging to extract features to discriminate targets from their surroundings. Finally, the large number of targets greatly increases computational complexity which in turn makes it hard to extend existing multi-target tracking approaches to high-density crowd scenarios. In this paper, we propose a tracker that addresses the aforementioned problems and is capable of tracking hundreds of people efficiently. We formulate online crowd tracking as Binary Quadratic Programing. Our formulation employs target's individual information in the form of appearance and motion as well as contextual cues in the form of neighborhood motion, spatial proximity and grouping, and solves detection and data association simultaneously. In order to solve the proposed quadratic optimization efficiently, where state-of art commercial quadratic programing solvers fail to find the solution in a reasonable amount of time, we propose to use the most recent version of the Modified Frank Wolfe algorithm, which takes advantage of SWAP-steps to speed up the optimization. We show that the proposed formulation can track hundreds of targets efficiently and improves state-of-art results by significant margins on eleven challenging high density crowd sequences.
Our binary quadratic formulation consists of three linear terms and two quadratic terms. Two linear terms capture the properties of the individual tracks. The third linear term, as well as the quadratic terms, is responsible for modeling interactions between the targets. We show that the proposed quadratic objective function can be solved efficiently using an accelerated version of the Modified Frank-Wolfe algorithm, which takes advantage of SWAP steps for further speed-up @cite_29 .
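The away-step mechanism underlying the Modified Frank-Wolfe method can be sketched on a small quadratic program over the probability simplex. This is a plain away-steps variant with exact line search, not the SWAP-step version the paper uses, and the objective (projecting a point onto the simplex) is an invented example.

```python
# Frank-Wolfe with away steps, minimizing the convex quadratic
# f(x) = ||x - c||^2 over the probability simplex. On the simplex the
# barycentric weights of the active vertices are the coordinates of x.

def fw_away(c, iters=200):
    n = len(c)
    x = [1.0 / n] * n                                      # simplex barycenter
    for _ in range(iters):
        grad = [2 * (x[i] - c[i]) for i in range(n)]
        s = min(range(n), key=lambda i: grad[i])           # Frank-Wolfe vertex
        active = [i for i in range(n) if x[i] > 1e-12]
        v = max(active, key=lambda i: grad[i])             # away vertex
        gx = sum(grad[i] * x[i] for i in range(n))
        fw_gap, away_gap = gx - grad[s], grad[v] - gx
        if fw_gap >= away_gap:                             # toward step
            d = [(1.0 if i == s else 0.0) - x[i] for i in range(n)]
            gamma_max = 1.0
        else:                                              # away step
            d = [x[i] - (1.0 if i == v else 0.0) for i in range(n)]
            gamma_max = x[v] / (1.0 - x[v]) if x[v] < 1.0 else 1.0
        dd = sum(di * di for di in d)
        if dd == 0:
            break
        # Exact line search for a quadratic: gamma = -<grad, d> / (2 ||d||^2).
        gamma = -sum(grad[i] * d[i] for i in range(n)) / (2 * dd)
        gamma = max(0.0, min(gamma, gamma_max))
        x = [x[i] + gamma * d[i] for i in range(n)]
    return x

# c already lies in the simplex, so the minimizer is c itself.
x = fw_away([0.6, 0.3, 0.1])
```

Away steps let the iterate shrink the weight of a bad vertex directly instead of zigzagging toward it, which is what restores linear convergence for strongly convex objectives.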
{ "cite_N": [ "@cite_29" ], "mid": [ "2964052549" ], "abstract": [ "Recently, there has been a renewed interest in the machine learning community for variants of a sparse greedy approximation procedure for concave optimization known as the Frank-Wolfe (FW) method. In particular, this procedure has been successfully applied to train large-scale instances of non-linear Support Vector Machines (SVMs). Specializing FW to SVM training has allowed to obtain efficient algorithms, but also important theoretical results, including convergence analysis of training algorithms and new characterizations of model sparsity.In this paper, we present and analyze a novel variant of the FW method based on a new way to perform away steps, a classic strategy used to accelerate the convergence of the basic FW procedure. Our formulation and analysis is focused on a general concave maximization problem on the simplex. However, the specialization of our algorithm to quadratic forms is strongly related to some classic methods in computational geometry, namely the Gilbert and MDM algorithms.On the theoretical side, we demonstrate that the method matches the guarantees in terms of convergence rate and number of iterations obtained by using classic away steps. In particular, the method enjoys a linear rate of convergence, a result that has been recently proved for MDM on quadratic forms.On the practical side, we provide experiments on several classification datasets, and evaluate the results using statistical tests. Experiments show that our method is faster than the FW method with classic away steps, and works well even in the cases in which classic away steps slow down the algorithm. Furthermore, these improvements are obtained without sacrificing the predictive accuracy of the obtained SVM model." ] }
1603.09194
2335653173
Iterated applications of belief change operators are essential for scenarios such as ontology evolution, where new information is not presented all at once but only in piecemeal fashion within a sequence. I discuss iterated applications of so-called reinterpretation operators that trace conflicts between ontologies back to the ambiguous use of symbols and that provide conflict resolution strategies with bridging axioms. The discussion centers on adaptations of the classical iteration postulates according to Darwiche and Pearl. The main result of the paper is that reinterpretation operators fulfill the postulates for sequences containing only atomic triggers. For complex triggers, fulfillment is not guaranteed, and indeed there are different reasons, specific to each postulate, why they should not be fulfilled in the particular scenario of revising well-developed ontologies.
The reinterpretation operators are constructed in a similar fashion as those by Delgrande and Schaub @cite_7 but differ in that they are defined not only for propositional logic but also for DLs (and FOLs). Moreover, I consider different stronger forms of bridging axioms than the implications of @cite_7 .
{ "cite_N": [ "@cite_7" ], "mid": [ "2028246200" ], "abstract": [ "This paper presents a general, consistency-based framework for expressing belief change. The framework has good formal properties while being well-suited for implementation. For belief revision, informally, in revising a knowledge base K by a sentence α, we begin with α and include as much of K as consistently possible. This is done by expressing K and α in disjoint languages, asserting that the languages agree on the truth values of corresponding atoms wherever consistently possible, and then re-expressing the result in the original language of K. There may be more than one way in which the languages of K and α can be so correlated: in choice revision, one such \"extension\" represents the revised state; alternately (skeptical) revision consists of the intersection of all such extensions. Contraction is similarly defined although, interestingly, it is not interdefinable with revision.The framework is general and flexible. For example, one could go on and express other belief change operations such as update and erasure, and the merging of knowledge bases. Further, the framework allows the incorporation of static and dynamic integrity constraints. The approach is well-suited for implementation: belief change can be equivalently expressed in terms of a finite knowledge base; and the scope of a belief change operation can be restricted to just those propositions common to the knowledge base and sentence for change. We give a high-level algorithm implementing the procedure, and an expression of the approach in Default Logic. Lastly, we briefly discuss two implementations of the approach." ] }
1603.09194
2335653173
Iterated applications of belief change operators are essential for different scenarios such as that of ontology evolution, where new information is not presented at once but only piecemeal, within a sequence. I discuss iterated applications of so-called reinterpretation operators that trace conflicts between ontologies back to the ambiguous use of symbols and that provide conflict resolution strategies with bridging axioms. The discussion centers on adaptations of the classical iteration postulates of Darwiche and Pearl. The main result of the paper is that reinterpretation operators fulfill the postulates for sequences containing only atomic triggers. For complex triggers, fulfillment is not guaranteed, and indeed for the different postulates there are different reasons why they should not be fulfilled in the particular scenario of ontology revision with well-developed ontologies.
Bridging axioms are special mappings that are used in the reinterpretation operator as auxiliary means to implement ontology revision. One may also consider mappings by themselves as the objects of revision @cite_8 @cite_15 . A particularly interesting case of mapping revision comes into play with mappings used in the ontology based data access paradigm @cite_0 . These mappings are meant to lift data from relational DBs to the ontology level, thereby mapping between the closed world of data and the open world of ontologies. In this setting different forms of inconsistencies induced by the mappings can be defined (such as local vs. global inconsistency) and, based on this, mapping evolution ensuring one form of consistency can be investigated @cite_16 .
{ "cite_N": [ "@cite_0", "@cite_15", "@cite_16", "@cite_8" ], "mid": [ "1528446492", "2148216955", "", "1527752276" ], "abstract": [ "Ontologies provide a conceptualization of a domain of interest. Nowadays, they are typically represented in terms of Description Logics (DLs), and are seen as the key technology used to describe the semantics of information at various sites. The idea of using ontologies as a conceptual view over data repositories is becoming more and more popular, but for it to become widespread in standard applications, it is fundamental that the conceptual layer through which the underlying data layer is accessed does not introduce a significant overhead in dealing with the data. Based on these observations, in recent years a family of DLs, called DL-Lite, has been proposed, which is specifically tailored to capture basic ontology and conceptual data modeling languages, while keeping low complexity of reasoning and of answering complex queries, in particular when the complexity is measured w.r.t. the size of the data. In this article, we present a detailed account of the major results that have been achieved for the DL-Lite family. Specifically, we concentrate on @math , an expressive member of this family, present algorithms for reasoning and query answering over @math ontologies, and analyze their computational complexity. Such algorithms exploit the distinguishing feature of the logics in the DL-Lite family, namely that ontology reasoning and answering unions of conjunctive queries is first-order rewritable, i.e., it can be delegated to a relational database management system. We analyze also the effect of extending the logic with typical DL constructs, and show that for most such extensions, the nice computational properties of the DL-Lite family are lost. 
We address then the problem of accessing relational data sources through an ontology, and present a solution to the notorious impedance mismatch between the abstract objects in the ontology and the values appearing in data sources. The solution exploits suitable mappings that create the objects in the ontology from the appropriate values extracted from the data sources. Finally, we discuss the QUONTO system that implements all the above mentioned solutions and is wrapped by the DIG-QUONTO server, thus providing a standard DL reasoner for @math with extended functionality to access external data sources.", "Finding correct semantic correspondences between heterogeneous ontologies is one of the most challenging problems in the area of semantic web technologies. As manually constructing such mappings is not feasible in realistic scenarios, a number of automatic matching tools have been developed that propose mappings based on general heuristics. As these heuristics often produce incorrect results, a manual revision is inevitable in order to guarantee the quality of generated mappings. Experiences with benchmarking matching systems revealed that the manual revision of mappings is still a very difficult problem because it has to take the semantics of the ontologies as well as interactions between mappings into account. In this article, we propose methods for supporting human experts in the task of revising automatically created mappings. In particular, we present non-standard reasoning methods for detecting and propagating implications of expert decisions on the correctness of a mapping.", "", "Ontology matching is one of the key research topics in the field of the Semantic Web. There are many matching systems that generate mappings between different ontologies either automatically or semi-automatically. However, the mappings generated by these systems may be inconsistent with the ontologies. 
Several approaches have been proposed to deal with the inconsistencies between mappings and ontologies. This problem is often called a mapping revision problem, as the ontologies are assumed to be correct, whereas the mappings are repaired when resolving the inconsistencies. In this paper, we first propose a conflict-based mapping revision operator and show that it can be characterized by two logical postulates adapted from some existing postulates for belief base revision. We then provide an algorithm for iterative mapping revision by using an ontology revision operator and show that this algorithm defines a conflict-based mapping revision operator. Three concrete ontology revision operators are given to instantiate the iterative algorithm, which result in three different mapping revision algorithms. We implement these algorithms and provide some preliminary but interesting evaluation results." ] }
1603.09194
2335653173
Iterated applications of belief change operators are essential for different scenarios such as that of ontology evolution, where new information is not presented at once but only piecemeal, within a sequence. I discuss iterated applications of so-called reinterpretation operators that trace conflicts between ontologies back to the ambiguous use of symbols and that provide conflict resolution strategies with bridging axioms. The discussion centers on adaptations of the classical iteration postulates of Darwiche and Pearl. The main result of the paper is that reinterpretation operators fulfill the postulates for sequences containing only atomic triggers. For complex triggers, fulfillment is not guaranteed, and indeed for the different postulates there are different reasons why they should not be fulfilled in the particular scenario of ontology revision with well-developed ontologies.
In this paper I used reinterpretation operators as change operators on ontologies described in DLs. There are various other approaches that use the ideas of belief revision for different forms of ontology change, such as ontology evolution over DL-Lite ontologies @cite_1 or ontology debugging @cite_20 . As the consequence operator over DLs does not fulfill all preconditions assumed by AGM @cite_13 , one cannot directly transfer AGM constructions and ideas one-to-one to the DL setting, as noted, e.g., by Flouris and colleagues @cite_11 and dealt with in more depth for non-classical logics by Ribeiro @cite_19 . For the definition of the reinterpretation operators this constraint is not essential. Nonetheless, it leads to constraints in providing appropriate counterexamples: namely, ontologies expressible in the DL at hand.
{ "cite_N": [ "@cite_1", "@cite_19", "@cite_13", "@cite_20", "@cite_11" ], "mid": [ "2141529801", "1978847558", "2149420462", "2150683986", "" ], "abstract": [ "Evolution of Knowledge Bases (KBs) expressed in Description Logics (DLs) has gained a lot of attention lately. Recent studies on the topic have mostly focused on so-called model-based approaches (MBAs), where the evolution of a KB results in a set of models. For KBs expressed in tractable DLs, such as those of the DL-Lite family, which we consider here, it has been shown that one faces inexpressibility of evolution, i.e., the result of evolution of a DL-Lite KB in general cannot be expressed in DL-Lite, in other words, DL-Lite is not closed under evolution. What is still missing in these studies is a thorough understanding of various important aspects of the evolution problem for DL-Lite KBs: Which fragments of DL-Lite are closed under evolution? What causes the inexpressibility? Can one approximate evolution in DL-Lite, and if yes, how? This work provides some understanding of these issues for an important class of MBAs, which cover the cases of both update and revision. We describe what causes inexpressibility, and we propose techniques (based on what we call prototypes) that help to approximate evolution under the well-known approach by Winslett, which is inexpressible in DL-Lite. We also identify a fragment of DL-Lite closed under evolution, and for this fragment we provide polynomial-time algorithms to compute or approximate evolution results for various MBAs.", "In this article, we propose a belief revision approach for families of (non-classical) logics whose semantics are first-order axiomatisable. Given any such (non-classical) logic , the approach enables the definition of belief revision operators for , in terms of a belief revision operation satisfying the postulates for revision theory proposed by Alchourron, Gardenfors and Makinson (AGM revision, (1985)). 
The approach is illustrated by considering the modal logic K , Belnap's four-valued logic, and Łukasiewicz's many-valued logic. In addition, we present a general methodology to translate algebraic logics into classical logic. For the examples provided, we analyse in what circumstances the properties of the AGM revision are preserved and discuss the advantages of the approach from both theoretical and practical viewpoints.", "This paper extends earlier work by its authors on formal aspects of the processes of contracting a theory to eliminate a proposition and revising a theory to introduce a proposition. In the course of the earlier work, Gardenfors developed general postulates of a more or less equational nature for such processes, whilst Alchourron and Makinson studied the particular case of contraction functions that are maximal, in the sense of yielding a maximal subset of the theory (or alternatively, of one of its axiomatic bases), that fails to imply the proposition being eliminated. In the present paper, the authors study a broader class, including contraction functions that may be less than maximal. Specifically, they investigate “partial meet contraction functions”, which are defined to yield the intersection of some nonempty family of maximal subsets of the theory that fail to imply the proposition being eliminated. Basic properties of these functions are established: it is shown in particular that they satisfy the Gardenfors postulates, and moreover that they are sufficiently general to provide a representation theorem for those postulates. Some special classes of partial meet contraction functions, notably those that are “relational” and “transitively relational”, are studied in detail, and their connections with certain “supplementary postulates” of Gardenfors investigated, with a further representation theorem established.", "Belief Revision deals with the problem of adding new information to a knowledge base in a consistent way. 
Ontology Debugging, on the other hand, aims to find the axioms in a terminological knowledge base which caused the base to become inconsistent. In this article, we propose a belief revision approach in order to find and repair inconsistencies in ontologies represented in some description logic (DL). As the usual belief revision operators cannot be directly applied to DLs, we propose new operators that can be used with more general logics and show that, in particular, they can be applied to the logics underlying OWL-DL and Lite.", "" ] }
1603.09133
2330428333
We propose a new approximate factorization for solving linear systems with symmetric positive definite sparse matrices. In a nutshell, the algorithm applies block Gaussian elimination hierarchically and additionally compresses the fill-in. The systems that admit efficient compression of the fill-in mostly arise from discretization of partial differential equations. We show that the resulting factorization can be used as an efficient preconditioner and compare the proposed approach with state-of-the-art direct and iterative solvers.
The main difference between the CE-algorithm and the classical direct sparse LU-like solvers (e.g. block Gaussian elimination @cite_23 , the multifrontal method @cite_18 and others @cite_19 @cite_20 @cite_29 @cite_16 ) is that in our algorithm the fill-in growth is controlled while maintaining the accuracy, thanks to the additional compression procedure. This leads to a big advantage in memory usage and probably makes the CE-algorithm asymptotically faster, but this point requires additional analysis. Note that references @cite_18 and @cite_16 use compressed representations (i.e. they are not just using the graph structure of the matrix, as is the case for the other solvers).
{ "cite_N": [ "@cite_18", "@cite_29", "@cite_19", "@cite_23", "@cite_16", "@cite_20" ], "mid": [ "", "2089024363", "2051917325", "76288046", "1008564288", "2094542090" ], "abstract": [ "", "CHOLMOD is a set of routines for factorizing sparse symmetric positive definite matrices of the form A or AAT, updating downdating a sparse Cholesky factorization, solving linear systems, updating downdating the solution to the triangular system Lx = b, and many other sparse matrix functions for both symmetric and unsymmetric matrices. Its supernodal Cholesky factorization relies on LAPACK and the Level-3 BLAS, and obtains a substantial fraction of the peak performance of the BLAS. Both real and complex matrices are supported. CHOLMOD is written in ANSI ISO C, with both C and MATLABTM interfaces. It appears in MATLAB 7.2 as x = A when A is sparse symmetric positive definite, as well as in several other sparse matrix functions.", "An ANSI C code for sparse LU factorization is presented that combines a column pre-ordering strategy with a right-looking unsymmetric-pattern multifrontal numerical factorization. The pre-ordering and symbolic analysis phase computes an upper bound on fill-in, work, and memory usage during the subsequent numerical factorization. User-callable routines are provided for ordering and analyzing a sparse matrix, computing the numerical factorization, solving a system with the LU factors, transposing and permuting a sparse matrix, and converting between sparse matrix representations. The simple user interface shields the user from the details of the complex sparse factorization data structures by returning simple handles to opaque objects. Additional user-callable routines are provided for printing and extracting the contents of these opaque objects. An even simpler way to use the package is through its MATLAB interface. 
UMFPACK is incorporated as a built-in operator in MATLAB 6.5 as x = A\b when A is sparse and unsymmetric.", "Fast direct methods incorporating marching techniques are considered from the viewpoint of sparse Gaussian elimination. It is shown that the algorithms correspond to the nonstandard block backsolution of particular LU decompositions.", "Direct factorization methods for the solution of large, sparse linear systems that arise from PDE discretizations are robust, but typically show poor time and memory scalability for large systems. In this paper, we describe an efficient sparse, rank-structured Cholesky algorithm for solution of the positive definite linear system @math when @math comes from a discretized partial-differential equation. Our approach combines the efficient memory access patterns of conventional supernodal Cholesky algorithms with the memory efficiency of rank-structured direct solvers. For several test problems arising from PDE discretizations, our method takes less memory than standard sparse Cholesky solvers and less wall-clock time than standard preconditioned iterations.", "This paper presents a parallel sparse Cholesky factorization algorithm for shared-memory MIMD multiprocessors. The algorithm is particularly well suited for vector supercomputers with multiple processors, such as the Cray Y-MP. The new algorithm is a straightforward parallelization of the left-looking supernodal sparse Cholesky factorization algorithm. Like its sequential predecessor, it improves performance by reducing indirect addressing and memory traffic. Experimental results on a Cray Y-MP demonstrate the effectiveness of the new algorithm. On eight processors of a Cray Y-MP, the new routine performs the factorization at rates exceeding one Gflop for several test problems from the Harwell–Boeing sparse matrix collection." ] }
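The contrast drawn above, classical block elimination versus elimination with compressed fill-in, can be caricatured in a few lines of NumPy: take one Schur-complement step and replace the dense fill-in block by a truncated SVD. This is only a sketch of the general idea under made-up sizes, not the CE-algorithm itself; for the 1D Laplacian below the fill-in block happens to be exactly rank one, so rank-1 compression is lossless:

```python
import numpy as np

def schur_eliminate(A, k, rank):
    # One step of block Gaussian elimination on an SPD matrix
    #   A = [[A11, A12], [A21, A22]],  S = A22 - A21 A11^{-1} A12,
    # with the fill-in update A21 A11^{-1} A12 replaced by a truncated-SVD
    # approximation of the given rank (the compression idea, in caricature).
    A11, A12 = A[:k, :k], A[:k, k:]
    A21, A22 = A[k:, :k], A[k:, k:]
    F = A21 @ np.linalg.solve(A11, A12)            # exact fill-in block
    U, s, Vt = np.linalg.svd(F)
    F_lr = (U[:, :rank] * s[:rank]) @ Vt[:rank]    # low-rank compression
    return A22 - F_lr

# SPD model problem: 1D Laplacian (tridiagonal), whose fill-in block
# couples the two halves through a single interface entry.
n, k = 12, 6
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
S_exact = A[k:, k:] - A[k:, :k] @ np.linalg.solve(A[:k, :k], A[:k, k:])
S_approx = schur_eliminate(A, k, rank=1)
err = np.linalg.norm(S_exact - S_approx) / np.linalg.norm(S_exact)
```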
1603.09133
2330428333
We propose a new approximate factorization for solving linear systems with symmetric positive definite sparse matrices. In a nutshell, the algorithm applies block Gaussian elimination hierarchically and additionally compresses the fill-in. The systems that admit efficient compression of the fill-in mostly arise from discretization of partial differential equations. We show that the resulting factorization can be used as an efficient preconditioner and compare the proposed approach with state-of-the-art direct and iterative solvers.
Recently, so-called have attracted a lot of attention. To name a few references: @cite_2 @cite_24 @cite_4 @cite_25 @cite_9 . As a subclass of such solvers, @math direct solvers @cite_15 @cite_13 @cite_27 have been introduced. These solvers compute an approximate sparse LU-like factorization of HSS-matrices. The simplicity of this structure in comparison to the general @math structure allows for a very efficient implementation; although these matrices are essentially 1D- @math matrices, so that optimal linear complexity is not possible for matrices that are discretizations of 2D/3D PDEs, the running times can be quite impressive.
{ "cite_N": [ "@cite_4", "@cite_9", "@cite_24", "@cite_27", "@cite_2", "@cite_15", "@cite_13", "@cite_25" ], "mid": [ "", "2009272201", "1973786815", "2035247783", "2141719776", "", "1712535590", "2951856680" ], "abstract": [ "", "We describe an algorithm for the direct solution of systems of linear algebraic equations associated with the discretization of boundary integral equations with non-oscillatory kernels in two dimensions. The algorithm is ''fast'' in the sense that its asymptotic complexity is O(n), where n is the number of nodes in the discretization. Unlike previous fast techniques based on iterative solvers, the present algorithm directly constructs a compressed factorization of the inverse of the matrix; thus it is suitable for problems involving relatively ill-conditioned matrices, and is particularly efficient in situations involving multiple right hand sides. The performance of the scheme is illustrated with several numerical examples. rformance of the scheme is illustrated with several numerical examples. ples.", "We consider an algebraic representation that is useful for matrices with off-diagonal blocks of low numerical rank. A fast and stable solver for linear systems of equations in which the coefficient matrix has this representation is presented. We also present a fast algorithm to construct the hierarchically semiseparable representation in the general case.", "Abstract We describe an algorithm for the rapid direct solution of linear algebraic systems arising from the discretization of boundary integral equations of potential theory in two dimensions. 
The algorithm is combined with a scheme that adaptively rearranges the parameterization of the boundary in order to minimize the ranks of the off-diagonal blocks in the discretized operator, thus obviating the need for the user to supply a parameterization r of the boundary for which the distance ‖r(s) − r(t)‖ between two points on the boundary is related to their corresponding distance |s − t| in the parameter space. The algorithm has an asymptotic complexity of O(N log^2 N), where N is the number of nodes in the discretization. The performance of the algorithm is illustrated with several numerical examples.", "In this paper we present a fast direct solver for certain classes of dense structured linear systems that works by first converting the given dense system to a larger system of block sparse equations and then uses standard sparse direct solvers. The kind of matrix structures that we consider are induced by numerical low rank in the off-diagonal blocks of the matrix and are related to the structures exploited by the fast multipole method (FMM) of Greengard and Rokhlin. The special structure that we exploit in this paper is captured by what we term the hierarchically semiseparable (HSS) representation of a matrix. Numerical experiments indicate that the method is probably backward stable.", "", "This article presents a fast solver for the dense \"frontal\" matrices that arise from the multifrontal sparse elimination process of 3D elliptic PDEs. The solver relies on the fact that these matrices can be efficiently represented as a hierarchically off-diagonal low-rank (HODLR) matrix. To construct the low-rank approximation of the off-diagonal blocks, we propose a new pseudo-skeleton scheme, the boundary distance low-rank approximation, that picks rows and columns based on the location of their corresponding vertices in the sparse matrix graph.
We compare this new low-rank approximation method to the adaptive cross approximation (ACA) algorithm and show that it achieves better speedup specially for unstructured meshes. Using the HODLR direct solver as a preconditioner (with a low tolerance) to the GMRES iterative scheme, we can reach machine accuracy much faster than a conventional LU solver. Numerical benchmarks are provided for frontal matrices arising from 3D finite element problems corresponding to a wide range of applications.", "We present a distributed-memory library for computations with dense structured matrices. A matrix is considered structured if its off-diagonal blocks can be approximated by a rank-deficient matrix with low numerical rank. Here, we use Hierarchically Semi-Separable representations (HSS). Such matrices appear in many applications, e.g., finite element methods, boundary element methods, etc. Exploiting this structure allows for fast solution of linear systems and or fast computation of matrix-vector products, which are the two main building blocks of matrix computations. The compression algorithm that we use, that computes the HSS form of an input dense matrix, relies on randomized sampling with a novel adaptive sampling mechanism. We discuss the parallelization of this algorithm and also present the parallelization of structured matrix-vector product, structured factorization and solution routines. The efficiency of the approach is demonstrated on large problems from different academic and industrial applications, on up to 8,000 cores. This work is part of a more global effort, the STRUMPACK (STRUctured Matrices PACKage) software package for computations with sparse and dense structured matrices. Hence, although useful on their own right, the routines also represent a step in the direction of a distributed-memory sparse solver." ] }
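The property the HSS/HODLR abstracts above rely on — rapid singular-value decay of off-diagonal blocks for smooth, non-oscillatory kernels — is easy to check numerically. The kernel 1/(1 + |x − y|) below is a made-up smooth model, not one from the cited papers:

```python
import numpy as np

# Sample a smooth kernel on [0, 1] and inspect an off-diagonal block:
# its singular values decay exponentially, so a small numerical rank
# suffices -- exactly what HSS/HODLR factorizations exploit.
n = 64
x = np.linspace(0.0, 1.0, n)
K = 1.0 / (1.0 + np.abs(x[:, None] - x[None, :]))
B = K[: n // 2, n // 2 :]                      # off-diagonal block
s = np.linalg.svd(B, compute_uv=False)
numerical_rank = int(np.sum(s > 1e-8 * s[0]))  # rank at relative tol 1e-8
```

The diagonal blocks, which touch the kernel's kink at x = y, stay essentially full-rank; hierarchical formats therefore recurse on them while compressing only off-diagonal blocks.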
1603.08869
2333296430
Hierarchical Reinforcement Learning (HRL) exploits temporal abstraction to solve large Markov Decision Processes (MDP) and provide transferable subtask policies. In this paper, we introduce an off-policy HRL algorithm: Hierarchical Q-value Iteration (HQI). We show that it is possible to effectively learn recursive optimal policies for any valid hierarchical decomposition of the original MDP, given a fixed dataset collected from a flat stochastic behavioral policy. We first formally prove the convergence of the algorithm for tabular MDP. Then our experiments on the Taxi domain show that HQI converges faster than a flat Q-value Iteration and enjoys easy state abstraction. Also, we demonstrate that our algorithm is able to learn optimal policies for different hierarchical structures from the same fixed dataset, which enables model comparison without recollecting data.
All of the above work assumes that the agent can interact with the world while learning. However, in real-world applications that need HRL, it is usually very expensive to collect data and terrible failures are not allowed during operation. This forbids the usage of online learning algorithms that could potentially perform horribly in the early learning stage. To the best of our knowledge, there is little prior work @cite_11 on developing batch learning algorithms that allow a hierarchical SMDP to be trained from an existing data set collected from a stochastic behavior policy. We believe that such algorithms are valuable for applying HRL in complex practical domains.
{ "cite_N": [ "@cite_11" ], "mid": [ "1534480106" ], "abstract": [ "In experimenting with off-policy temporal difference (TD) methods in hierarchical reinforcement learning (HRL) systems, we have observed unwanted on-policy learning under reproducible conditions. Here we present modifications to several TD methods that prevent unintentional on-policy learning from occurring. These modifications create a tension between exploration and learning. Traditional TD methods require commitment to finishing subtasks without exploration in order to update Q-values for early actions with high probability. One-step intra-option learning and temporal second difference traces (TSDT) do not suffer from this limitation. We demonstrate that our HRL system is efficient without commitment to completion of subtasks in a cliff-walking domain, contrary to a widespread claim in the literature that it is critical for efficiency of learning. Furthermore, decreasing commitment as exploration progresses is shown to improve both online performance and the resultant policy in the taxicab domain, opening a new avenue for research into when it is more beneficial to continue with the current subtask or to replan." ] }
1603.09000
2317297514
Multiple hypothesis testing is a core problem in statistical inference and arises in almost every scientific field. Given a set of null hypotheses @math , Benjamini and Hochberg introduced the false discovery rate (FDR), which is the expected proportion of false positives among rejected null hypotheses, and proposed a testing procedure that controls FDR below a pre-assigned significance level. Nowadays FDR is the criterion of choice for large scale multiple hypothesis testing. In this paper we consider the problem of controlling FDR in an "online manner". Concretely, we consider an ordered --possibly infinite-- sequence of null hypotheses @math where, at each step @math , the statistician must decide whether to reject hypothesis @math having access only to the previous decisions. This model was introduced by Foster and Stine. We study a class of "generalized alpha-investing" procedures and prove that any rule in this class controls online FDR, provided @math -values corresponding to true nulls are independent from the other @math -values. (Earlier work only established mFDR control.) Next, we obtain conditions under which generalized alpha-investing controls FDR in the presence of general @math -values dependencies. Finally, we develop a modified set of procedures that also allow to control the false discovery exceedance (the tail of the proportion of false discoveries). Numerical simulations and analytical results indicate that online procedures do not incur a large loss in statistical power with respect to offline approaches, such as Benjamini-Hochberg.
Building upon alpha-investing procedures, @cite_11 develops VIF, a method for feature selection in large regression problems. VIF is accurate and computationally very efficient; it uses a one-pass search over the pool of features and applies alpha-investing to test each feature for addition to the model. VIF regression avoids overfitting due to the property that alpha-investing controls @math . Similarly, one can incorporate @math in VIF regression to perform fast online feature selection and provably avoid overfitting.
{ "cite_N": [ "@cite_11" ], "mid": [ "2150859718" ], "abstract": [ "We propose a fast and accurate algorithm, VIF regression, for doing feature selection in large regression problems. VIF regression is extremely fast; it uses a one-pass search over the predictors and a computationally efficient method of testing each potential predictor for addition to the model. VIF regression provably avoids model overfitting, controlling the marginal false discovery rate. Numerical results show that it is much faster than any other published algorithm for regression with feature selection and is as accurate as the best of the slower algorithms." ] }
1603.09000
2317297514
Multiple hypothesis testing is a core problem in statistical inference and arises in almost every scientific field. Given a set of null hypotheses @math , Benjamini and Hochberg introduced the false discovery rate (FDR), which is the expected proportion of false positives among rejected null hypotheses, and proposed a testing procedure that controls FDR below a pre-assigned significance level. Nowadays FDR is the criterion of choice for large scale multiple hypothesis testing. In this paper we consider the problem of controlling FDR in an "online manner". Concretely, we consider an ordered --possibly infinite-- sequence of null hypotheses @math where, at each step @math , the statistician must decide whether to reject hypothesis @math having access only to the previous decisions. This model was introduced by Foster and Stine. We study a class of "generalized alpha-investing" procedures and prove that any rule in this class controls online FDR, provided @math -values corresponding to true nulls are independent from the other @math -values. (Earlier work only established mFDR control.) Next, we obtain conditions under which generalized alpha-investing controls FDR in the presence of general @math -values dependencies. Finally, we develop a modified set of procedures that also allow to control the false discovery exceedance (the tail of the proportion of false discoveries). Numerical simulations and analytical results indicate that online procedures do not incur a large loss in statistical power with respect to offline approaches, such as Benjamini-Hochberg.
There has been significant interest over the last two years in developing hypothesis testing procedures for high-dimensional regression, especially in conjunction with sparsity-seeking methods. Procedures for computing @math -values of low-dimensional coordinates were developed in @cite_0 @cite_20 @cite_7 @cite_27 @cite_17 . Sequential and selective inference methods were proposed in @cite_22 @cite_1 @cite_29 . Methods to control FDR were put forward in @cite_37 @cite_30 .
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_22", "@cite_7", "@cite_29", "@cite_1", "@cite_0", "@cite_27", "@cite_20", "@cite_17" ], "mid": [ "1916786071", "2109177042", "", "2128235479", "2949267852", "2099932489", "", "", "", "2042542290" ], "abstract": [ "We introduce a new estimator for the vector of coecients in the linear model y = X +z, where X has dimensions n p with p possibly larger than n. SLOPE, short for Sorted L-One Penalized Estimation, is the solution to", "In many fields of science, we observe a response variable together with a large number of potential explanatory variables, and would like to be able to discover which variables are truly associated with the response. At the same time, we need to know that the false discovery rate (FDR) - the expected fraction of false discoveries among all discoveries - is not too high, in order to assure the scientist that most of the discoveries are indeed true and replicable. This paper introduces the knockoff filter, a new variable selection procedure controlling the FDR in the statistical linear model whenever there are at least as many observations as variables. This method achieves exact FDR control in finite sample settings no matter the design or covariates, the number of variables in the model, or the amplitudes of the unknown regression coefficients, and does not require any knowledge of the noise level. As the name suggests, the method operates by manufacturing knockoff variables that are cheap - their construction does not require any new data - and are designed to mimic the correlation structure found within the existing variables, in a way that allows for accurate FDR control, beyond what is possible with permutation-based methods. The method of knockoffs is very general and flexible, and can work with a broad class of test statistics. 
We test the method in combination with statistics from the Lasso for sparse regression, and obtain empirical results showing that the resulting method has far more power than existing selection rules when the proportion of null variables is high.", "", "Fitting high-dimensional statistical models often requires the use of non-linear parameter estimation procedures. As a consequence, it is generally impossible to obtain an exact characterization of the probability distribution of the parameter estimates. This in turn implies that it is extremely challenging to quantify the uncertainty associated with a certain parameter estimate. Concretely, no commonly accepted procedure exists for computing classical measures of uncertainty and statistical significance as confidence intervals or p- values for these models. We consider here high-dimensional linear regression problem, and propose an efficient algorithm for constructing confidence intervals and p-values. The resulting confidence intervals have nearly optimal size. When testing for the null hypothesis that a certain parameter is vanishing, our method has nearly optimal power. Our approach is based on constructing a 'de-biased' version of regularized M-estimators. The new construction improves over recent work in the field in that it does not assume a special structure on the design matrix. We test our method on synthetic data and a high-throughput genomic data set about riboflavin production rate, made publicly available by (2014).", "We propose new inference tools for forward stepwise regression, least angle regression, and the lasso. Assuming a Gaussian model for the observation vector y, we first describe a general scheme to perform valid inference after any selection event that can be characterized as y falling into a polyhedral set. 
This framework allows us to derive conditional (post-selection) hypothesis tests at any step of forward stepwise or least angle regression, or any step along the lasso regularization path, because, as it turns out, selection events for these procedures can be expressed as polyhedral constraints on y. The p-values associated with these tests are exactly uniform under the null distribution, in finite samples, yielding exact type I error control. The tests can also be inverted to produce confidence intervals for appropriate underlying regression parameters. The R package \"selectiveInference\", freely available on the CRAN repository, implements the new inference tools described in this paper.", "To perform inference after model selection, we propose controlling the selective type I error; i.e., the error rate of a test given that it was performed. By doing so, we recover long-run frequency properties among selected hypotheses analogous to those that apply in the classical (non-adaptive) context. Our proposal is closely related to data splitting and has a similar intuitive justification, but is more powerful. Exploiting the classical theory of Lehmann", "", "", "", "We consider the problem of fitting the parameters of a high-dimensional linear regression model. In the regime where the number of parameters @math is comparable to or exceeds the sample size @math , a successful approach uses an @math -penalized least squares estimator, known as Lasso. Unfortunately, unlike for linear estimators (e.g., ordinary least squares), no well-established method exists to compute confidence intervals or p-values on the basis of the Lasso estimator. Very recently, a line of work javanmard2013hypothesis, confidenceJM, GBR-hypothesis has addressed this problem by constructing a debiased version of the Lasso estimator. In this paper, we study this approach for random design model, under the assumption that a good estimator exists for the precision matrix of the design. 
Our analysis improves over the state of the art in that it establishes nearly optimal testing power if the sample size @math asymptotically dominates @math , with @math being the sparsity level (number of non-zero coefficients). Earlier work obtains provable guarantees only for much larger sample size, namely it requires @math to asymptotically dominate @math . In particular, for random designs with a sparse precision matrix we show that an estimator thereof having the required properties can be computed efficiently. Finally, we evaluate this approach on synthetic data and compare it with earlier proposals." ] }
1603.09000
2317297514
Multiple hypothesis testing is a core problem in statistical inference and arises in almost every scientific field. Given a set of null hypotheses @math , Benjamini and Hochberg introduced the false discovery rate (FDR), which is the expected proportion of false positives among rejected null hypotheses, and proposed a testing procedure that controls FDR below a pre-assigned significance level. Nowadays FDR is the criterion of choice for large scale multiple hypothesis testing. In this paper we consider the problem of controlling FDR in an "online manner". Concretely, we consider an ordered --possibly infinite-- sequence of null hypotheses @math where, at each step @math , the statistician must decide whether to reject hypothesis @math having access only to the previous decisions. This model was introduced by Foster and Stine. We study a class of "generalized alpha-investing" procedures and prove that any rule in this class controls online FDR, provided @math -values corresponding to true nulls are independent from the other @math -values. (Earlier work only established mFDR control.) Next, we obtain conditions under which generalized alpha-investing controls FDR in the presence of general @math -values dependencies. Finally, we develop a modified set of procedures that also allow to control the false discovery exceedance (the tail of the proportion of false discoveries). Numerical simulations and analytical results indicate that online procedures do not incur a large loss in statistical power with respect to offline approaches, such as Benjamini-Hochberg.
In particular, @cite_5 @cite_9 develop multiple hypothesis testing procedures for ordered tests. Note, however, that these approaches fall short of addressing the issues we consider, for several reasons: @math They are not online, since they reject the first @math null hypotheses, where @math depends on all the @math -values. @math They require knowledge of all past @math -values (not only discovery events) to compute the current score. @math Since they are constrained to reject all hypotheses before @math , and accept them after, they cannot achieve any discovery rate increasing with @math , let alone nearly linear in @math . For instance, in the mixture model of Section , if the fraction of true non-null is @math , then the methods of @cite_5 @cite_9 achieve @math discoveries out of @math true non-null. In other words, their power is of order @math in this simple case.
{ "cite_N": [ "@cite_5", "@cite_9" ], "mid": [ "1871418963", "1792016789" ], "abstract": [ "Summary We consider a multiple-hypothesis testing setting where the hypotheses are ordered and one is only permitted to reject an initial contiguous block of hypotheses. A rejection rule in this setting amounts to a procedure for choosing the stopping point k. This setting is inspired by the sequential nature of many model selection problems, where choosing a stopping point or a model is equivalent to rejecting all hypotheses up to that point and none thereafter. We propose two new testing procedures and prove that they control the false discovery rate in the ordered testing setting. We also show how the methods can be applied to model selection by using recent results on p-values in sequential model selection settings.", "ABSTRACTMultiple testing problems arising in modern scientific applications can involve simultaneously testing thousands or even millions of hypotheses, with relatively few true signals. In this article, we consider the multiple testing problem where prior information is available (for instance, from an earlier study under different experimental conditions), that can allow us to test the hypotheses as a ranked list to increase the number of discoveries. Given an ordered list of n hypotheses, the aim is to select a data-dependent cutoff k and declare the first k hypotheses to be statistically significant while bounding the false discovery rate (FDR). Generalizing several existing methods, we develop a family of “accumulation tests” to choose a cutoff k that adapts to the amount of signal at the top of the ranked list. We introduce a new method in this family, the HingeExp method, which offers higher power to detect true signals compared to existing techniques. Our theoretical results prove that these metho..." ] }
1603.08907
2950298192
In this paper, we show how to use audio to supervise the learning of active speaker detection in video. Voice Activity Detection (VAD) guides the learning of the vision-based classifier in a weakly supervised manner. The classifier uses spatio-temporal features to encode upper body motion - facial expressions and gesticulations associated with speaking. We further improve a generic model for active speaker detection by learning person specific models. Finally, we demonstrate the online adaptation of generic models learnt on one dataset, to previously unseen people in a new dataset, again using audio (VAD) for weak supervision. The use of temporal continuity overcomes the lack of clean training data. We are the first to present an active speaker detection system that learns on one audio-visual dataset and automatically adapts to speakers in a new dataset. This work can be seen as an example of how the availability of multi-modal data allows us to learn a model without the need for supervision, by transferring knowledge from one modality to another.
There has been some work on person-specific facial expression recognition and on transferring generic to specific models to improve classification performance @cite_9 @cite_16 @cite_5 . @cite_9 show that facial expression recognition results improve when using person-specific classifiers. They use an Inductive Transfer Learning (ITL) approach, where they learn a source classifier, which is a collection of weak learners in a boosting framework. Subsequently, a subset of these weak learners is used for training the target classifier with a small number of labeled target samples.
{ "cite_N": [ "@cite_5", "@cite_9", "@cite_16" ], "mid": [ "2090213288", "", "2008635359" ], "abstract": [ "The way in which human beings express emotions depends on their specific personality and cultural background. As a consequence, person independent facial expression classifiers usually fail to accurately recognize emotions which vary between different individuals. On the other hand, training a person-specific classifier for each new user is a time consuming activity which involves collecting hundreds of labeled samples. In this paper we present a personalization approach in which only unlabeled target-specific data are required. The method is based on our previous paper [20] in which a regression framework is proposed to learn the relation between the user's specific sample distribution and the parameters of her his classifier. Once this relation is learned, a target classifier can be constructed using only the new user's sample distribution to transfer the personalized parameters. The novelty of this paper with respect to [20] is the introduction of a new method to represent the source sample distribution based on using only the Support Vectors of the source classifiers. Moreover, we present here a simplified regression framework which achieves the same or even slightly superior experimental results with respect to [20] but it is much easier to reproduce.", "", "Automatic facial action unit (AFA) detection from video is a long-standing problem in facial expression analysis. Most approaches emphasize choices of features and classifiers. They neglect individual differences in target persons. People vary markedly in facial morphology (e.g., heavy versus delicate brows, smooth versus deeply etched wrinkles) and behavior. Individual differences can dramatically influence how well generic classifiers generalize to previously unseen persons. 
While a possible solution would be to train person-specific classifiers, that often is neither feasible nor theoretically compelling. The alternative that we propose is to personalize a generic classifier in an unsupervised manner (no additional labels for the test subjects are required). We introduce a transductive learning method, which we refer to Selective Transfer Machine (STM), to personalize a generic classifier by attenuating person-specific biases. STM achieves this effect by simultaneously learning a classifier and re-weighting the training samples that are most relevant to the test subject. To evaluate the effectiveness of STM, we compared STM to generic classifiers and to cross-domain learning methods in three major databases: CK+, GEMEP-FERA and RU-FACS. STM outperformed generic classifiers in all." ] }
1603.08907
2950298192
In this paper, we show how to use audio to supervise the learning of active speaker detection in video. Voice Activity Detection (VAD) guides the learning of the vision-based classifier in a weakly supervised manner. The classifier uses spatio-temporal features to encode upper body motion - facial expressions and gesticulations associated with speaking. We further improve a generic model for active speaker detection by learning person specific models. Finally, we demonstrate the online adaptation of generic models learnt on one dataset, to previously unseen people in a new dataset, again using audio (VAD) for weak supervision. The use of temporal continuity overcomes the lack of clean training data. We are the first to present an active speaker detection system that learns on one audio-visual dataset and automatically adapts to speakers in a new dataset. This work can be seen as an example of how the availability of multi-modal data allows us to learn a model without the need for supervision, by transferring knowledge from one modality to another.
@cite_16 propose a Selective Transfer Machine (STM) approach to re-weight the source samples so that they are closer to the target samples. The algorithm simultaneously learns the parameters of the classifier and the source sample weights that minimize the error between the source and target distributions. They thus personalize a generic classifier to the individual, with the resulting personalized classifier improving on the generic one in facial action unit detection tasks. However, STM requires storing all source samples, a higher memory requirement than keeping just the source classifier (e.g., the weight vector of an SVM).
{ "cite_N": [ "@cite_16" ], "mid": [ "2008635359" ], "abstract": [ "Automatic facial action unit (AFA) detection from video is a long-standing problem in facial expression analysis. Most approaches emphasize choices of features and classifiers. They neglect individual differences in target persons. People vary markedly in facial morphology (e.g., heavy versus delicate brows, smooth versus deeply etched wrinkles) and behavior. Individual differences can dramatically influence how well generic classifiers generalize to previously unseen persons. While a possible solution would be to train person-specific classifiers, that often is neither feasible nor theoretically compelling. The alternative that we propose is to personalize a generic classifier in an unsupervised manner (no additional labels for the test subjects are required). We introduce a transductive learning method, which we refer to Selective Transfer Machine (STM), to personalize a generic classifier by attenuating person-specific biases. STM achieves this effect by simultaneously learning a classifier and re-weighting the training samples that are most relevant to the test subject. To evaluate the effectiveness of STM, we compared STM to generic classifiers and to cross-domain learning methods in three major databases: CK+, GEMEP-FERA and RU-FACS. STM outperformed generic classifiers in all." ] }
1603.08907
2950298192
In this paper, we show how to use audio to supervise the learning of active speaker detection in video. Voice Activity Detection (VAD) guides the learning of the vision-based classifier in a weakly supervised manner. The classifier uses spatio-temporal features to encode upper body motion - facial expressions and gesticulations associated with speaking. We further improve a generic model for active speaker detection by learning person specific models. Finally, we demonstrate the online adaptation of generic models learnt on one dataset, to previously unseen people in a new dataset, again using audio (VAD) for weak supervision. The use of temporal continuity overcomes the lack of clean training data. We are the first to present an active speaker detection system that learns on one audio-visual dataset and automatically adapts to speakers in a new dataset. This work can be seen as an example of how the availability of multi-modal data allows us to learn a model without the need for supervision, by transferring knowledge from one modality to another.
@cite_5 demonstrate unsupervised adaptation of a generic classifier to a target classifier on single-frame expression datasets. They learn a regression function between the "shape", or sample distribution, of each user in the labelled source dataset and his/her classifier (source weight vector @math in the SVM). Applying this function to the unlabelled sample distribution of the target user then gives them the target classifier (target weight vector @math ). They do not need to keep all the samples from the source dataset in memory, and they outperform the STM method of @cite_16 . However, their approach requires that the relative distribution of positive and negative samples in every user's data is roughly constant and can be learnt from the source users. This is not the case in our data. Additionally, we learn the generic source classifier using unlabelled data as well, so our process requires no human supervision from beginning to end.
{ "cite_N": [ "@cite_5", "@cite_16" ], "mid": [ "2090213288", "2008635359" ], "abstract": [ "The way in which human beings express emotions depends on their specific personality and cultural background. As a consequence, person independent facial expression classifiers usually fail to accurately recognize emotions which vary between different individuals. On the other hand, training a person-specific classifier for each new user is a time consuming activity which involves collecting hundreds of labeled samples. In this paper we present a personalization approach in which only unlabeled target-specific data are required. The method is based on our previous paper [20] in which a regression framework is proposed to learn the relation between the user's specific sample distribution and the parameters of her his classifier. Once this relation is learned, a target classifier can be constructed using only the new user's sample distribution to transfer the personalized parameters. The novelty of this paper with respect to [20] is the introduction of a new method to represent the source sample distribution based on using only the Support Vectors of the source classifiers. Moreover, we present here a simplified regression framework which achieves the same or even slightly superior experimental results with respect to [20] but it is much easier to reproduce.", "Automatic facial action unit (AFA) detection from video is a long-standing problem in facial expression analysis. Most approaches emphasize choices of features and classifiers. They neglect individual differences in target persons. People vary markedly in facial morphology (e.g., heavy versus delicate brows, smooth versus deeply etched wrinkles) and behavior. Individual differences can dramatically influence how well generic classifiers generalize to previously unseen persons. While a possible solution would be to train person-specific classifiers, that often is neither feasible nor theoretically compelling. 
The alternative that we propose is to personalize a generic classifier in an unsupervised manner (no additional labels for the test subjects are required). We introduce a transductive learning method, which we refer to Selective Transfer Machine (STM), to personalize a generic classifier by attenuating person-specific biases. STM achieves this effect by simultaneously learning a classifier and re-weighting the training samples that are most relevant to the test subject. To evaluate the effectiveness of STM, we compared STM to generic classifiers and to cross-domain learning methods in three major databases: CK+, GEMEP-FERA and RU-FACS. STM outperformed generic classifiers in all." ] }
1603.08907
2950298192
In this paper, we show how to use audio to supervise the learning of active speaker detection in video. Voice Activity Detection (VAD) guides the learning of the vision-based classifier in a weakly supervised manner. The classifier uses spatio-temporal features to encode upper body motion - facial expressions and gesticulations associated with speaking. We further improve a generic model for active speaker detection by learning person specific models. Finally, we demonstrate the online adaptation of generic models learnt on one dataset, to previously unseen people in a new dataset, again using audio (VAD) for weak supervision. The use of temporal continuity overcomes the lack of clean training data. We are the first to present an active speaker detection system that learns on one audio-visual dataset and automatically adapts to speakers in a new dataset. This work can be seen as an example of how the availability of multi-modal data allows us to learn a model without the need for supervision, by transferring knowledge from one modality to another.
Online learning is the incremental learning of a classifier with an increasing number of training samples, as and when they become available. In our context, we adapt the generic source classifier to the person-specific target classifier with an increasing number of samples from the speaker. This is somewhat similar to the problem of Active Learning, where a new classifier is to be learnt with the minimum budget in terms of time spent labelling training samples, and the task is one of selecting the most relevant samples for training. @cite_17 demonstrate Active Transfer Learning, in which the selection of relevant training samples is done with the help of classifiers previously learnt on other datasets. Both @cite_17 and @cite_5 use the source classifiers as zero-shot priors, giving a baseline performance using only the target classifier, with classification performance gradually increasing with the number of samples from the target dataset. We use this as the inspiration for our online learning problem, except that, again, our learning is without any manual supervision.
{ "cite_N": [ "@cite_5", "@cite_17" ], "mid": [ "2090213288", "2950349738" ], "abstract": [ "The way in which human beings express emotions depends on their specific personality and cultural background. As a consequence, person independent facial expression classifiers usually fail to accurately recognize emotions which vary between different individuals. On the other hand, training a person-specific classifier for each new user is a time consuming activity which involves collecting hundreds of labeled samples. In this paper we present a personalization approach in which only unlabeled target-specific data are required. The method is based on our previous paper [20] in which a regression framework is proposed to learn the relation between the user's specific sample distribution and the parameters of her his classifier. Once this relation is learned, a target classifier can be constructed using only the new user's sample distribution to transfer the personalized parameters. The novelty of this paper with respect to [20] is the introduction of a new method to represent the source sample distribution based on using only the Support Vectors of the source classifiers. Moreover, we present here a simplified regression framework which achieves the same or even slightly superior experimental results with respect to [20] but it is much easier to reproduce.", "How can we reuse existing knowledge, in the form of available datasets, when solving a new and apparently unrelated target task from a set of unlabeled data? In this work we make a first contribution to answer this question in the context of image classification. We frame this quest as an active learning problem and use zero-shot classifiers to guide the learning process by linking the new task to the existing classifiers. By revisiting the dual formulation of adaptive SVM, we reveal two basic conditions to choose greedily only the most relevant samples to be annotated. 
On this basis we propose an effective active learning algorithm which learns the best possible target classification model with minimum human labeling effort. Extensive experiments on two challenging datasets show the value of our approach compared to the state-of-the-art active learning methodologies, as well as its potential to reuse past datasets with minimal effort for future tasks." ] }
1603.09188
2331817973
We introduce a new task, visual sense disambiguation for verbs: given an image and a verb, assign the correct sense of the verb, i.e., the one that describes the action depicted in the image. Just as textual word sense disambiguation is useful for a wide range of NLP tasks, visual sense disambiguation can be useful for multimodal tasks such as image retrieval, image description, and text illustration. We introduce VerSe, a new dataset that augments existing multimodal datasets (COCO and TUHOI) with sense labels. We propose an unsupervised algorithm based on Lesk which performs visual sense disambiguation using textual, visual, or multimodal embeddings. We find that textual embeddings perform well when gold-standard textual annotations (object labels and image descriptions) are available, while multimodal embeddings perform well on unannotated images. We also verify our findings by using the textual and multimodal embeddings as features in a supervised setting and analyse the performance of visual sense disambiguation task. VerSe is made publicly available and can be downloaded at: this https URL
There is an extensive literature on word sense disambiguation for nouns, verbs, adjectives and adverbs. Most of these approaches rely on lexical databases or sense inventories such as WordNet @cite_27 or OntoNotes @cite_28 . Unsupervised WSD approaches often rely on distributional representations, computed over the target word and its context @cite_7 @cite_11 @cite_5 . Most supervised approaches use sense annotated corpora to extract linguistic features of the target word (context words, POS tags, collocation features), which are then fed into a classifier to disambiguate test data @cite_6 . Recently, features based on sense-specific semantic vectors learned using large corpora and a sense inventory such as WordNet have been shown to achieve state-of-the-art results for supervised WSD @cite_32 @cite_24 .
{ "cite_N": [ "@cite_7", "@cite_28", "@cite_32", "@cite_6", "@cite_24", "@cite_27", "@cite_5", "@cite_11" ], "mid": [ "2123489126", "2088911157", "2125786288", "2101293500", "2294979170", "2102381086", "1963620052", "2121147707" ], "abstract": [ "Most previous corpus-based algorithms disambiguate a word with a classifier trained from previous usages of the same word. Separate classifiers have to be trained for different words. We present an algorithm that uses the same knowledge sources to disambiguate different words. The algorithm does not require a sense-tagged corpus and exploits the fact that two different words are likely to have similar meanings if they occur in identical local contexts.", "We describe the OntoNotes methodology and its result, a large multilingual richly-annotated corpus constructed at 90 interannotator agreement. An initial portion (300K words of English newswire and 250K words of Chinese newswire) will be made available to the community during 2007.", "We present , a system to learn embeddings for synsets and lexemes. It is flexible in that it can take any word embeddings as input and does not need an additional training corpus. The synset lexeme embeddings obtained live in the same vector space as the word embeddings. A sparse tensor formalization guarantees efficiency and parallelizability. We use WordNet as a lexical resource, but AutoExtend can be easily applied to other resources like Freebase. AutoExtend achieves state-of-the-art performance on word similarity and word sense disambiguation tasks.", "Word sense disambiguation (WSD) systems based on supervised learning achieved the best performance in SensEval and SemEval workshops. However, there are few publicly available open source WSD systems. This limits the use of WSD in other applications, especially for researchers whose research interests are not in WSD. In this paper, we present IMS, a supervised English all-words WSD system. 
The flexible framework of IMS allows users to integrate different preprocessing tools, additional features, and different classifiers. By default, we use linear support vector machines as the classifier with multiple knowledge-based features. In our implementation, IMS achieves state-of-the-art results on several SensEval and SemEval tasks.", "Words are polysemous. However, most approaches to representation learning for lexical semantics assign a single vector to every surface word type. Meanwhile, lexical ontologies such as WordNet provide a source of complementary knowledge to distributional information, including a word sense inventory. In this paper we propose two novel and general approaches for generating sense-specific word embeddings that are grounded in an ontology. The first applies graph smoothing as a postprocessing step to tease the vectors of different senses apart, and is applicable to any vector space model. The second adapts predictive maximum likelihood models that learn word embeddings with latent variables representing senses grounded in an specified ontology. Empirical results on lexical semantic tasks show that our approaches effectively captures information from both the ontology and distributional statistics. Moreover, in most cases our sense-specific models outperform other models we compare against.", "Standard alphabetical procedures for organizing lexical information put together words that are spelled alike and scatter words with similar or related meanings haphazardly through the list. Unfortunately, there is no obvious alternative, no other simple way for lexicographers to keep track of what has been done or for readers to find the word they are looking for. But a frequent objection to this solution is that finding things on an alphabetical list can be tedious and time-consuming. 
Many people who would like to refer to a dictionary decide not to bother with it because finding the information would interrupt their work and break their train of thought.", "We present an automatic method for sense labeling of text in an unsupervised manner. The method makes use of distributionally similar words to derive an automatically labeled training set, which is then used to train a standard supervised classifier for distinguishing word senses. Experimental results on the Senseval-2 and Senseval-3 datasets show that our approach yields significant improvements over state-of-the-art unsupervised methods, and is competitive with supervised ones, while eliminating the annotation cost.", "In word sense disambiguation (WSD), the heuristic of choosing the most common sense is extremely powerful because the distribution of the senses of a word is often skewed. The problem with using the predominant, or first sense heuristic, aside from the fact that it does not take surrounding context into account, is that it assumes some quantity of hand-tagged data. Whilst there are a few hand-tagged corpora available for some languages, one would expect the frequency distribution of the senses of words, particularly topical words, to depend on the genre and domain of the text under consideration. We present work on the use of a thesaurus acquired from raw textual corpora and the WordNet similarity package to find predominant noun senses automatically. The acquired predominant senses give a precision of 64% on the nouns of the SENSEVAL-2 English all-words task. This is a very promising result given that our method does not require any hand-tagged text, such as SemCor. Furthermore, we demonstrate that our method discovers appropriate predominant senses for words from two domain-specific corpora." ] }
1603.08885
2316818965
In this paper, we analyze a shared access network with a fixed primary node and randomly distributed secondary nodes whose distribution follows a Poisson point process (PPP). The secondaries use a random access protocol allowing them to access the channel with probabilities that depend on the queue size of the primary. Assuming a system with multipacket reception (MPR) receivers having bursty packet arrivals at the primary and saturation at the secondaries, our protocol can be tuned to alleviate congestion at the primary. We study the throughput of the secondary network and the primary average delay, as well as the impact of the secondary node access probability and transmit power. We formulate an optimization problem to maximize the throughput of the secondary network under delay constraints for the primary node, which, in the case that no congestion control is performed, has a closed-form expression providing the optimal access probability. Our numerical results illustrate the impact of network operating parameters on the performance of the proposed priority-based shared access protocol.
In @cite_26 , we analyzed the throughput of the secondary network when MPR capability is enabled in a cognitive network with congestion control on the primary user. Under the collision channel scenario, throughput optimization with deadline constraints for a single secondary user accessing a multi-channel system is studied in @cite_14 . The optimal stopping rule and power control strategy are provided in terms of closed-form expressions. In @cite_20 , joint scheduling and power control are considered in order to minimize the sum average secondary delay subject to interference constraints at the primary user. However, prior work has not studied the design of a random access protocol that takes into account both the throughput of the secondary network and the delay of the primary one.
{ "cite_N": [ "@cite_14", "@cite_26", "@cite_20" ], "mid": [ "1608479615", "2146986084", "2258439892" ], "abstract": [ "In a cognitive radio scenario, we consider a single secondary user (SU) accessing a multichannel system. The SU senses the channels sequentially to detect if a primary user (PU) is occupying the channels and stops its search to access a channel if it offers a significantly high throughput. The optimal stopping rule and power control problem is considered. The problem is formulated as an SU's throughput-maximization problem under power, interference, and packet delay constraints. We first show the effect of the optimal stopping rule on packet delay and then solve this optimization problem for both the overlay system, where the SU transmits only at the spectrum holes, and the underlay system, where tolerable interference (or tolerable collision probability) is allowed. We provide closed-form expressions for the optimal stopping rule and show that the optimal power control strategy for this multichannel problem is a modified waterfilling approach. We extend the work to a multi-SU scenario and show that when the number of SUs is large, the complexity of the solution becomes smaller than that of the single-SU case. We discuss the application of this problem in typical networks where packets simultaneously arrive and have the same departure deadline. We further propose an online adaptation policy to the optimal stopping rule that meets the packets' hard-deadline constraint and, at the same time, gives higher throughput than the offline policy.", "In this paper we analyze a cognitive radio network with one primary and one secondary transmitter, in which the primary transmitter has bursty arrivals while the secondary node is assumed to be saturated (i.e. always has a packet waiting to be transmitted). The secondary node transmits in a cognitive way such that it does not impede the performance of the primary node. 
We assume that the receivers have multipacket reception (MPR) capabilities and that the secondary node can take advantage of the MPR capability by transmitting simultaneously with the primary under certain conditions. We obtain analytical expressions for the stationary distribution of the primary node queue and we also provide conditions for its stability. Finally, we provide expressions for the aggregate throughput of the network as well as for the throughput at the secondary node.", "An uplink multisecondary user (SU) cognitive radio system having average delay constraints as well as an interference constraint to the primary user (PU) is considered. If the interference channels between the SUs and the PU are statistically heterogeneous due to the different physical locations of the different SUs, the SUs will experience different delay performances. This is because SUs located closer to the PU transmit with lower power levels. Two dynamic scheduling-and-power-allocation policies that can provide the required average delay guarantees to all SUs irrespective of their locations are proposed. The first policy solves the problem when the interference constraint is an instantaneous one, while the second is for problems with long-term average interference constraints. We show that although the average interference problem is an extension to the instantaneous interference one, the solution is totally different. The two policies, derived using the Lyapunov optimization technique, are shown to be asymptotically delay optimal while satisfying the delay and interference constraints. Our findings are supported by extensive system simulations and shown to outperform the existing policies as well as shown to be robust to channel estimation errors." ] }
1603.08735
2322753433
We evaluate the performance and usability of mouse-based, touch-based, and tangible interaction for manipulating objects in a 3D virtual environment. This comparison is a step toward a better understanding of the limitations and benefits of these existing interaction techniques, with the ultimate goal of facilitating the integration of different 3D data exploration environments into a single interaction continuum. For this purpose we analyze participants' performance in 3D manipulation using a docking task. We measured completion times, docking precision, as well as subjective criteria such as fatigue, workload, and preference. Our results show that the three input modalities provide similar levels of precision but require different interaction times. We also discuss our qualitative observations as well as people's preferences and put our findings into context of the practical application domain of 3D data analysis environments.
Much past work has focused on the comparison of interaction techniques or devices. In many cases, academic studies compare novel technique(s) or device(s) to established ones. For instance, many studies were conducted to compare the advantages and limitations of mouse interaction with those of touch interaction for tasks as varied as selection, pointing, and exploration (e.g., @cite_25 @cite_37 @cite_9 ). Our review of the literature, however, revealed a lack of studies that analyze these modalities for 3D manipulation tasks---only a few researchers have actually conducted such analyses @cite_61 @cite_47 @cite_52 @cite_17 .
{ "cite_N": [ "@cite_61", "@cite_37", "@cite_9", "@cite_52", "@cite_47", "@cite_25", "@cite_17" ], "mid": [ "2153079896", "", "2167495023", "2100557470", "2085882361", "2102005779", "2166261239" ], "abstract": [ "This paper describes and evaluates the design of four virtual controllers for use in rotating three-dimensional objects using the mouse. Three of four of these controllers are \"new\" in that they extend traditional direct manipulation techniques to a 3-D environment. User performance is compared during simple and complex rotation tasks. The results indicate faster performance for complex rotations using the new continuous axes controllers compared to more traditional slider approaches. No significant differences in accuracy for complex rotations were found across the virtual controllers.A second study compared the best of these four virtual controllers (the Virtual Sphere) to a control device by Evans, Tanner and Wein. No significant differences either in time to complete rotation task or accuracy of performance were found. All but one subject indicated they preferred the Virtual Sphere because it seemed more \"natural\".", "", "User performance with a tabletop display was tested using touch-based and mouse-based interaction in a traditional pointing task. Dependent variables were throughput, error rate, and movement time. In a study with 12 participants, touch had a higher throughput with average of 5.53 bps compared to 3.83 bps for the mouse. Touch also had a lower movement time on average, with block means ranging from 403 ms to 1051 ms vs. 607 ms to 1323 ms with the mouse. Error rates were lower for the mouse at 2.1 , compared to 9.8 for touch. The high error rates using touch were attributed to problems in selecting small targets with the finger. 
It is argued that, overall, touch input is a preferred and efficient input technique for tabletop displays, but that more research is needed to improve touch selection of small targets.", "We present an experimental comparison of multi-touch and tangible user interfaces for basic interface actions. Twelve participants completed manipulation and acquisition tasks on an interactive surface in each of three conditions: tangible user interface; multi-touch; and mouse and puck. We found that interface control objects in the tangible condition were easiest to acquire and, once acquired, were easier/more accurate to manipulate. Further qualitative analysis suggested that in the evaluated tasks tangibles offer greater adaptability of control and specifically highlighted a problem of exit error that can undermine fine-grained control in multi-touch interactions. We discuss the implications of these findings for interface design.", "We report results from a formal user study of interactive 3D rotation using the mouse-driven Virtual Sphere and Arcball techniques, as well as multidimensional input techniques based on magnetic orientation sensors. Multidimensional input is often assumed to allow users to work quickly, but at the cost of precision, due to the instability of the hand moving in the open air. We show that, at least for the orientation matching task used in this experiment, users can take advantage of the integrated degrees of freedom provided by multidimensional input without necessarily sacrificing precision: using multidimensional input, users completed the experimental task up to 36% faster without any statistically detectable loss of accuracy. We also report detailed observations of common usability problems when first encountering the techniques. Our observations suggest some design issues for 3D input devices. For example, the physical form-factors of the 3D input device significantly influenced user acceptance of otherwise identical input sensors. 
The device should afford some tactile cues, so the user can feel its orientation without looking at it. In the absence of such cues, some test users were unsure of how to use the device.", "We investigate the differences -- in terms of both quantitative performance and subjective preference -- between direct-touch and mouse input for unimanual and bimanual tasks on tabletop displays. The results of two experiments show that for bimanual tasks performed on tabletops, users benefit from direct-touch input. However, our results also indicate that mouse input may be more appropriate for a single user working on tabletop tasks requiring only single-point interaction.", "We present the design and evaluation of FI3D, a direct-touch data exploration technique for 3D visualization spaces. The exploration of three-dimensional data is core to many tasks and domains involving scientific visualizations. Thus, effective data navigation techniques are essential to enable comprehension, understanding, and analysis of the information space. While evidence exists that touch can provide higher-bandwidth input, somesthetic information that is valuable when interacting with virtual worlds, and awareness when working in collaboration, scientific data exploration in 3D poses unique challenges to the development of effective data manipulations. We present a technique that provides touch interaction with 3D scientific data spaces in 7 DOF. This interaction does not require the presence of dedicated objects to constrain the mapping, a design decision important for many scientific datasets such as particle simulations in astronomy or physics. We report on an evaluation that compares the technique to conventional mouse-based interaction. Our results show that touch interaction is competitive in interaction speed for translation and integrated interaction, is easy to learn and use, and is preferred for exploration and wayfinding tasks. 
To further explore the applicability of our basic technique for other types of scientific visualizations we present a second case study, adjusting the interaction to the illustrative visualization of fiber tracts of the brain and the manipulation of cutting planes in this context." ] }
1603.08735
2322753433
We evaluate the performance and usability of mouse-based, touch-based, and tangible interaction for manipulating objects in a 3D virtual environment. This comparison is a step toward a better understanding of the limitations and benefits of these existing interaction techniques, with the ultimate goal of facilitating the integration of different 3D data exploration environments into a single interaction continuum. For this purpose we analyze participants' performance in 3D manipulation using a docking task. We measured completion times, docking precision, as well as subjective criteria such as fatigue, workload, and preference. Our results show that the three input modalities provide similar levels of precision but require different interaction times. We also discuss our qualitative observations as well as people's preferences and put our findings into context of the practical application domain of 3D data analysis environments.
Among these, @cite_61 ---as early as the 1980s---and later @cite_47 compared input techniques for 3D manipulation. Both studies, however, narrowly focused on rotation and did not take into account other parameters such as the Euclidean distance to the target or usability. @cite_52 compared mouse, touch, and tangible interaction for a matching task on a tabletop, which constrained the interaction to two dimensions. They measured the time required to complete the task, the ease of use, and people's preference. @cite_17 , finally, compared mouse and touch interaction to validate their FI3D widget for 7 DOF data navigation. In our work, in contrast, we aim to get a more holistic and general view of how the different input modalities affect the interaction with 3D shapes or scenes, ultimately to better understand how they can support the analysis of complex 3D datasets.
{ "cite_N": [ "@cite_61", "@cite_47", "@cite_52", "@cite_17" ], "mid": [ "2153079896", "2085882361", "2100557470", "2166261239" ], "abstract": [ "This paper describes and evaluates the design of four virtual controllers for use in rotating three-dimensional objects using the mouse. Three of four of these controllers are \"new\" in that they extend traditional direct manipulation techniques to a 3-D environment. User performance is compared during simple and complex rotation tasks. The results indicate faster performance for complex rotations using the new continuous axes controllers compared to more traditional slider approaches. No significant differences in accuracy for complex rotations were found across the virtual controllers.A second study compared the best of these four virtual controllers (the Virtual Sphere) to a control device by Evans, Tanner and Wein. No significant differences either in time to complete rotation task or accuracy of performance were found. All but one subject indicated they preferred the Virtual Sphere because it seemed more \"natural\".", "We report results from a formal user study of interactive 3D rotation using the mouse-driven Virtual Sphere and Arcball techniques, as well as multidimensional input techniques based on magnetic orientation sensors. MultidimensionaI input is often assumed to allow users to work quickly, but at the cost of precision, due to the instability of the hand moving in the open air. We show that, at least for the orientation matching task used in this experiment, users can take advantage of the integrated degrees of freedom provided by multidimensional input without necessarily sacrificing precision: using multidimensional input, users completed the experimental task up to 36 faster without any statistically detectable loss of accuracy. We also report detailed observations of common usability problems when first encountering the techniques. Our observations suggest some design issues for 3D input devices. 
For example, the physical form-factors of the 3D input device significantly influenced user acceptance of otherwise identical input sensors. The device should afford some tactile cues, so the user can feel its orientation without looking at it. In the absence of such cues, some test users were unsure of how to use the device.", "We present an experimental comparison of multi-touch and tangible user interfaces for basic interface actions. Twelve participants completed manipulation and acquisition tasks on an interactive surface in each of three conditions: tangible user interface; multi-touch; and mouse and puck. We found that interface control objects in the tangible condition were easiest to acquire and, once acquired, were easier more accurate to manipulate. Further qualitative analysis suggested that in the evaluated tasks tangibles offer greater adaptability of control and specifically highlighted a problem of exit error that can undermine fine-grained control in multi-touch interactions. We discuss the implications of these findings for interface design.", "We present the design and evaluation of FI3D, a direct-touch data exploration technique for 3D visualization spaces. The exploration of three-dimensional data is core to many tasks and domains involving scientific visualizations. Thus, effective data navigation techniques are essential to enable comprehension, understanding, and analysis of the information space. While evidence exists that touch can provide higher-bandwidth input, somesthetic information that is valuable when interacting with virtual worlds, and awareness when working in collaboration, scientific data exploration in 3D poses unique challenges to the development of effective data manipulations. We present a technique that provides touch interaction with 3D scientific data spaces in 7 DOF. 
This interaction does not require the presence of dedicated objects to constrain the mapping, a design decision important for many scientific datasets such as particle simulations in astronomy or physics. We report on an evaluation that compares the technique to conventional mouse-based interaction. Our results show that touch interaction is competitive in interaction speed for translation and integrated interaction, is easy to learn and use, and is preferred for exploration and wayfinding tasks. To further explore the applicability of our basic technique for other types of scientific visualizations we present a second case study, adjusting the interaction to the illustrative visualization of fiber tracts of the brain and the manipulation of cutting planes in this context." ] }
1603.08735
2322753433
We evaluate the performance and usability of mouse-based, touch-based, and tangible interaction for manipulating objects in a 3D virtual environment. This comparison is a step toward a better understanding of the limitations and benefits of these existing interaction techniques, with the ultimate goal of facilitating the integration of different 3D data exploration environments into a single interaction continuum. For this purpose we analyze participants' performance in 3D manipulation using a docking task. We measured completion times, docking precision, as well as subjective criteria such as fatigue, workload, and preference. Our results show that the three input modalities provide similar levels of precision but require different interaction times. We also discuss our qualitative observations as well as people's preferences and put our findings into context of the practical application domain of 3D data analysis environments.
Most of the comparative studies also focus on comparing either mouse and touch interaction techniques or tangible and touch ones (and many of these do so for 2D tasks). The literature indeed contains many papers comparing touch and mouse input for a whole variety of tasks and parameters: speed @cite_25 @cite_24 , error rate @cite_25 @cite_24 , minimum target size @cite_36 , etc. Similarly, much research has compared touch-based with tangible interaction for tasks as varied as puzzle solving @cite_18 @cite_33 , layout-creation @cite_50 , photo-sorting @cite_18 , selection/pointing @cite_1 , and tracking @cite_0 . Most of the work comparing tangible interfaces to other interfaces builds on the assumption that physical interfaces, because they mimic the real world, are necessarily better. However, this assumption was rightfully questioned by @cite_18 . A 2DOF input device such as a mouse may, in fact, perform well in a 3D manipulation task due to its inherent precision or people's familiarity with it. To better understand the advantages and challenges of the three mentioned input modalities, we thus compare them with each other in a single study.
{ "cite_N": [ "@cite_18", "@cite_33", "@cite_36", "@cite_1", "@cite_24", "@cite_0", "@cite_50", "@cite_25" ], "mid": [ "2113458551", "118694321", "2030026498", "2011151520", "2128629242", "1988234697", "2030582571", "2102005779" ], "abstract": [ "This work presents the results of a comparative study in which we investigate the ways manipulation of physical versus digital media are fundamentally different from one another. Participants carried out both a puzzle task and a photo sorting task in two different modes: in a physical 3-dimensional space and on a multi-touch, interactive tabletop in which the digital items resembled their physical counterparts in terms of appearance and behavior. By observing the interaction behaviors of 12 participants, we explore the main differences and discuss what this means for designing interactive surfaces which use aspects of the physical world as a design resource.", "", "Bare hand pointing on touch screens both benefits and suffers from the nature of direct input. This work explores techniques to overcome its limitations. Our goal is to design interaction tools allowing pixel level pointing in a fast and efficient manner. Based on several cycles of iterative design and testing, we propose two techniques: Cross-Keys that uses discrete taps on virtual keys integrated with a crosshair cursor, and an analog Precision-Handle that uses a leverage (gain) effect to amplify movement precision from the user's finger tip to the end cursor. We conducted a formal experiment with these two techniques, in addition to the previously known Zoom-Pointing and Take-Off as baseline anchors. 
Both subjective and performance measurements indicate that Precision-Handle and Cross-Keys complement existing techniques for touch screen interaction.", "This paper presents the design and evaluation of two interaction techniques used to navigate into large data collection displayed on a large output space while based on manipulations of a small physical artefact. The first technique exploits the spatial position of a digital camera and the second one uses its tactile screen. User experiments have been conducted to study and compare both techniques, with regard to users' performance and satisfaction. Results establish that Tactile technique is more efficient than Tangible technique for easy pointing tasks while Tangible technique is better for hardest pointing tasks. In addition, users' feedback shows that they prefer to use the tangible camera, which requires fewer skills.", "Three studies were conducted comparing speed of performance, error rates and user preference ratings for three selection devices. The devices tested were a touchscreen, a touchscreen with stabilization (stabilization software filters and smooths raw data from hardware), and a mouse. The task was the selection of rectangular targets 1, 4, 16 and 32 pixels per side (0·4 × 0·6, 1·7 × 2·2, 6·9 × 9·0, 13·8 × 17·9 mm respectively). Touchscreen users were able to point at single pixel targets, thereby countering widespread expectations of poor touchscreen resolution. The results show no difference in performance between the mouse and touchscreen for targets ranging from 32 to 4 pixels per side. In addition, stabilization significantly reduced the error rates for the touchscreen when selecting small targets. These results imply that touchscreens, when properly used, have attractive advantages in selecting targets as small as 4 pixels per side (approximately one-quarter of the size of a single character). A variant of Fitts' Law is proposed to predict touchscreen pointing times. 
Ideas for future research are also presented.", "We explore the use of customizable tangible remote controllers for interacting with wall-size displays. Such controllers are especially suited to visual exploration tasks where users need to move to see details of complex visualizations. In addition, we conducted a controlled user study suggesting that tangibles make it easier for users to focus on the visual display while they interact. We explain how to build such controllers using off-the-shelf touch tablets and describe a sample application that supports multiple dynamic queries.", "Tabletop systems have become quite popular in recent years, during which there was considerable enthusiasm for the development of new interfaces. In this paper, we establish a comparison between touch and tangible interfaces. We set up an experiment involving several actions like translation and rotation. We recruited 40 participants to take part in a user study and we present our results with a discussion on the design of touch and tangible interfaces. Our contribution is an empirical study showing that overall, the tangible interface is much faster but under certain conditions, the touch interface could gain the upper hand.", "We investigate the differences -- in terms of both quantitative performance and subjective preference -- between direct-touch and mouse input for unimanual and bimanual tasks on tabletop displays. The results of two experiments show that for bimanual tasks performed on tabletops, users benefit from direct-touch input. However, our results also indicate that mouse input may be more appropriate for a single user working on tabletop tasks requiring only single-point interaction." ] }
1603.08735
2322753433
We evaluate the performance and usability of mouse-based, touch-based, and tangible interaction for manipulating objects in a 3D virtual environment. This comparison is a step toward a better understanding of the limitations and benefits of these existing interaction techniques, with the ultimate goal of facilitating the integration of different 3D data exploration environments into a single interaction continuum. For this purpose we analyze participants' performance in 3D manipulation using a docking task. We measured completion times, docking precision, as well as subjective criteria such as fatigue, workload, and preference. Our results show that the three input modalities provide similar levels of precision but require different interaction times. We also discuss our qualitative observations as well as people's preferences and put our findings into context of the practical application domain of 3D data analysis environments.
Our study mainly builds on the work by @cite_47 and @cite_52 . @cite_47 conducted comparative 3D docking studies focused on rotation with four different techniques including a 3D ball (our equivalent is a tangible interface) and a mouse. We go beyond their approach in that we consider full 6-DOF manipulation and evaluate more than time and accuracy. We go beyond the approach of @cite_52 in that we, while also comparing mouse, touch input, and tangible interfaces, use true 3D manipulation tasks---including for the tangible input device.
{ "cite_N": [ "@cite_47", "@cite_52" ], "mid": [ "2085882361", "2100557470" ], "abstract": [ "We report results from a formal user study of interactive 3D rotation using the mouse-driven Virtual Sphere and Arcball techniques, as well as multidimensional input techniques based on magnetic orientation sensors. MultidimensionaI input is often assumed to allow users to work quickly, but at the cost of precision, due to the instability of the hand moving in the open air. We show that, at least for the orientation matching task used in this experiment, users can take advantage of the integrated degrees of freedom provided by multidimensional input without necessarily sacrificing precision: using multidimensional input, users completed the experimental task up to 36 faster without any statistically detectable loss of accuracy. We also report detailed observations of common usability problems when first encountering the techniques. Our observations suggest some design issues for 3D input devices. For example, the physical form-factors of the 3D input device significantly influenced user acceptance of otherwise identical input sensors. The device should afford some tactile cues, so the user can feel its orientation without looking at it. In the absence of such cues, some test users were unsure of how to use the device.", "We present an experimental comparison of multi-touch and tangible user interfaces for basic interface actions. Twelve participants completed manipulation and acquisition tasks on an interactive surface in each of three conditions: tangible user interface; multi-touch; and mouse and puck. We found that interface control objects in the tangible condition were easiest to acquire and, once acquired, were easier more accurate to manipulate. 
Further qualitative analysis suggested that in the evaluated tasks tangibles offer greater adaptability of control and specifically highlighted a problem of exit error that can undermine fine-grained control in multi-touch interactions. We discuss the implications of these findings for interface design." ] }
1603.08592
2334314520
Tracking many vehicles in wide-coverage aerial imagery is crucial for understanding events in a large field of view. Most approaches aim to associate detections from frame differencing into tracks. However, slow or stopped vehicles result in long-term missing detections and further cause tracking discontinuities. Relying merely on appearance cues to recover missing detections is difficult, as targets are extremely small and in grayscale. In this paper, we address the limitations of detection-association methods by coupling them with a local context tracker (LCT), which does not rely on motion detections. On one hand, our LCT learns neighboring spatial relations and tracks each target in consecutive frames using graph optimization. It takes advantage of context constraints to avoid drifting to nearby targets. We generate hypotheses from sparse and dense flow efficiently to keep solutions tractable. On the other hand, we use a detection-association strategy to extract short tracks in batch processing. We explicitly handle merged detections by generating additional hypotheses from them. Our evaluation on wide-area aerial imagery sequences shows significant improvement over state-of-the-art methods.
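The context-constraint idea can be illustrated with a hypothetical sketch (ours, not the paper's code): a candidate location for a target is scored not only by its own matching cost but also by how well it preserves the learned spatial offset to a neighboring target, which discourages drifting onto a nearby, visually similar vehicle. The function name, weight `w`, and the squared-deviation penalty are all illustrative assumptions.

```python
# Hypothetical sketch of a neighboring spatial relation used as a context
# constraint. Lower score is better: the candidate's own matching cost plus
# the deviation of its offset to a neighbor from the learned relation.

def context_score(candidate, own_cost, neighbor_pos, learned_offset, w=1.0):
    """Score a candidate target position under a spatial-context constraint."""
    dx = candidate[0] - neighbor_pos[0]
    dy = candidate[1] - neighbor_pos[1]
    # Penalize candidates whose offset to the neighbor deviates from the
    # learned relation (squared Euclidean deviation, weighted by w).
    deviation = (dx - learned_offset[0]) ** 2 + (dy - learned_offset[1]) ** 2
    return own_cost + w * deviation


# Two candidates with identical appearance cost; the context term breaks the tie.
neighbor = (5.0, 0.0)
offset = (-5.0, 0.0)  # target was learned to sit 5 units left of its neighbor
a = context_score((0.0, 0.0), 1.0, neighbor, offset)  # consistent with context
b = context_score((4.0, 0.0), 1.0, neighbor, offset)  # drifted toward neighbor
print(a < b)  # -> True
```

In this toy setup, appearance alone cannot distinguish the two candidates, but the context term favors the position consistent with the learned neighbor relation.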
Multi-target tracking has been investigated in the computer vision community for many years. The joint probabilistic data association filter (JPDAF) @cite_13 and multiple hypothesis tracking (MHT) @cite_43 are two early successful approaches. However, the association step in both methods requires very high computational and memory cost; therefore, solutions are usually intractable in real-world applications. In practice, JPDAF has been combined with the Kalman filter @cite_24 and the particle filter @cite_30 to increase efficiency. More recently, Rezatofighi @cite_39 improved the efficiency of JPDAF by obtaining the m-best solutions via integer linear programming and showed state-of-the-art results. MHT usually introduces tree-pruning strategies @cite_38 @cite_45 @cite_3 to reduce the solution space. In recent years, network flow optimization has become popular in multi-target tracking approaches @cite_5 @cite_9 @cite_22 @cite_40. Although these methods have shown promising results in their respective scenarios, they all rely on a one-to-one matching assumption. Therefore, they are not suitable for WAMI, where split-and-merge motion detections often occur.
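The one-to-one matching assumption shared by these methods can be made concrete with a minimal sketch (ours, for illustration only): frame-to-frame association as a minimum-cost assignment between tracks and detections. Real network-flow trackers solve this jointly over a whole sequence with min-cost flow or k-shortest paths; here we brute-force a single frame pair over permutations, which is tractable only for tiny examples. The distance-based cost is an assumed stand-in for a real matching cost.

```python
# Toy one-to-one data association: find the permutation of detections that
# minimizes total squared Euclidean distance to the tracks. Note that a merged
# detection (one blob for two vehicles, common in WAMI) cannot be expressed
# under this one-to-one assumption.
from itertools import permutations


def associate(tracks, detections):
    """Return best[i] = index of the detection assigned to track i."""
    assert len(tracks) == len(detections), "one-to-one matching assumed"

    def cost(perm):
        return sum(
            (tx - dx) ** 2 + (ty - dy) ** 2
            for (tx, ty), (dx, dy) in zip(tracks, [detections[i] for i in perm])
        )

    return list(min(permutations(range(len(detections))), key=cost))


tracks = [(0.0, 0.0), (10.0, 0.0)]
detections = [(9.0, 1.0), (1.0, 0.0)]  # detections arrive in arbitrary order
print(associate(tracks, detections))  # -> [1, 0]
```

Production systems replace the brute-force search with the Hungarian algorithm or a min-cost flow solver, but the underlying one-to-one constraint is the same, which is exactly what breaks down for split-and-merge detections.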
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_22", "@cite_9", "@cite_3", "@cite_39", "@cite_24", "@cite_43", "@cite_40", "@cite_45", "@cite_5", "@cite_13" ], "mid": [ "2152669093", "", "2127084114", "2111644456", "2237765446", "2209193152", "1537250570", "2127923214", "2016135469", "1973421894", "2171243491", "2150440166" ], "abstract": [ "One of the goals in the field of mobile robotics is the development of mobile platforms which operate in populated environments and offer various services to humans. For many tasks it is highly desirable that a robot can determine the positions of the humans in its surrounding. In this paper we present a method for tracking multiple moving objects with a mobile robot. We introduce a sample-based variant of joint probabilistic data association filters to track features originating from individual objects and to solve the correspondence problem between the detected features and the filters. In contrast to standard methods, occlusions are handled explicitly during data association. The technique has been implemented and tested on a real robot. Experiments carried out in a typical office environment show that the method is able to track multiple persons even when the trajectories of two people are crossing each other.", "", "We propose a method for global multi-target tracking that can incorporate higher-order track smoothness constraints such as constant velocity. Our problem formulation readily lends itself to path estimation in a trellis graph, but unlike previous methods, each node in our network represents a candidate pair of matching observations between consecutive frames. Extra constraints on binary flow variables in the graph result in a problem that can no longer be solved by min-cost network flow. 
We therefore propose an iterative solution method that relaxes these extra constraints using Lagrangian relaxation, resulting in a series of problems that ARE solvable by min-cost flow, and that progressively improve towards a high-quality solution to our original optimization problem. We present experimental results showing that our method outperforms the standard network-flow formulation as well as other recent algorithms that attempt to incorporate higher-order smoothness constraints.", "We propose a network flow based optimization method for data association needed for multiple object tracking. The maximum-a-posteriori (MAP) data association problem is mapped into a cost-flow network with a non-overlap constraint on trajectories. The optimal data association is found by a min-cost flow algorithm in the network. The network is augmented to include an explicit occlusion model(EOM) to track with long-term inter-object occlusions. A solution to the EOM-based network is found by an iterative approach built upon the original algorithm. Initialization and termination of trajectories and potential false observations are modeled by the formulation intrinsically. The method is efficient and does not require hypotheses pruning. Performance is compared with previous results on two public pedestrian datasets to show its improvement.", "This paper revisits the classical multiple hypotheses tracking (MHT) algorithm in a tracking-by-detection framework. The success of MHT largely depends on the ability to maintain a small list of potential hypotheses, which can be facilitated with the accurate object detectors that are currently available. We demonstrate that a classical MHT implementation from the 90's can come surprisingly close to the performance of state-of-the-art methods on standard benchmark datasets. 
In order to further utilize the strength of MHT in exploiting higher-order information, we introduce a method for training online appearance models for each track hypothesis. We show that appearance models can be learned efficiently via a regularized least squares framework, requiring only a few extra operations for each hypothesis branch. We obtain state-of-the-art results on popular tracking-by-detection datasets such as PETS and the recent MOT challenge.", "In this paper, we revisit the joint probabilistic data association (JPDA) technique and propose a novel solution based on recent developments in finding the m-best solutions to an integer linear program. The key advantage of this approach is that it makes JPDA computationally tractable in applications with high target and or clutter density, such as spot tracking in fluorescence microscopy sequences and pedestrian tracking in surveillance footage. We also show that our JPDA algorithm embedded in a simple tracking framework is surprisingly competitive with state-of-the-art global tracking methods in these two applications, while needing considerably less processing time.", "This paper presents a novel approach for continuous detection and tracking of moving objects observed by multiple stationary cameras. We address the tracking problem by simultaneously modeling motion and appearance of the moving objects. The object’s appearance is represented using color distribution model invariant to 2D rigid and scale transformation. It provides an efficient blobs’ similarity measure for tracking. The motion models are obtained using a Kalman Filter (KF) process, which predicts the position of the moving object in 2D and 3D. The tracking is performed by the maximization of a joint probability model reflecting objects’ motion and appearance. The novelty of our approach consists in integrating multiple cues and multiple views in a JPDAF for tracking a large number of moving people with partial and total occlusions. 
We demonstrate the performances of the proposed method on a soccer game captured by two stationary cameras.", "An algorithm for tracking multiple targets in a cluttered enviroment is developed. The algorithm is capable of initiating tracks, accounting for false or missing reports, and processing sets of dependent reports. As each measurement is received, probabilities are calculated for the hypotheses that the measurement came from previously known targets in a target file, or from a new target, or that the measurement is false. Target states are estimated from each such data-association hypothesis using a Kalman filter. As more measurements are received, the probabilities of joint hypotheses are calculated recursively using all available information such as density of unknown targets, density of false targets, probability of detection, and location uncertainty. This branching technique allows correlation of a measurement with its source based on subsequent, as well as previous, data. To keep the number of hypotheses reasonable, unlikely hypotheses are eliminated and hypotheses with similar target estimates are combined. To minimize computational requirements, the entire set of targets and measurements is divided into clusters that are solved independently. In an illustrative example of aircraft tracking, the algorithm successfully tracks targets over a wide range of conditions.", "We analyze the computational problem of multi-object tracking in video sequences. We formulate the problem using a cost function that requires estimating the number of tracks, as well as their birth and death states. We show that the global solution can be obtained with a greedy algorithm that sequentially instantiates tracks using shortest path computations on a flow network. Greedy algorithms allow one to embed pre-processing steps, such as nonmax suppression, within the tracking algorithm. 
Furthermore, we give a near-optimal algorithm based on dynamic programming which runs in time linear in the number of objects and linear in the sequence length. Our algorithms are fast, simple, and scalable, allowing us to process dense input data. This results in state-of-the-art performance.", "In this paper, we present a method for simultaneously tracking thousands of targets in biological image sequences, which is of major importance in modern biology. The complexity and inherent randomness of the problem lead us to propose a unified probabilistic framework for tracking biological particles in microscope images. The framework includes realistic models of particle motion and existence and of fluorescence image features. For the track extraction process per se, the very cluttered conditions motivate the adoption of a multiframe approach that enforces tracking decision robustness to poor imaging conditions and to random target movements. We tackle the large-scale nature of the problem by adapting the multiple hypothesis tracking algorithm to the proposed framework, resulting in a method with a favorable tradeoff between the model complexity and the computational cost of the tracking procedure. When compared to the state-of-the-art tracking techniques for bioimaging, the proposed algorithm is shown to be the only method providing high-quality results despite the critically poor imaging conditions and the dense target presence. We thus demonstrate the benefits of advanced Bayesian tracking techniques for the accurate computational modeling of dynamical biological processes, which is promising for further developments in this domain.", "Multi-object tracking can be achieved by detecting objects in individual frames and then linking detections across frames. Such an approach can be made very robust to the occasional detection failure: If an object is not detected in a frame but is in previous and following ones, a correct trajectory will nevertheless be produced. 
By contrast, a false-positive detection in a few frames will be ignored. However, when dealing with a multiple target problem, the linking step results in a difficult optimization problem in the space of all possible families of trajectories. This is usually dealt with by sampling or greedy search based on variants of Dynamic Programming which can easily miss the global optimum. In this paper, we show that reformulating that step as a constrained flow optimization results in a convex problem. We take advantage of its particular structure to solve it using the k-shortest paths algorithm, which is very fast. This new approach is far simpler formally and algorithmically than existing techniques and lets us demonstrate excellent performance in two very different contexts.", "The problem of associating data with targets in a cluttered multi-target environment is discussed and applied to passive sonar tracking. The probabilistic data association (PDA) method, which is based on computing the posterior probability of each candidate measurement found in a validation gate, assumes that only one real target is present and all other measurements are Poisson-distributed clutter. In this paper, a new theoretical result is presented: the joint probabilistic data association (JPDA) algorithm, in which joint posterior association probabilities are computed for multiple targets (or multiple discrete interfering sources) in Poisson clutter. The algorithm is applied to a passive sonar tracking problem with multiple sensors and targets, in which a target is not fully observable from a single sensor. Targets are modeled with four geographic states, two or more acoustic states, and realistic (i.e., low) probabilities of detection at each sample time. A simulation result is presented for two heavily interfering targets illustrating the dramatic tracking improvements obtained by estimating the targets' states using joint association probabilities." ] }
1603.08592
2334314520
Tracking many vehicles in wide-coverage aerial imagery is crucial for understanding events in a large field of view. Most approaches aim to associate detections from frame differencing into tracks. However, slow or stopped vehicles result in long-term missing detections and further cause tracking discontinuities. Relying merely on appearance cues to recover missing detections is difficult, as targets are extremely small and in grayscale. In this paper, we address the limitations of detection-association methods by coupling them with a local context tracker (LCT), which does not rely on motion detections. On one hand, our LCT learns neighboring spatial relations and tracks each target in consecutive frames using graph optimization. It takes advantage of context constraints to avoid drifting to nearby targets. We generate hypotheses from sparse and dense flow efficiently to keep solutions tractable. On the other hand, we use a detection-association strategy to extract short tracks in batch processing. We explicitly handle merged detections by generating additional hypotheses from them. Our evaluation on wide-area aerial imagery sequences shows significant improvement over state-of-the-art methods.
Solving the tracking problem with machine learning techniques has been shown to be effective in boosting the discriminative ability of appearance models, both for single-target tracking @cite_28 @cite_42 @cite_10 and for multi-target tracking @cite_17 @cite_16 @cite_33. However, it is almost impossible to learn meaningful appearance information in WAMI because the target size is extremely small and the imagery is typically in grayscale. Targets are often visually similar to each other and to background patches.
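The adaptive appearance models discussed above typically maintain an online-updated template with a forgetting factor, so older observations gradually lose influence. A minimal sketch of that update rule follows (our illustration of the general idea, not code from any of the cited trackers; the blend factor `alpha` and the flattened-patch representation are assumptions).

```python
# Minimal sketch of an online appearance template with exponential forgetting:
# the template is an exponentially weighted mean of past patches, the same
# basic mechanism behind incremental-subspace and online-learned appearance
# models. With tiny grayscale WAMI targets, such a template carries very
# little discriminative information.

def update_template(template, patch, alpha=0.1):
    """Blend a new grayscale patch into the running template.

    alpha controls how quickly old appearance is forgotten:
    alpha=0 never updates, alpha=1 keeps only the newest patch.
    """
    return [(1.0 - alpha) * t + alpha * p for t, p in zip(template, patch)]


template = [100.0, 100.0, 100.0, 100.0]  # flattened 2x2 grayscale patch
for _ in range(3):                       # target gradually darkens toward 80
    template = update_template(template, [80.0, 80.0, 80.0, 80.0], alpha=0.5)
print([round(v, 1) for v in template])   # -> [82.5, 82.5, 82.5, 82.5]
```

After three updates the template has moved most of the way from the old intensity (100) toward the new one (80), which is the drift-versus-adaptivity trade-off that the forgetting factor controls.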
{ "cite_N": [ "@cite_33", "@cite_28", "@cite_42", "@cite_16", "@cite_10", "@cite_17" ], "mid": [ "2225887246", "2139047213", "2167089254", "1825108226", "", "2132200263" ], "abstract": [ "Online Multi-Object Tracking (MOT) has wide applications in time-critical video analysis scenarios, such as robot navigation and autonomous driving. In tracking-by-detection, a major challenge of online MOT is how to robustly associate noisy object detections on a new video frame with previously tracked objects. In this work, we formulate the online MOT problem as decision making in Markov Decision Processes (MDPs), where the lifetime of an object is modeled with a MDP. Learning a similarity function for data association is equivalent to learning a policy for the MDP, and the policy learning is approached in a reinforcement learning fashion which benefits from both advantages of offline-learning and online-learning for data association. Moreover, our framework can naturally handle the birth death and appearance disappearance of targets by treating them as state transitions in the MDP while leveraging existing online single object tracking methods. We conduct experiments on the MOT Benchmark [24] to verify the effectiveness of our method.", "Visual tracking, in essence, deals with non-stationary image streams that change over time. While most existing algorithms are able to track objects well in controlled environments, they usually fail in the presence of significant variation of the object's appearance or surrounding illumination. One reason for such failures is that many algorithms employ fixed appearance models of the target. Such models are trained using only appearance data available before tracking begins, which in practice limits the range of appearances that are modeled, and ignores the large volume of information (such as shape changes or specific lighting conditions) that becomes available during tracking. 
In this paper, we present a tracking method that incrementally learns a low-dimensional subspace representation, efficiently adapting online to changes in the appearance of the target. The model update, based on incremental algorithms for principal component analysis, includes two important features: a method for correctly updating the sample mean, and a forgetting factor to ensure less modeling power is expended fitting older observations. Both of these features contribute measurably to improving overall tracking performance. Numerous experiments demonstrate the effectiveness of the proposed tracking algorithm in indoor and outdoor environments where the target objects undergo large changes in pose, scale, and illumination.", "In this paper, we address the problem of learning an adaptive appearance model for object tracking. In particular, a class of tracking techniques called “tracking by detection” have been shown to give promising results at real-time speeds. These methods train a discriminative classifier in an online manner to separate the object from the background. This classifier bootstraps itself by using the current tracker state to extract positive and negative examples from the current frame. Slight inaccuracies in the tracker can therefore lead to incorrectly labeled training examples, which degrades the classifier and can cause further drift. In this paper we show that using Multiple Instance Learning (MIL) instead of traditional supervised learning avoids these problems, and can therefore lead to a more robust tracker with fewer parameter tweaks. We present a novel online MIL algorithm for object tracking that achieves superior results with real-time performance.", "We introduce an online learning approach to produce discriminative part-based appearance models (DPAMs) for tracking multiple humans in real scenes by incorporating association based and category free tracking methods. 
Detection responses are gradually associated into tracklets in multiple levels to produce final tracks. Unlike most previous multi-target tracking approaches which do not explicitly consider occlusions in appearance modeling, we introduce a part based model that explicitly finds unoccluded parts by occlusion reasoning in each frame, so that occluded parts are removed in appearance modeling. Then DPAMs for each tracklet is online learned to distinguish a tracklet with others as well as the background, and is further used in a conservative category free tracking approach to partially overcome the missed detection problem as well as to reduce difficulties in tracklet associations under long gaps. We evaluate our approach on three public data sets, and show significant improvements compared with state-of-art methods.", "", "We present an approach for online learning of discriminative appearance models for robust multi-target tracking in a crowded scene from a single camera. Although much progress has been made in developing methods for optimal data association, there has been comparatively less work on the appearance models, which are key elements for good performance. Many previous methods either use simple features such as color histograms, or focus on the discriminability between a target and the background which does not resolve ambiguities between the different targets. We propose an algorithm for learning a discriminative appearance model for different targets. Training samples are collected online from tracklets within a time sliding window based on some spatial-temporal constraints; this allows the models to adapt to target instances. Learning uses an Ad-aBoost algorithm that combines effective image descriptors and their corresponding similarity measurements. We term the learned models as OLDAMs. 
Our evaluations indicate that OLDAMs have significantly higher discrimination between different targets than conventional holistic color histograms, and when integrated into a hierarchical association framework, they help improve the tracking accuracy, particularly reducing the false alarms and identity switches." ] }