Columns: aid (string), mid (string), abstract (string), related_work (string), ref_abstract (dict)
1606.06108
2475269242
The visual question answering (VQA) task not only bridges the gap between images and language, but also requires that specific contents within the image be understood as indicated by the linguistic context of the question, in order to generate accurate answers. Thus, it is critical to build an efficient joint embedding of images and texts. We implement DualNet, which fully takes advantage of the discriminative power of both image and textual features by separately performing two operations. Building an ensemble of DualNets further boosts the performance. Contrary to common belief, our method proved effective on both real images and abstract scenes, despite the significantly different properties of the respective domains. Our method outperformed previous state-of-the-art methods in the real-images category even without explicitly employing an attention mechanism, and also outperformed our own state-of-the-art method in the abstract-scenes category, which recently won first place in the VQA Challenge 2016.
@cite_15 proposed a number of improvements to the dynamic memory network (DMN). Their DMN+ model introduced a novel input module based on a two-level encoder with a sentence reader and an input fusion layer, and implemented the memory with gated recurrent units (GRUs).
{ "cite_N": [ "@cite_15" ], "mid": [ "2293453011" ], "abstract": [ "Neural network architectures with memory and attention mechanisms exhibit certain reasoning capabilities required for question answering. One such architecture, the dynamic memory network (DMN), obtained high accuracy on a variety of language tasks. However, it was not shown whether the architecture achieves strong results for question answering when supporting facts are not marked during training or whether it could be applied to other modalities such as images. Based on an analysis of the DMN, we propose several improvements to its memory and input modules. Together with these changes we introduce a novel input module for images in order to be able to answer visual questions. Our new DMN+ model improves the state of the art on both the Visual Question Answering dataset and the bAbI-10k text question-answering dataset without supporting fact supervision." ] }
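The GRU update at the heart of such a memory module can be sketched in a few lines. This is a generic single-step GRU in NumPy with illustrative weight names, not the DMN+ authors' code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h_prev, x, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU update: gates decide how much of the previous
    hidden state to keep versus overwrite with new input."""
    z = sigmoid(Wz @ x + Uz @ h_prev)              # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev))  # candidate state
    return (1 - z) * h_prev + z * h_tilde

# Toy dimensions: hidden size 4, input size 3.
rng = np.random.default_rng(0)
h = np.zeros(4)
Ws = [rng.normal(size=s) for s in [(4, 3), (4, 4), (4, 3), (4, 4), (4, 3), (4, 4)]]
for _ in range(5):                                  # run a few steps
    h = gru_step(h, rng.normal(size=3), *Ws)
print(h.shape)  # (4,)
```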
@cite_6 proposed the focused dynamic attention (FDA) model, which exploits an object detector to determine regions of interest. An LSTM is used to embed the region features and global features into a common space.
{ "cite_N": [ "@cite_6" ], "mid": [ "2340874616" ], "abstract": [ "Visual Question and Answering (VQA) problems are attracting increasing interest from multiple research disciplines. Solving VQA problems requires techniques from both computer vision for understanding the visual contents of a presented image or video, as well as the ones from natural language processing for understanding semantics of the question and generating the answers. Regarding visual content modeling, most of existing VQA methods adopt the strategy of extracting global features from the image or video, which inevitably fails in capturing fine-grained information such as spatial configuration of multiple objects. Extracting features from auto-generated regions -- as some region-based image recognition methods do -- cannot essentially address this problem and may introduce some overwhelming irrelevant features with the question. In this work, we propose a novel Focused Dynamic Attention (FDA) model to provide better aligned image content representation with proposed questions. Being aware of the key words in the question, FDA employs off-the-shelf object detector to identify important regions and fuse the information from the regions and global features via an LSTM unit. Such question-driven representations are then combined with question representation and fed into a reasoning unit for generating the answers. Extensive evaluation on a large-scale benchmark dataset, VQA, clearly demonstrate the superior performance of FDA over well-established baselines." ] }
@cite_16 proposed a spatial memory network in which neuron activations of different spatial regions are stored in memory, and regions of high relevance are chosen depending on the question. The latter step is made possible by a novel spatial attention architecture designed to align words with image patches.
{ "cite_N": [ "@cite_16" ], "mid": [ "2255577267" ], "abstract": [ "We address the problem of Visual Question Answering (VQA), which requires joint image and language understanding to answer a question about a given photograph. Recent approaches have applied deep image captioning methods based on convolutional-recurrent networks to this problem, but have failed to model spatial inference. To remedy this, we propose a model we call the Spatial Memory Network and apply it to the VQA task. Memory networks are recurrent neural networks with an explicit attention mechanism that selects certain parts of the information stored in memory. Our Spatial Memory Network stores neuron activations from different spatial regions of the image in its memory, and uses the question to choose relevant regions for computing the answer, a process of which constitutes a single \"hop\" in the network. We propose a novel spatial attention architecture that aligns words with image patches in the first hop, and obtain improved results by adding a second attention hop which considers the whole question to choose visual evidence based on the results of the first hop. To better understand the inference process learned by the network, we design synthetic questions that specifically require spatial inference and visualize the attention weights. We evaluate our model on two published visual question answering datasets, DAQUAR [1] and VQA [2], and obtain improved results compared to a strong deep baseline model (iBOWIMG) which concatenates image and question features to predict the answer [3]." ] }
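The word-patch alignment idea behind such spatial attention can be illustrated with a minimal dot-product attention sketch; dimensions and names here are illustrative, not the paper's architecture:

```python
import numpy as np

def spatial_attention(patches, question):
    """Score each image patch against the question embedding and
    return a softmax-normalized attention map plus the attended
    visual feature (a weighted sum of patch features)."""
    scores = patches @ question                  # (num_patches,)
    scores -= scores.max()                       # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()
    attended = weights @ patches                 # (feature_dim,)
    return weights, attended

rng = np.random.default_rng(1)
patches = rng.normal(size=(49, 8))   # 7x7 grid of patch features
question = rng.normal(size=8)        # pooled question embedding
w, v = spatial_attention(patches, question)
print(round(float(w.sum()), 6))      # 1.0 -- weights form a distribution
```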
@cite_17 converted each question into a tuple containing the essential clues to the visual concept of the image. Each tuple (P, R, S) consists of a primary object (P), a secondary object (S), and their relation (R). Mutual information was employed to determine which object is the primary one and which is the secondary one. They also augmented the dataset through crowd-sourcing in order to balance the biases in the dataset. Their visual features included histogram-like vectors for the primary and secondary objects, as well as the absolute and relative locations of the objects modeled by GMMs. We show that this model's performance is enhanced by the addition of deep features, both holistic and regional, and that applying our DualNet further improves the performance.
{ "cite_N": [ "@cite_17" ], "mid": [ "2273038706" ], "abstract": [ "The complex compositional structure of language makes problems at the intersection of vision and language challenging. But language also provides a strong prior that can result in good superficial performance, without the underlying models truly understanding the visual content. This can hinder progress in pushing state of art in the computer vision aspects of multi-modal AI. In this paper, we address binary Visual Question Answering (VQA) on abstract scenes. We formulate this problem as visual verification of concepts inquired in the questions. Specifically, we convert the question to a tuple that concisely summarizes the visual concept to be detected in the image. If the concept can be found in the image, the answer to the question is \"yes\", and otherwise \"no\". Abstract scenes play two roles (1) They allow us to focus on the high-level semantics of the VQA task as opposed to the low-level recognition problems, and perhaps more importantly, (2) They provide us the modality to balance the dataset such that language priors are controlled, and the role of vision is essential. In particular, we collect fine-grained pairs of scenes for every question, such that the answer to the question is \"yes\" for one scene, and \"no\" for the other for the exact same question. Indeed, language priors alone do not perform better than chance on our balanced dataset. Moreover, our proposed approach matches the performance of a state-of-the-art VQA approach on the unbalanced dataset, and outperforms it on the balanced dataset." ] }
1606.05963
2950447414
It is hard to operate and debug systems like OpenStack that integrate many independently developed modules with multiple levels of abstractions. A major challenge is to navigate through the complex dependencies and relationships of the states in different modules or subsystems, to ensure the correctness and consistency of these states. We present a system that captures the runtime states and events from the entire OpenStack-Ceph stack, and automatically organizes these data into a graph that we call system operation state graph (SOSG). With SOSG we can use intuitive graph traversal techniques to solve problems like reasoning about the state of a virtual machine. Also, using graph-based anomaly detection, we can automatically discover hidden problems in OpenStack. We have a scalable implementation of SOSG, and evaluate the approach on a 125-node production OpenStack cluster, finding a number of interesting problems.
Anomaly detection has been widely used in system problem detection @cite_11 . @cite_18 takes advantage of Spark to perform large-scale anomaly detection and uses it to detect VM performance problems. @cite_19 uses anomaly detection to find faults in a multi-tier web system with redundancy. These projects use a small number of homogeneous data sources, while we focus on analyzing states across different system components.
{ "cite_N": [ "@cite_19", "@cite_18", "@cite_11" ], "mid": [ "", "2044019460", "2123967542" ], "abstract": [ "", "Anomaly detection refers to the identification of patterns in a dataset that do not conform to expected patterns. Depending on the domain, the non-conformant patterns are assigned various tags, e.g. anomalies, outliers, exceptions, malwares and so forth. Online anomaly detection aims to detect anomalies in data flowing in a streaming fashion. Such stream data is commonplace in today's cloud data centers that house a large array of virtual machines(VM) producing vast amounts of performance data in real-time. Sophisticated detection mechanism will likely entail collation of data from heterogeneous sources with diversified data format and semantics. Therefore, detection of performance anomaly in this context requires a distributed framework with high throughput and low latency. Apache Spark is one such framework that represents the bleeding-edge amongst its contemporaries. In this paper, we have taken up the challenge of anomaly detection in VMware based cloud data centers. We have employed a Chi-square based statistical anomaly detection technique in Spark. We have demonstrated how to take advantage of the high processing power of Spark to perform anomaly detection on heterogeneous data using statistical techniques. Our approach is optimally designed to cope with the heterogeneity of input data streams and the experiments we conducted testify to its efficacy in online anomaly detection.", "Online anomaly detection is an important step in data center management, requiring light-weight techniques that provide sufficient accuracy for subsequent diagnosis and management actions. This paper presents statistical techniques based on the Tukey and Relative Entropy statistics, and applies them to data collected from a production environment and to data captured from a testbed for multi-tier web applications running on server class machines. 
The proposed techniques are lightweight and improve over standard Gaussian assumptions in terms of performance." ] }
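The Chi-square-style statistical test described in the cited abstract can be illustrated with a toy batch version. The threshold, seed, and data are illustrative; the cited system runs this kind of test over Spark streams rather than NumPy arrays:

```python
import numpy as np

def chi_square_scores(window, baseline_mean, baseline_var):
    """Per-sample chi-square-style score against a baseline:
    large normalized squared deviations flag anomalies."""
    return (window - baseline_mean) ** 2 / baseline_var

rng = np.random.default_rng(2)
baseline = rng.normal(50.0, 5.0, size=1000)   # normal metric history
mu, var = baseline.mean(), baseline.var()
stream = np.concatenate([rng.normal(50.0, 5.0, size=20), [95.0]])  # spike at end
scores = chi_square_scores(stream, mu, var)
anomalies = np.where(scores > 9.0)[0]          # ~3-sigma threshold
print(anomalies)                               # index 20 (the spike) is flagged
```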
Anomaly detection methods. There are many anomaly detection methods for different types of data. @cite_27 @cite_16 provide techniques that simplify data from heterogeneous sources to improve anomaly detection results. Distance-based anomaly detection @cite_13 @cite_23 @cite_31 @cite_20 comprises techniques that allow an anomaly to be described by a probability model. Graph anomaly detection is also a well-studied topic @cite_3 @cite_35 .
{ "cite_N": [ "@cite_35", "@cite_3", "@cite_27", "@cite_23", "@cite_31", "@cite_16", "@cite_13", "@cite_20" ], "mid": [ "", "2032280284", "2014170720", "2131975293", "2111161380", "2115448240", "2061240327", "" ], "abstract": [ "", "Anomaly detection is an area that has received much attention in recent years. It has a wide variety of applications, including fraud detection and network intrusion detection. A good deal of research has been performed in this area, often using strings or attribute-value data as the medium from which anomalies are to be extracted. Little work, however, has focused on anomaly detection in graph-based data. In this paper, we introduce two techniques for graph-based anomaly detection. In addition, we introduce a new method for calculating the regularity of a graph, with applications to anomaly detection. We hypothesize that these methods will prove useful both for finding anomalies, and for determining the likelihood of successful anomaly detection within graph-based data. We provide experimental results using both real-world network intrusion data and artificially-created data.", "Intrusion Detection is an important component of the security of the Network, through the critical information of the network and host system, it can determine the user’s invasion of illegal and legal acts of the user misuse of resources, and make an adequate response. According to the problem, which machine learning anomaly detection effect is not ideal when the user behavior changes and a separate anomaly detection. Based on the Bayesian inference anomaly detection, applying the Bayesian inference of statistical methods to machine learning anomaly detection, this paper established a decision tree corresponding to the method. 
This method overcomes the satisfactory of the anomaly detection individual test, and improves the machine learning in the predictive ability of anomaly detection and anomaly detection efficiency.", "We present Resilient Distributed Datasets (RDDs), a distributed memory abstraction that lets programmers perform in-memory computations on large clusters in a fault-tolerant manner. RDDs are motivated by two types of applications that current computing frameworks handle inefficiently: iterative algorithms and interactive data mining tools. In both cases, keeping data in memory can improve performance by an order of magnitude. To achieve fault tolerance efficiently, RDDs provide a restricted form of shared memory, based on coarse-grained transformations rather than fine-grained updates to shared state. However, we show that RDDs are expressive enough to capture a wide class of computations, including recent specialized programming models for iterative jobs, such as Pregel, and new applications that these models do not capture. We have implemented RDDs in a system called Spark, which we evaluate through a variety of user applications and benchmarks.", "We propose the use of commute distance, a random walk metric, to discover anomalies in network traffic data. The commute distance based anomaly detection approach has several advantages over Principal Component Analysis (PCA), which is the method of choice for this task: (i) It generalizes both distance and density based anomaly detection techniques while PCA is primarily distance-based (ii) It is agnostic about the underlying data distribution, while PCA is based on the assumption that data follows a Gaussian distribution and (iii) It is more robust compared to PCA, i.e., a perturbation of the underlying data or changes in parameters used will have a less significant effect on the output of it than PCA. 
Experiments and analysis on simulated and real datasets are used to validate our claims.", "While dealing with sensitive personnel data, the data have to be maintained to preserve integrity and usefulness. The mechanisms of the natural immune system are very promising in this area, it being an efficient anomaly or change detection system. This paper reports anomaly detection results with single and multidimensional data sets using the negative selection algorithm developed by (1994).", "This paper deals with finding outliers (exceptions) in large, multidimensional datasets. The identification of outliers can lead to the discovery of truly unexpected knowledge in areas such as electronic commerce, credit card fraud, and even the analysis of performance statistics of professional athletes. Existing methods that we have seen for finding outliers can only deal efficiently with two dimensions attributes of a dataset. In this paper, we study the notion of DB (distance-based) outliers. Specifically, we show that (i) outlier detection can be done efficiently for large datasets, and for k-dimensional datasets with large values of k (e.g., @math ); and (ii), outlier detection is a meaningful and important knowledge discovery task.First, we present two simple algorithms, both having a complexity of @math , k being the dimensionality and N being the number of objects in the dataset. These algorithms readily support datasets with many more than two attributes. Second, we present an optimized cell-based algorithm that has a complexity that is linear with respect to N, but exponential with respect to k. We provide experimental results indicating that this algorithm significantly outperforms the two simple algorithms for @math . Third, for datasets that are mainly disk-resident, we present another version of the cell-based algorithm that guarantees at most three passes over a dataset. Again, experimental results show that this algorithm is by far the best for @math . 
Finally, we discuss our work on three real-life applications, including one on spatio-temporal data (e.g., a video surveillance application), in order to confirm the relevance and broad applicability of DB outliers.", "" ] }
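The distance-based DB(p, D) outlier notion from the cited work admits a direct, if naive, O(N^2) implementation (the dataset here is synthetic, for illustration only):

```python
import numpy as np

def db_outliers(points, p, D):
    """Naive DB(p, D)-outlier detection: a point is an outlier if
    at least fraction p of the other points lie farther than
    distance D from it."""
    n = len(points)
    out = []
    for i, x in enumerate(points):
        dists = np.linalg.norm(points - x, axis=1)
        far = int((dists > D).sum())           # neighbors beyond D
        if far / (n - 1) >= p:
            out.append(i)
    return out

rng = np.random.default_rng(3)
cluster = rng.normal(0.0, 1.0, size=(100, 2))  # dense 2-D cluster
data = np.vstack([cluster, [[10.0, 10.0]]])    # one distant point
print(db_outliers(data, p=0.95, D=5.0))        # [100]
```

The cell-based algorithms in the cited paper replace the inner distance scan with grid pruning to reach time linear in N for low dimensionality.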
Scalable graph computation. The recent development of efficient graph computation frameworks, such as Pregel @cite_14 , PowerGraph @cite_17 , GraphX, and Apache Giraph @cite_15 , enables our approach. Specifically, we use GraphX to process the giant state graph efficiently.
{ "cite_N": [ "@cite_15", "@cite_14", "@cite_17" ], "mid": [ "", "2170616854", "2096544401" ], "abstract": [ "", "Many practical computing problems concern large graphs. Standard examples include the Web graph and various social networks. The scale of these graphs - in some cases billions of vertices, trillions of edges - poses challenges to their efficient processing. In this paper we present a computational model suitable for this task. Programs are expressed as a sequence of iterations, in each of which a vertex can receive messages sent in the previous iteration, send messages to other vertices, and modify its own state and that of its outgoing edges or mutate graph topology. This vertex-centric approach is flexible enough to express a broad set of algorithms. The model has been designed for efficient, scalable and fault-tolerant implementation on clusters of thousands of commodity computers, and its implied synchronicity makes reasoning about programs easier. Distribution-related details are hidden behind an abstract API. The result is a framework for processing large graphs that is expressive and easy to program.", "While high-level data parallel frameworks, like MapReduce, simplify the design and implementation of large-scale data processing systems, they do not naturally or efficiently support many important data mining and machine learning algorithms and can lead to inefficient learning systems. To help fill this critical void, we introduced the GraphLab abstraction which naturally expresses asynchronous, dynamic, graph-parallel computation while ensuring data consistency and achieving a high degree of parallel performance in the shared-memory setting. In this paper, we extend the GraphLab framework to the substantially more challenging distributed setting while preserving strong data consistency guarantees. We develop graph based extensions to pipelined locking and data versioning to reduce network congestion and mitigate the effect of network latency. 
We also introduce fault tolerance to the GraphLab abstraction using the classic Chandy-Lamport snapshot algorithm and demonstrate how it can be easily implemented by exploiting the GraphLab abstraction itself. Finally, we evaluate our distributed implementation of the GraphLab abstraction on a large Amazon EC2 deployment and show 1-2 orders of magnitude performance gains over Hadoop-based implementations." ] }
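The vertex-centric superstep model described in the Pregel abstract can be sketched with a toy synchronous PageRank; this single-process Python stand-in mimics what Pregel/GraphX distribute across a cluster:

```python
def pregel_pagerank(edges, num_vertices, supersteps=20, d=0.85):
    """Toy synchronous vertex-centric iteration in the Pregel style:
    each superstep, every vertex sends its rank along outgoing
    edges, then combines incoming messages to update its state."""
    out = {v: [] for v in range(num_vertices)}
    for src, dst in edges:
        out[src].append(dst)
    rank = {v: 1.0 / num_vertices for v in range(num_vertices)}
    for _ in range(supersteps):
        inbox = {v: 0.0 for v in range(num_vertices)}
        for v, targets in out.items():          # "send messages" phase
            for t in targets:
                inbox[t] += rank[v] / len(targets)
        rank = {v: (1 - d) / num_vertices + d * inbox[v]
                for v in range(num_vertices)}   # "compute" phase
    return rank

r = pregel_pagerank([(0, 1), (0, 2), (1, 2), (2, 0)], 3)
print(max(r, key=r.get))  # vertex 2 accumulates the most rank
```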
Knowledge bases and knowledge graphs. The property graph is a special case of the knowledge base (KB), a well-studied topic in data mining. There are many popular knowledge base systems, such as Knowledge Vault @cite_1 , YAGO yago1, yago2, yago3 , DBpedia @cite_4 , Freebase @cite_9 , and NELL @cite_29 . Many efforts have also been devoted to building these KBs @cite_34 @cite_30 . Our state graph is similar to a knowledge graph, but it specifically targets machine-generated system states and thus can be built automatically.
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_9", "@cite_29", "@cite_1", "@cite_34" ], "mid": [ "804133461", "102708294", "2094728533", "1512387364", "2016753842", "2133134975" ], "abstract": [ "We present YAGO3, an extension of the YAGO knowledge base that combines the information from the Wikipedias in multiple languages. Our technique fuses the multilingual information with the English WordNet to build one coherent knowledge base. We make use of the categories, the infoboxes, and Wikidata, and learn the meaning of infobox attributes across languages. We run our method on 10 different languages, and achieve a precision of 95 -100 in the attribute mapping. Our technique enlarges YAGO by 1m new entities and 7m new facts.", "DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against datasets derived from Wikipedia and to link other datasets on the Web to Wikipedia data. We describe the extraction of the DBpedia datasets, and how the resulting information is published on the Web for human-andmachine-consumption. We describe some emerging applications from the DBpedia community and show how website authors can facilitate DBpedia content within their sites. Finally, we present the current status of interlinking DBpedia with other open datasets on the Web and outline how DBpedia could serve as a nucleus for an emerging Web of open data.", "Freebase is a practical, scalable tuple database used to structure general human knowledge. The data in Freebase is collaboratively created, structured, and maintained. Freebase currently contains more than 125,000,000 tuples, more than 4000 types, and more than 7000 properties. Public read write access to Freebase is allowed through an HTTP-based graph-query API using the Metaweb Query Language (MQL) as a data query and manipulation language. 
MQL provides an easy-to-use object-oriented interface to the tuple data in Freebase and is designed to facilitate the creation of collaborative, Web-based data-oriented applications.", "We consider here the problem of building a never-ending language learner; that is, an intelligent computer agent that runs forever and that each day must (1) extract, or read, information from the web to populate a growing structured knowledge base, and (2) learn to perform this task better than on the previous day. In particular, we propose an approach and a set of design principles for such an agent, describe a partial implementation of such a system that has already learned to extract a knowledge base containing over 242,000 beliefs with an estimated precision of 74 after running for 67 days, and discuss lessons learned from this preliminary attempt to build a never-ending learning agent.", "Recent years have witnessed a proliferation of large-scale knowledge bases, including Wikipedia, Freebase, YAGO, Microsoft's Satori, and Google's Knowledge Graph. To increase the scale even further, we need to explore automatic methods for constructing knowledge bases. Previous approaches have primarily focused on text-based extraction, which can be very noisy. Here we introduce Knowledge Vault, a Web-scale probabilistic knowledge base that combines extractions from Web content (obtained via analysis of text, tabular data, page structure, and human annotations) with prior knowledge derived from existing knowledge repositories. We employ supervised machine learning methods for fusing these distinct information sources. The Knowledge Vault is substantially bigger than any previously published structured knowledge repository, and features a probabilistic inference system that computes calibrated probabilities of fact correctness. 
We report the results of multiple studies that explore the relative utility of the different information sources and extraction methods.", "There are major trends to advance the functionality of search engines to a more expressive semantic level. This is enabled by the advent of knowledge-sharing communities such as Wikipedia and the progress in automatically extracting entities and relationships from semistructured as well as natural-language Web sources. Recent endeavors of this kind include DBpedia, EntityCube, KnowItAll, ReadTheWeb, and our own YAGO-NAGA project (and others). The goal is to automatically construct and maintain a comprehensive knowledge base of facts about named entities, their semantic classes, and their mutual relations as well as temporal contexts, with high precision and high recall. This tutorial discusses state-of-the-art methods, research opportunities, and open challenges along this avenue of knowledge harvesting." ] }
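A property graph over machine-generated states, as described above, can be sketched minimally. The node IDs, fields, and edge labels here are hypothetical illustrations, not the SOSG schema:

```python
class StateGraph:
    """Minimal property-graph sketch: nodes carry key/value state,
    edges are labeled, and traversal answers dependency queries."""

    def __init__(self):
        self.nodes, self.edges = {}, []

    def add_node(self, nid, **props):
        self.nodes[nid] = props

    def add_edge(self, src, label, dst):
        self.edges.append((src, label, dst))

    def neighbors(self, nid, label=None):
        """Follow outgoing edges, optionally filtered by label."""
        return [d for s, l, d in self.edges
                if s == nid and (label is None or l == label)]

g = StateGraph()
g.add_node("vm1", status="ACTIVE")
g.add_node("host3", status="up")
g.add_node("vol7", status="attached")
g.add_edge("vm1", "runs_on", "host3")
g.add_edge("vm1", "uses_volume", "vol7")
print(g.neighbors("vm1"))                # ['host3', 'vol7']
print(g.neighbors("vm1", "runs_on"))     # ['host3']
```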
1606.05859
2462181211
Point-of-interest (POI) recommendation is an important application in location-based social networks (LBSNs), which learns the user preference and mobility pattern from check-in sequences to recommend POIs. However, previous POI recommendation systems model check-in sequences based on either tensor factorization or Markov chain model, which cannot capture contextual check-in information in sequences. The contextual check-in information implies the complementary functions among POIs that compose an individual's daily check-in sequence. In this paper, we exploit the embedding learning technique to capture the contextual check-in information and further propose the SEquential Embedding Rank (SEER) model for POI recommendation. In particular, the SEER model learns user preferences via a pairwise ranking model under the sequential constraint modeled by the POI embedding learning method. Furthermore, we incorporate two important factors, i.e., temporal influence and geographical influence, into the SEER model to enhance the POI recommendation system. Due to the temporal variance of sequences on different days, we propose a temporal POI embedding model and incorporate the temporal POI representations into a temporal preference ranking model to establish the Temporal SEER (Teaser) model. In addition, we incorporate the geographical influence into the Teaser model and develop the Geo-Teaser model.
POI recommendation has attracted intensive academic attention recently. Most proposed methods are based on Collaborative Filtering (CF) techniques to learn user preferences on POIs. Researchers in @cite_3 @cite_18 @cite_24 employ user-based CF to recommend POIs, while other researchers @cite_36 @cite_9 @cite_44 @cite_4 @cite_8 leverage model-based CF, i.e., Matrix Factorization (MF) @cite_34 . Furthermore, some researchers @cite_21 @cite_39 observe that it is better to treat check-ins as implicit feedback rather than explicit ratings. They utilize the weighted regularized MF @cite_0 to model this kind of implicit feedback. Other researchers model the implicit feedback through pairwise learning techniques, which assume users prefer checked-in POIs over unchecked ones. Researchers in @cite_31 @cite_10 learn the pairwise preference via the Bayesian personalized ranking (BPR) loss @cite_37 . @cite_1 propose a ranking-based CF model to recommend POIs, which measures the pairwise preference through the WARP loss @cite_25 .
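The BPR pairwise objective mentioned above maximizes ln σ(x_ui − x_uj), i.e., it prefers a checked-in POI i over an unchecked POI j for user u. A minimal single-step sketch is given below; the function name, factor dimensions, learning rate, and regularization constant are illustrative assumptions, not code from any cited system.

```python
import numpy as np

def bpr_update(U, V, u, i, j, lr=0.05, reg=0.01):
    """One stochastic BPR step: user u prefers checked-in POI i over unchecked POI j.

    U: (n_users, d) user latent factors; V: (n_pois, d) POI latent factors.
    Performs gradient ascent on ln sigmoid(x_ui - x_uj) with L2 regularization.
    """
    # snapshot current factors so all gradients use the same values
    u_f, i_f, j_f = U[u].copy(), V[i].copy(), V[j].copy()
    x_uij = u_f @ (i_f - j_f)             # preference margin x_ui - x_uj
    g = 1.0 / (1.0 + np.exp(x_uij))       # sigmoid(-x_uij), gradient scale
    U[u] += lr * (g * (i_f - j_f) - reg * u_f)
    V[i] += lr * (g * u_f - reg * i_f)
    V[j] += lr * (-g * u_f - reg * j_f)
```

Repeated over sampled (u, i, j) triples, these updates drive the margin x_ui − x_uj positive, so checked-in POIs rank above unchecked ones.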
{ "cite_N": [ "@cite_18", "@cite_37", "@cite_4", "@cite_8", "@cite_36", "@cite_9", "@cite_21", "@cite_1", "@cite_3", "@cite_39", "@cite_24", "@cite_44", "@cite_0", "@cite_31", "@cite_34", "@cite_10", "@cite_25" ], "mid": [ "2073013176", "2140310134", "2241626324", "2248044446", "", "1984189333", "2017921654", "2044672016", "2087692915", "1981886741", "1986050033", "2189936406", "2101409192", "1546409232", "2054141820", "", "2103093728" ], "abstract": [ "The availability of user check-in data in large volume from the rapid growing location based social networks (LBSNs) enables many important location-aware services to users. Point-of-interest (POI) recommendation is one of such services, which is to recommend places where users have not visited before. Several techniques have been recently proposed for the recommendation service. However, no existing work has considered the temporal information for POI recommendations in LBSNs. We believe that time plays an important role in POI recommendations because most users tend to visit different places at different time in a day, visiting a restaurant at noon and visiting a bar at night. In this paper, we define a new problem, namely, the time-aware POI recommendation, to recommend POIs for a given user at a specified time in a day. To solve the problem, we develop a collaborative recommendation model that is able to incorporate temporal information. Moreover, based on the observation that users tend to visit nearby POIs, we further enhance the recommendation model by considering geographical information. Our experimental results on two real-world datasets show that the proposed approach outperforms the state-of-the-art POI recommendation methods substantially.", "Item recommendation is the task of predicting a personalized ranking on a set of items (e.g. websites, movies, products). In this paper, we investigate the most common scenario with implicit feedback (e.g. clicks, purchases). 
There are many methods for item recommendation from implicit feedback like matrix factorization (MF) or adaptive k-nearest-neighbor (kNN). Even though these methods are designed for the item prediction task of personalized ranking, none of them is directly optimized for ranking. In this paper we present a generic optimization criterion BPR-Opt for personalized ranking that is the maximum posterior estimator derived from a Bayesian analysis of the problem. We also provide a generic learning algorithm for optimizing models with respect to BPR-Opt. The learning method is based on stochastic gradient descent with bootstrap sampling. We show how to apply our method to two state-of-the-art recommender models: matrix factorization and adaptive kNN. Our experiments indicate that for the task of personalized ranking our optimization method outperforms the standard learning techniques for MF and kNN. The results show the importance of optimizing models for the right criterion.", "With the rapid development of Location-based Social Network (LBSN) services, a large number of Point-Of-Interests (POIs) have been available, which consequently raises a great demand of building personalized POI recommender systems. A personalized POI recommender system can significantly assist users to find their preferred POIs and help POI owners to attract more customers. However, it is very challenging to develop a personalized POI recommender system because a user's checkin decision making process is very complex and could be influenced by many factors such as social network and geographical distance. In the literature, a variety of methods have been proposed to tackle this problem. Most of these methods model user's preference for POIs with integrated approaches and consider all candidate POIs as a whole space. 
However, by carefully examining a longitudinal real-world checkin data, we find that the whole space of users' checkins actually consists of two parts: social friend space and user interest space. The social friend space denotes the set of POI candidates that users' friends have checked-in before and the user interest space refers to the set of POI candidates that are similar to users' historical checkins, but are not visited by their friends yet. Along this line, we develop separate models for the both spaces to recommend POIs. Specifically, in social friend space, we assume users would repeat their friends' historical POIs due to the preference propagation through social networks, and propose a new Social Friend Probabilistic Matrix Factorization (SFPMF) model. In user interest space, we propose a new User Interest Probabilistic Matrix Factorization (UIPMF) model to capture the correlations between a new POI and one user's historical POIs. To evaluate the proposed models, we conduct extensive experiments with many state-of-the-art baseline methods and evaluation metrics on the real-world data set. The experimental results firmly demonstrate the effectiveness of our proposed models.", "Location recommendation plays an essential role in helping people find places they are likely to enjoy. Though some recent research has studied how to recommend locations with the presence of social network and geographical information, few of them addressed the cold-start problem, specifically, recommending locations for new users. Because the visits to locations are often shared on social networks, rich semantics (e.g., tweets) that reveal a person's interests can be leveraged to tackle this challenge. A typical way is to feed them into traditional explicit-feedback content-aware recommendation methods (e.g., LibFM). As a user's negative preferences are not explicitly observable in most human mobility data, these methods need draw negative samples for better learning performance. 
However, prior studies have empirically shown that sampling-based methods don't perform as well as a method that considers all unvisited locations as negative but assigns them a lower confidence. To this end, we propose an Implicit-feedback based Content-aware Collaborative Filtering (ICCF) framework to incorporate semantic content and steer clear of negative sampling. For efficient parameter learning, we develop a scalable optimization algorithm, scaling linearly with the data size and the feature size. Furthermore, we offer a good explanation to ICCF, such that the semantic content is actually used to refine user similarity based on mobility. Finally, we evaluate ICCF with a large-scale LBSN dataset where users have profiles and text content. The results show that ICCF outperforms LibFM of the best configuration, and that user profiles and text content are not only effective at improving recommendation but also helpful for coping with the cold-start problem.", "", "Location-based social networks (LBSNs) have attracted an inordinate number of users and greatly enriched the urban experience in recent years. The availability of spatial, temporal and social information in online LBSNs offers an unprecedented opportunity to study various aspects of human behavior, and enable a variety of location-based services such as location recommendation. Previous work studied spatial and social influences on location recommendation in LBSNs. Due to the strong correlations between a user's check-in time and the corresponding check-in location, recommender systems designed for location recommendation inevitably need to consider temporal effects. In this paper, we introduce a novel location recommendation framework, based on the temporal properties of user movement observed from a real-world LBSN dataset. 
The experimental results exhibit the significance of temporal patterns in explaining user behavior, and demonstrate their power to improve location recommendation performance.", "Point-of-Interest (POI) recommendation has become an important means to help people discover attractive locations. However, extreme sparsity of user-POI matrices creates a severe challenge. To cope with this challenge, viewing mobility records on location-based social networks (LBSNs) as implicit feedback for POI recommendation, we first propose to exploit weighted matrix factorization for this task since it usually serves collaborative filtering with implicit feedback better. Besides, researchers have recently discovered a spatial clustering phenomenon in human mobility behavior on the LBSNs, i.e., individual visiting locations tend to cluster together, and also demonstrated its effectiveness in POI recommendation, thus we incorporate it into the factorization model. Particularly, we augment users' and POIs' latent factors in the factorization model with activity area vectors of users and influence area vectors of POIs, respectively. Based on such an augmented model, we not only capture the spatial clustering phenomenon in terms of two-dimensional kernel density estimation, but we also explain why the introduction of such a phenomenon into matrix factorization helps to deal with the challenge from matrix sparsity. We then evaluate the proposed algorithm on a large-scale LBSN dataset. The results indicate that weighted matrix factorization is superior to other forms of factorization models and that incorporating the spatial clustering phenomenon into matrix factorization improves recommendation performance.", "With the rapid growth of location-based social networks, Point of Interest (POI) recommendation has become an important research problem. However, the scarcity of the check-in data, a type of implicit feedback data, poses a severe challenge for existing POI recommendation methods. 
Moreover, different types of context information about POIs are available and how to leverage them becomes another challenge. In this paper, we propose a ranking based geographical factorization method, called Rank-GeoFM, for POI recommendation, which addresses the two challenges. In the proposed model, we consider that the check-in frequency characterizes users' visiting preference and learn the factorization by ranking the POIs correctly. In our model, POIs both with and without check-ins will contribute to learning the ranking and thus the data sparsity problem can be alleviated. In addition, our model can easily incorporate different types of context information, such as the geographical influence and temporal influence. We propose a stochastic gradient descent based algorithm to learn the factorization. Experiments on publicly available datasets under both user-POI setting and user-time-POI setting have been conducted to test the effectiveness of the proposed method. Experimental results under both settings show that the proposed method outperforms the state-of-the-art methods significantly in terms of recommendation accuracy.", "In this paper, we aim to provide a point-of-interests (POI) recommendation service for the rapid growing location-based social networks (LBSNs), e.g., Foursquare, Whrrl, etc. Our idea is to explore user preference, social influence and geographical influence for POI recommendations. In addition to deriving user preference based on user-based collaborative filtering and exploring social influence from friends, we put a special emphasis on geographical influence due to the spatial clustering phenomenon exhibited in user check-in activities of LBSNs. We argue that the geographical influence among POIs plays an important role in user check-in behaviors and model it by power law distribution. Accordingly, we develop a collaborative recommendation algorithm based on geographical influence based on naive Bayesian. 
Furthermore, we propose a unified POI recommendation framework, which fuses user preference to a POI with social influence and geographical influence. Finally, we conduct a comprehensive performance evaluation over two large-scale datasets collected from Foursquare and Whrrl. Experimental results with these real datasets show that the unified collaborative recommendation approach significantly outperforms a wide spectrum of alternative recommendation approaches.", "Geographical characteristics derived from the historical check-in data have been reported effective in improving location recommendation accuracy. However, previous studies mainly exploit geographical characteristics from a user's perspective, via modeling the geographical distribution of each individual user's check-ins. In this paper, we are interested in exploiting geographical characteristics from a location perspective, by modeling the geographical neighborhood of a location. The neighborhood is modeled at two levels: the instance-level neighborhood defined by a few nearest neighbors of the location, and the region-level neighborhood for the geographical region where the location exists. We propose a novel recommendation approach, namely Instance-Region Neighborhood Matrix Factorization (IRenMF), which exploits two levels of geographical neighborhood characteristics: a) instance-level characteristics, i.e., nearest neighboring locations tend to share more similar user preferences; and b) region-level characteristics, i.e., locations in the same geographical region may share similar user preferences. In IRenMF, the two levels of geographical characteristics are naturally incorporated into the learning of latent features of users and locations, so that IRenMF predicts users' preferences on locations more accurately. 
Extensive experiments on the real data collected from Gowalla, a popular LBSN, demonstrate the effectiveness and advantages of our approach.", "With the rapidly growing location-based social networks (LBSNs), personalized geo-social recommendation becomes an important feature for LBSNs. Personalized geo-social recommendation not only helps users explore new places but also makes LBSNs more prevalent to users. In LBSNs, aside from user preference and social influence, geographical influence has also been intensively exploited in the process of location recommendation based on the fact that geographical proximity significantly affects users' check-in behaviors. Although geographical influence on users should be personalized, current studies only model the geographical influence on all users' check-in behaviors in a universal way. In this paper, we propose a new framework called iGSLR to exploit personalized social and geographical influence on location recommendation. iGSLR uses a kernel density estimation approach to personalize the geographical influence on users' check-in behaviors as individual distributions rather than a universal distribution for all users. Furthermore, user preference, social influence, and personalized geographical influence are integrated into a unified geo-social recommendation framework. We conduct a comprehensive performance evaluation for iGSLR using two large-scale real data sets collected from Foursquare and Gowalla which are two of the most popular LBSNs. Experimental results show that iGSLR provides significantly superior location recommendation compared to other state-of-the-art geo-social recommendation techniques.", "The rapid urban expansion has greatly extended the physical boundary of users' living area and developed a large number of POIs (points of interest). POI recommendation is a task that facilitates users' urban exploration and helps them filter uninteresting POIs for decision making. 
While existing work of POI recommendation on location-based social networks (LBSNs) discovers the spatial, temporal, and social patterns of user check-in behavior, the use of content information has not been systematically studied. The various types of content information available on LBSNs could be related to different aspects of a user's check-in action, providing a unique opportunity for POI recommendation. In this work, we study the content information on LB-SNs w.r.t. POI properties, user interests, and sentiment indications. We model the three types of information under a unified POI recommendation framework with the consideration of their relationship to check-in actions. The experimental results exhibit the significance of content information in explaining user behavior, and demonstrate its power to improve POI recommendation performance on LBSNs.", "A common task of recommender systems is to improve customer experience through personalized recommendations based on prior implicit feedback. These systems passively track different sorts of user behavior, such as purchase history, watching habits and browsing activity, in order to model user preferences. Unlike the much more extensively researched explicit feedback, we do not have any direct input from the users regarding their preferences. In particular, we lack substantial evidence on which products consumer dislike. In this work we identify unique properties of implicit feedback datasets. We propose treating the data as indication of positive and negative preference associated with vastly varying confidence levels. This leads to a factor model which is especially tailored for implicit feedback recommenders. We also suggest a scalable optimization procedure, which scales linearly with the data size. The algorithm is used successfully within a recommender system for television shows. It compares favorably with well tuned implementations of other known methods. 
In addition, we offer a novel way to give explanations to recommendations given by this factor model.", "Personalized point-of-interest (POI) recommendation is a significant task in location-based social networks (LBSNs) as it can help provide better user experience as well as enable third-party services, e.g., launching advertisements. To provide a good recommendation, various research has been conducted in the literature. However, pervious efforts mainly consider the \"check-ins\" in a whole and omit their temporal relation. They can only recommend POI globally and cannot know where a user would like to go tomorrow or in the next few days. In this paper, we consider the task of successive personalized POI recommendation in LBSNs, which is a much harder task than standard personalized POI recommendation or prediction. To solve this task, we observe two prominent properties in the check-in sequence: personalized Markov chain and region localization. Hence, we propose a novel matrix factorization method, namely FPMC-LR, to embed the personalized Markov chains and the localized regions. Our proposed FPMC-LR not only exploits the personalized Markov chain in the check-in sequence, but also takes into account users' movement constraint, i.e., moving around a localized region. More importantly, utilizing the information of localized regions, we not only reduce the computation cost largely, but also discard the noisy information to boost recommendation. Results on two real-world LBSNs datasets demonstrate the merits of our proposed FPMC-LR.", "As the Netflix Prize competition has demonstrated, matrix factorization models are superior to classic nearest neighbor techniques for producing product recommendations, allowing the incorporation of additional information such as implicit feedback, temporal effects, and confidence levels.", "", "Retrieval tasks typically require a ranking of items given a query. 
Collaborative filtering tasks, on the other hand, learn to model user's preferences over items. In this paper we study the joint problem of recommending items to a user with respect to a given query, which is a surprisingly common task. This setup differs from the standard collaborative filtering one in that we are given a query × user × item tensor for training instead of the more traditional user × item matrix. Compared to document retrieval we do have a query, but we may or may not have content features (we will consider both cases) and we can also take account of the user's profile. We introduce a factorized model for this new task that optimizes the top-ranked items returned for the given query and user. We report empirical results where it outperforms several baselines." ] }
1606.05859
2462181211
Point-of-interest (POI) recommendation is an important application in location-based social networks (LBSNs), which learns the user preference and mobility pattern from check-in sequences to recommend POIs. However, previous POI recommendation systems model check-in sequences based on either tensor factorization or Markov chain model, which cannot capture contextual check-in information in sequences. The contextual check-in information implies the complementary functions among POIs that compose an individual's daily check-in sequence. In this paper, we exploit the embedding learning technique to capture the contextual check-in information and further propose the SEquential Embedding Rank (SEER) model for POI recommendation. In particular, the SEER model learns user preferences via a pairwise ranking model under the sequential constraint modeled by the POI embedding learning method. Furthermore, we incorporate two important factors, i.e., temporal influence and geographical influence, into the SEER model to enhance the POI recommendation system. Due to the temporal variance of sequences on different days, we propose a temporal POI embedding model and incorporate the temporal POI representations into a temporal preference ranking model to establish the Temporal SEER (Teaser) model. In addition, we incorporate the geographical influence into the Teaser model and develop the Geo-Teaser model.
Sequential influence has been mined for POI recommendation. Existing studies employ the Markov chain property of consecutive check-ins to capture the sequential pattern. Specifically, most successive POI recommendation systems depend on the sequential correlations in successive check-ins @cite_31 @cite_11 @cite_43 @cite_38 . Researchers in @cite_31 @cite_11 recommend the successive POIs on the basis of the Factorized Personalized Markov Chain (FPMC) model @cite_7 . @cite_43 employ a recurrent neural network (RNN) to find the sequential correlations. In addition, researchers in @cite_29 @cite_6 learn the transition pattern of POI categories in sequential check-ins. @cite_33 predict the sequential transition probability through an additive Markov chain model. However, all previous sequential models cannot capture contextual check-in information from the whole sequence. Hence, we propose a POI embedding method to learn representations of sequential POIs, which captures the contextual relations among check-ins in a sequence.
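The POI embedding idea sketched above treats the POIs in a check-in sequence like words in a sentence, so a skip-gram-style objective with negative sampling is one plausible instantiation: POIs that co-occur within a small window get aligned vectors. Everything here (function name, dimensions, uniform negative sampling, hyperparameters) is an illustrative assumption rather than the authors' implementation.

```python
import numpy as np

def train_poi_embeddings(sequences, n_pois, dim=8, window=1,
                         n_neg=2, lr=0.05, epochs=5, seed=0):
    """Skip-gram with negative sampling over check-in sequences:
    POIs that co-occur within `window` check-ins get aligned vectors."""
    rng = np.random.default_rng(seed)
    W_in = rng.normal(0, 0.1, (n_pois, dim))    # target POI vectors
    W_out = rng.normal(0, 0.1, (n_pois, dim))   # context POI vectors
    for _ in range(epochs):
        for seq in sequences:
            for t, center in enumerate(seq):
                lo, hi = max(0, t - window), min(len(seq), t + window + 1)
                for c in range(lo, hi):
                    if c == t:
                        continue
                    # one positive (observed) context plus uniform negatives
                    pairs = [(seq[c], 1.0)] + [
                        (int(rng.integers(n_pois)), 0.0) for _ in range(n_neg)]
                    for poi, label in pairs:
                        score = 1.0 / (1.0 + np.exp(-(W_in[center] @ W_out[poi])))
                        grad = score - label          # logistic-loss gradient
                        w_in_old = W_in[center].copy()
                        W_in[center] -= lr * grad * W_out[poi]
                        W_out[poi] -= lr * grad * w_in_old
    return W_in, W_out
```

After training, POIs that frequently co-occur in check-in sequences have a high inner product between their target and context vectors, which is the contextual signal the paragraph argues Markov-chain models miss.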
{ "cite_N": [ "@cite_38", "@cite_33", "@cite_7", "@cite_29", "@cite_6", "@cite_43", "@cite_31", "@cite_11" ], "mid": [ "2059512502", "2074194940", "", "", "2408569144", "2539781657", "1546409232", "2205235818" ], "abstract": [ "In location-based social networks (LBSNs), new successive point-of-interest (POI) recommendation is a newly formulated task which tries to regard the POI a user currently visits as his POI-related query and recommend new POIs the user has not visited before. While carefully designed methods are proposed to solve this problem, they ignore the essence of the task which involves retrieval and recommendation problem simultaneously and fail to employ the social relations or temporal information adequately to improve the results. In order to solve this problem, we propose a new model called location and time aware social collaborative retrieval model (LTSCR), which has two distinct advantages: (1) it models the location, time, and social information simultaneously for the successive POI recommendation task; (2) it efficiently utilizes the merits of the collaborative retrieval model which leverages weighted approximately ranked pairwise (WARP) loss for achieving better top-n ranking results, just as the new successive POI recommendation task needs. We conducted some comprehensive experiments on publicly available datasets and demonstrate the power of the proposed method, with 46.6 growth in Precision@5 and 47.3 improvement in Recall@5 over the best previous method.", "Providing location recommendations becomes an important feature for location-based social networks (LBSNs), since it helps users explore new places and makes LBSNs more prevalent to users. In LBSNs, geographical influence and social influence have been intensively used in location recommendations based on the facts that geographical proximity of locations significantly affects users' check-in behaviors and social friends often have common interests. 
Although human movement exhibits sequential patterns, most current studies on location recommendations do not consider any sequential influence of locations on users' check-in behaviors. In this paper, we propose a new approach called LORE to exploit sequential influence on location recommendations. First, LORE incrementally mines sequential patterns from location sequences and represents the sequential patterns as a dynamic Location-Location Transition Graph (L2TG). LORE then predicts the probability of a user visiting a location by Additive Markov Chain (AMC) with L2TG. Finally, LORE fuses sequential influence with geographical influence and social influence into a unified recommendation framework; in particular the geographical influence is modeled as two-dimensional check-in probability distributions rather than one-dimensional distance probability distributions in existing works. We conduct a comprehensive performance evaluation for LORE using two large-scale real data sets collected from Foursquare and Gowalla. Experimental results show that LORE achieves significantly superior location recommendations compared to other state-of-the-art recommendation techniques.", "", "", "Location-based social networks have been gaining increasing popularity in recent years. To increase users’ engagement with location-based services, it is important to provide attractive features, one of which is geo-targeted ads and coupons. To make ads and coupon delivery more effective, it is essential to predict the location that is most likely to be visited by a user at the next step. However, an inherent challenge in location prediction is a huge prediction space, with millions of distinct check-in locations as prediction target. In this paper we exploit the check-in category information to model the underlying user movement pattern. 
We propose a framework which uses a mixed hidden Markov model to predict the category of user activity at the next step and then predict the most likely location given the estimated category distribution. The advantages of modeling the category level include a significantly reduced prediction space and a precise expression of the semantic meaning of user activities. Extensive experimental results show that, with the predicted category distribution, the number of location candidates for prediction is 5.45 times smaller, while the prediction accuracy is 13.21 higher.", "Spatial and temporal contextual information plays a key role for analyzing user behaviors, and is helpful for predicting where he or she will go next. With the growing ability of collecting information, more and more temporal and spatial contextual information is collected in systems, and the location prediction problem becomes crucial and feasible. Some works have been proposed to address this problem, but they all have their limitations. Factorizing Personalized Markov Chain (FPMC) is constructed based on a strong independence assumption among different factors, which limits its performance. Tensor Factorization (TF) faces the cold start problem in predicting future actions. Recurrent Neural Networks (RNN) model shows promising performance comparing with PFMC and TF, but all these methods have problem in modeling continuous time interval and geographical distance. In this paper, we extend RNN and propose a novel method called Spatial Temporal Recurrent Neural Networks (ST-RNN). ST-RNN can model local temporal and spatial contexts in each layer with time-specific transition matrices for different time intervals and distance-specific transition matrices for different geographical distances. 
Experimental results show that the proposed ST-RNN model yields significant improvements over the competitive compared methods on two typical datasets, i.e., Global Terrorism Database (GTD) and Gowalla dataset.", "Personalized point-of-interest (POI) recommendation is a significant task in location-based social networks (LBSNs) as it can help provide better user experience as well as enable third-party services, e.g., launching advertisements. To provide a good recommendation, various research has been conducted in the literature. However, pervious efforts mainly consider the \"check-ins\" in a whole and omit their temporal relation. They can only recommend POI globally and cannot know where a user would like to go tomorrow or in the next few days. In this paper, we consider the task of successive personalized POI recommendation in LBSNs, which is a much harder task than standard personalized POI recommendation or prediction. To solve this task, we observe two prominent properties in the check-in sequence: personalized Markov chain and region localization. Hence, we propose a novel matrix factorization method, namely FPMC-LR, to embed the personalized Markov chains and the localized regions. Our proposed FPMC-LR not only exploits the personalized Markov chain in the check-in sequence, but also takes into account users' movement constraint, i.e., moving around a localized region. More importantly, utilizing the information of localized regions, we not only reduce the computation cost largely, but also discard the noisy information to boost recommendation. Results on two real-world LBSNs datasets demonstrate the merits of our proposed FPMC-LR.", "The rapidly growing of Location-based Social Networks (LBSNs) provides a vast amount of check-in data, which enables many services, e.g., point-of-interest (POI) recommendation. In this paper, we study the next new POI recommendation problem in which new POIs with respect to users' current location are to be recommended. 
The challenge lies in the difficulty in precisely learning users' sequential information and personalizing the recommendation model. To this end, we resort to the Metric Embedding method for the recommendation, which avoids drawbacks of the Matrix Factorization technique. We propose a personalized ranking metric embedding method (PRME) to model personalized check-in sequences. We further develop a PRME-G model, which integrates sequential information, individual preference, and geographical influence, to improve the recommendation performance. Experiments on two real-world LBSN datasets demonstrate that our new algorithm outperforms the state-of-the-art next POI recommendation methods." ] }
1606.05859
2462181211
Point-of-interest (POI) recommendation is an important application in location-based social networks (LBSNs), which learns the user preference and mobility pattern from check-in sequences to recommend POIs. However, previous POI recommendation systems model check-in sequences based on either tensor factorization or Markov chain models, which cannot capture contextual check-in information in sequences. The contextual check-in information implies the complementary functions among POIs that compose an individual's daily check-in sequence. In this paper, we exploit the embedding learning technique to capture the contextual check-in information and further propose the SEquential Embedding Rank (SEER) model for POI recommendation. In particular, the SEER model learns user preferences via a pairwise ranking model under the sequential constraint modeled by the POI embedding learning method. Furthermore, we incorporate two important factors, i.e., temporal influence and geographical influence, into the SEER model to enhance the POI recommendation system. Due to the temporal variance of sequences on different days, we propose a temporal POI embedding model and incorporate the temporal POI representations into a temporal preference ranking model to establish the Temporal SEER (T-SEER) model. In addition, we incorporate the geographical influence into the T-SEER model and develop the Geo-Temporal SEER (GT-SEER) model.
Temporal influence has been mined for POI recommendation in prior work @cite_31 @cite_13 @cite_9 @cite_18 . Temporal characteristics can be summarized as periodicity, non-uniformness, and consecutiveness. Periodicity, first identified in @cite_13 , describes the periodic pattern of user check-in activities; for instance, people tend to stay in their offices and surrounding places on weekdays but go to shopping malls on weekends. Non-uniformness, first proposed in @cite_9 , shows that a user's check-in preferences vary at different times; for example, weekdays and weekends imply different check-in preferences, "work" versus "entertainment". In addition, consecutiveness is exploited in @cite_31 @cite_9 to capture the correlations between consecutive check-ins and improve performance. In our model, consecutiveness is captured by the sequential modeling. Moreover, we propose the temporal POI embedding model to capture the periodicity and non-uniformness between weekdays and weekends.
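The weekday/weekend distinction that drives the temporal POI embedding can be sketched as a simple bucketing step. This is a minimal illustration only; the helper name, slot granularity, and example POIs are assumptions, not taken from the paper.

```python
from datetime import datetime

def split_by_day_type(checkins):
    """Bucket a user's check-in sequence into weekday and weekend
    sub-sequences; training separate POI embeddings per slot then
    captures periodicity and non-uniformness."""
    buckets = {"weekday": [], "weekend": []}
    for poi_id, when in checkins:
        slot = "weekend" if when.weekday() >= 5 else "weekday"
        buckets[slot].append(poi_id)
    return buckets

seq = [("office", datetime(2016, 6, 17)),  # a Friday
       ("mall", datetime(2016, 6, 18))]    # a Saturday
print(split_by_day_type(seq))
```

Each bucket's sequence would then be fed to its own embedding model, so that the same POI can receive different representations on weekdays and weekends.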
{ "cite_N": [ "@cite_9", "@cite_31", "@cite_13", "@cite_18" ], "mid": [ "1984189333", "1546409232", "2110953678", "2073013176" ], "abstract": [ "Location-based social networks (LBSNs) have attracted an inordinate number of users and greatly enriched the urban experience in recent years. The availability of spatial, temporal and social information in online LBSNs offers an unprecedented opportunity to study various aspects of human behavior, and enable a variety of location-based services such as location recommendation. Previous work studied spatial and social influences on location recommendation in LBSNs. Due to the strong correlations between a user's check-in time and the corresponding check-in location, recommender systems designed for location recommendation inevitably need to consider temporal effects. In this paper, we introduce a novel location recommendation framework, based on the temporal properties of user movement observed from a real-world LBSN dataset. The experimental results exhibit the significance of temporal patterns in explaining user behavior, and demonstrate their power to improve location recommendation performance.", "Personalized point-of-interest (POI) recommendation is a significant task in location-based social networks (LBSNs) as it can help provide better user experience as well as enable third-party services, e.g., launching advertisements. To provide a good recommendation, various research has been conducted in the literature. However, previous efforts mainly consider the \"check-ins\" in a whole and omit their temporal relation. They can only recommend POI globally and cannot know where a user would like to go tomorrow or in the next few days. In this paper, we consider the task of successive personalized POI recommendation in LBSNs, which is a much harder task than standard personalized POI recommendation or prediction.
To solve this task, we observe two prominent properties in the check-in sequence: personalized Markov chain and region localization. Hence, we propose a novel matrix factorization method, namely FPMC-LR, to embed the personalized Markov chains and the localized regions. Our proposed FPMC-LR not only exploits the personalized Markov chain in the check-in sequence, but also takes into account users' movement constraint, i.e., moving around a localized region. More importantly, utilizing the information of localized regions, we not only reduce the computation cost largely, but also discard the noisy information to boost recommendation. Results on two real-world LBSNs datasets demonstrate the merits of our proposed FPMC-LR.", "Even though human movement and mobility patterns have a high degree of freedom and variation, they also exhibit structural patterns due to geographic and social constraints. Using cell phone location data, as well as data from two online location-based social networks, we aim to understand what basic laws govern human motion and dynamics. We find that humans experience a combination of periodic movement that is geographically limited and seemingly random jumps correlated with their social networks. Short-ranged travel is periodic both spatially and temporally and not affected by the social network structure, while long-distance travel is more influenced by social network ties. We show that social relationships can explain about 10% to 30% of all human movement, while periodic behavior explains 50% to 70%. Based on our findings, we develop a model of human mobility that combines periodic short range movements with travel due to the social network structure.
We show that our model reliably predicts the locations and dynamics of future human movement and gives an order of magnitude better performance than present models of human mobility.", "The availability of user check-in data in large volume from the rapid growing location based social networks (LBSNs) enables many important location-aware services to users. Point-of-interest (POI) recommendation is one of such services, which is to recommend places where users have not visited before. Several techniques have been recently proposed for the recommendation service. However, no existing work has considered the temporal information for POI recommendations in LBSNs. We believe that time plays an important role in POI recommendations because most users tend to visit different places at different time in a day, visiting a restaurant at noon and visiting a bar at night. In this paper, we define a new problem, namely, the time-aware POI recommendation, to recommend POIs for a given user at a specified time in a day. To solve the problem, we develop a collaborative recommendation model that is able to incorporate temporal information. Moreover, based on the observation that users tend to visit nearby POIs, we further enhance the recommendation model by considering geographical information. Our experimental results on two real-world datasets show that the proposed approach outperforms the state-of-the-art POI recommendation methods substantially." ] }
1606.05859
2462181211
Point-of-interest (POI) recommendation is an important application in location-based social networks (LBSNs), which learns the user preference and mobility pattern from check-in sequences to recommend POIs. However, previous POI recommendation systems model check-in sequences based on either tensor factorization or Markov chain models, which cannot capture contextual check-in information in sequences. The contextual check-in information implies the complementary functions among POIs that compose an individual's daily check-in sequence. In this paper, we exploit the embedding learning technique to capture the contextual check-in information and further propose the SEquential Embedding Rank (SEER) model for POI recommendation. In particular, the SEER model learns user preferences via a pairwise ranking model under the sequential constraint modeled by the POI embedding learning method. Furthermore, we incorporate two important factors, i.e., temporal influence and geographical influence, into the SEER model to enhance the POI recommendation system. Due to the temporal variance of sequences on different days, we propose a temporal POI embedding model and incorporate the temporal POI representations into a temporal preference ranking model to establish the Temporal SEER (T-SEER) model. In addition, we incorporate the geographical influence into the T-SEER model and develop the Geo-Temporal SEER (GT-SEER) model.
Geographical influence plays an important role in POI recommendation, since check-in activity in LBSNs is constrained by geographical conditions. To capture the geographical influence, @cite_36 @cite_13 @cite_17 propose Gaussian distribution based models, @cite_3 @cite_18 employ power-law distribution models, and @cite_24 @cite_14 @cite_26 leverage kernel density estimation. Moreover, @cite_21 @cite_39 incorporate the geographical influence into a weighted regularized MF model @cite_0 @cite_15 and learn it jointly with the user preference. Similar to @cite_21 @cite_39 , we treat check-ins as implicit feedback; however, we learn from them through a Bayesian pairwise ranking method @cite_37 . Furthermore, we propose a geographical pairwise ranking model, which captures the geographical influence by discriminating among the unchecked POIs according to their geographical information.
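The Bayesian pairwise ranking idea behind the geographical model can be illustrated with a toy BPR-style update that ranks a visited POI above an unchecked nearby POI, and the nearby unchecked POI above a distant one. The dimensions, POI labels, and hyper-parameters below are illustrative only, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8
U = rng.normal(scale=0.1, size=D)                       # user latent vector
P = {p: rng.normal(scale=0.1, size=D)                   # POI latent vectors
     for p in ("visited", "near", "far")}

def bpr_step(u, pos, neg, lr=0.05, reg=0.01):
    """One SGD ascent step on the BPR objective ln sigmoid(s_pos - s_neg)."""
    x = u @ P[pos] - u @ P[neg]
    g = 1.0 / (1.0 + np.exp(x))                         # sigmoid(-x)
    u += lr * (g * (P[pos] - P[neg]) - reg * u)
    P[pos] += lr * (g * u - reg * P[pos])
    P[neg] += lr * (-g * u - reg * P[neg])

# Geographical pairwise ranking: discriminate unchecked POIs by distance.
for _ in range(300):
    bpr_step(U, "visited", "near")   # visited > unchecked nearby POI
    bpr_step(U, "near", "far")       # unchecked nearby POI > distant POI

print(U @ P["visited"], U @ P["near"], U @ P["far"])
```

After training, the three scores separate in the intended order, which is exactly the "discriminate unchecked POIs by geography" effect the paragraph describes.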
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_26", "@cite_37", "@cite_36", "@cite_21", "@cite_3", "@cite_39", "@cite_24", "@cite_0", "@cite_15", "@cite_13", "@cite_17" ], "mid": [ "2073013176", "2084677224", "2077480106", "2140310134", "", "2017921654", "2087692915", "1981886741", "1986050033", "2101409192", "", "2110953678", "2295065562" ], "abstract": [ "The availability of user check-in data in large volume from the rapid growing location based social networks (LBSNs) enables many important location-aware services to users. Point-of-interest (POI) recommendation is one of such services, which is to recommend places where users have not visited before. Several techniques have been recently proposed for the recommendation service. However, no existing work has considered the temporal information for POI recommendations in LBSNs. We believe that time plays an important role in POI recommendations because most users tend to visit different places at different time in a day, visiting a restaurant at noon and visiting a bar at night. In this paper, we define a new problem, namely, the time-aware POI recommendation, to recommend POIs for a given user at a specified time in a day. To solve the problem, we develop a collaborative recommendation model that is able to incorporate temporal information. Moreover, based on the observation that users tend to visit nearby POIs, we further enhance the recommendation model by considering geographical information. Our experimental results on two real-world datasets show that the proposed approach outperforms the state-of-the-art POI recommendation methods substantially.", "Recommending users with their preferred points-of-interest (POIs), e.g., museums and restaurants, has become an important feature for location-based social networks (LBSNs), which benefits people to explore new places and businesses to discover potential customers. 
However, because users only check in a few POIs in an LBSN, the user-POI check-in interaction is highly sparse, which renders a big challenge for POI recommendations. To tackle this challenge, in this study we propose a new POI recommendation approach called GeoSoCa through exploiting geographical correlations, social correlations and categorical correlations among users and POIs. The geographical, social and categorical correlations can be learned from the historical check-in data of users on POIs and utilized to predict the relevance score of a user to an unvisited POI so as to make recommendations for users. First, in GeoSoCa we propose a kernel estimation method with an adaptive bandwidth to determine a personalized check-in distribution of POIs for each user that naturally models the geographical correlations between POIs. Then, GeoSoCa aggregates the check-in frequency or rating of a user's friends on a POI and models the social check-in frequency or rating as a power-law distribution to employ the social correlations between users. Further, GeoSoCa applies the bias of a user on a POI category to weigh the popularity of a POI in the corresponding category and models the weighed popularity as a power-law distribution to leverage the categorical correlations between POIs. Finally, we conduct a comprehensive performance evaluation for GeoSoCa using two large-scale real-world check-in data sets collected from Foursquare and Yelp. Experimental results show that GeoSoCa achieves significantly superior recommendation quality compared to other state-of-the-art POI recommendation techniques.", "As location-based social networks (LBSNs) rapidly grow, it is a timely topic to study how to recommend users with interesting locations, known as points-of-interest (POIs). 
Most existing POI recommendation techniques only employ the check-in data of users in LBSNs to learn their preferences on POIs by assuming a user's check-in frequency to a POI explicitly reflects the level of her preference on the POI. However, in reality users usually visit POIs only once, so the users' check-ins may not be sufficient to derive their preferences using their check-in frequencies only. Actually, the preferences of users are exactly implied in their opinions in text-based tips commenting on POIs. In this paper, we propose an opinion-based POI recommendation framework called ORec to take full advantage of the user opinions on POIs expressed as tips. In ORec, there are two main challenges: (i) detecting the polarities of tips (positive, neutral or negative), and (ii) integrating them with check-in data including social links between users and geographical information of POIs. To address these two challenges, (1) we develop a supervised aspect-dependent approach to detect the polarity of a tip, and (2) we devise a method to fuse tip polarities with social links and geographical information into a unified POI recommendation framework. Finally, we conduct a comprehensive performance evaluation for ORec using two large-scale real data sets collected from Foursquare and Yelp. Experimental results show that ORec achieves significantly superior polarity detection and POI recommendation accuracy compared to other state-of-the-art polarity detection and POI recommendation techniques.", "Item recommendation is the task of predicting a personalized ranking on a set of items (e.g. websites, movies, products). In this paper, we investigate the most common scenario with implicit feedback (e.g. clicks, purchases). There are many methods for item recommendation from implicit feedback like matrix factorization (MF) or adaptive k-nearest-neighbor (kNN). 
Even though these methods are designed for the item prediction task of personalized ranking, none of them is directly optimized for ranking. In this paper we present a generic optimization criterion BPR-Opt for personalized ranking that is the maximum posterior estimator derived from a Bayesian analysis of the problem. We also provide a generic learning algorithm for optimizing models with respect to BPR-Opt. The learning method is based on stochastic gradient descent with bootstrap sampling. We show how to apply our method to two state-of-the-art recommender models: matrix factorization and adaptive kNN. Our experiments indicate that for the task of personalized ranking our optimization method outperforms the standard learning techniques for MF and kNN. The results show the importance of optimizing models for the right criterion.", "", "Point-of-Interest (POI) recommendation has become an important means to help people discover attractive locations. However, extreme sparsity of user-POI matrices creates a severe challenge. To cope with this challenge, viewing mobility records on location-based social networks (LBSNs) as implicit feedback for POI recommendation, we first propose to exploit weighted matrix factorization for this task since it usually serves collaborative filtering with implicit feedback better. Besides, researchers have recently discovered a spatial clustering phenomenon in human mobility behavior on the LBSNs, i.e., individual visiting locations tend to cluster together, and also demonstrated its effectiveness in POI recommendation, thus we incorporate it into the factorization model. Particularly, we augment users' and POIs' latent factors in the factorization model with activity area vectors of users and influence area vectors of POIs, respectively. 
Based on such an augmented model, we not only capture the spatial clustering phenomenon in terms of two-dimensional kernel density estimation, but we also explain why the introduction of such a phenomenon into matrix factorization helps to deal with the challenge from matrix sparsity. We then evaluate the proposed algorithm on a large-scale LBSN dataset. The results indicate that weighted matrix factorization is superior to other forms of factorization models and that incorporating the spatial clustering phenomenon into matrix factorization improves recommendation performance.", "In this paper, we aim to provide a point-of-interests (POI) recommendation service for the rapid growing location-based social networks (LBSNs), e.g., Foursquare, Whrrl, etc. Our idea is to explore user preference, social influence and geographical influence for POI recommendations. In addition to deriving user preference based on user-based collaborative filtering and exploring social influence from friends, we put a special emphasis on geographical influence due to the spatial clustering phenomenon exhibited in user check-in activities of LBSNs. We argue that the geographical influence among POIs plays an important role in user check-in behaviors and model it by power law distribution. Accordingly, we develop a collaborative recommendation algorithm based on geographical influence based on naive Bayesian. Furthermore, we propose a unified POI recommendation framework, which fuses user preference to a POI with social influence and geographical influence. Finally, we conduct a comprehensive performance evaluation over two large-scale datasets collected from Foursquare and Whrrl. 
Experimental results with these real datasets show that the unified collaborative recommendation approach significantly outperforms a wide spectrum of alternative recommendation approaches.", "Geographical characteristics derived from the historical check-in data have been reported effective in improving location recommendation accuracy. However, previous studies mainly exploit geographical characteristics from a user's perspective, via modeling the geographical distribution of each individual user's check-ins. In this paper, we are interested in exploiting geographical characteristics from a location perspective, by modeling the geographical neighborhood of a location. The neighborhood is modeled at two levels: the instance-level neighborhood defined by a few nearest neighbors of the location, and the region-level neighborhood for the geographical region where the location exists. We propose a novel recommendation approach, namely Instance-Region Neighborhood Matrix Factorization (IRenMF), which exploits two levels of geographical neighborhood characteristics: a) instance-level characteristics, i.e., nearest neighboring locations tend to share more similar user preferences; and b) region-level characteristics, i.e., locations in the same geographical region may share similar user preferences. In IRenMF, the two levels of geographical characteristics are naturally incorporated into the learning of latent features of users and locations, so that IRenMF predicts users' preferences on locations more accurately. Extensive experiments on the real data collected from Gowalla, a popular LBSN, demonstrate the effectiveness and advantages of our approach.", "With the rapidly growing location-based social networks (LBSNs), personalized geo-social recommendation becomes an important feature for LBSNs. Personalized geo-social recommendation not only helps users explore new places but also makes LBSNs more prevalent to users. 
In LBSNs, aside from user preference and social influence, geographical influence has also been intensively exploited in the process of location recommendation based on the fact that geographical proximity significantly affects users' check-in behaviors. Although geographical influence on users should be personalized, current studies only model the geographical influence on all users' check-in behaviors in a universal way. In this paper, we propose a new framework called iGSLR to exploit personalized social and geographical influence on location recommendation. iGSLR uses a kernel density estimation approach to personalize the geographical influence on users' check-in behaviors as individual distributions rather than a universal distribution for all users. Furthermore, user preference, social influence, and personalized geographical influence are integrated into a unified geo-social recommendation framework. We conduct a comprehensive performance evaluation for iGSLR using two large-scale real data sets collected from Foursquare and Gowalla which are two of the most popular LBSNs. Experimental results show that iGSLR provides significantly superior location recommendation compared to other state-of-the-art geo-social recommendation techniques.", "A common task of recommender systems is to improve customer experience through personalized recommendations based on prior implicit feedback. These systems passively track different sorts of user behavior, such as purchase history, watching habits and browsing activity, in order to model user preferences. Unlike the much more extensively researched explicit feedback, we do not have any direct input from the users regarding their preferences. In particular, we lack substantial evidence on which products consumer dislike. In this work we identify unique properties of implicit feedback datasets. We propose treating the data as indication of positive and negative preference associated with vastly varying confidence levels. 
This leads to a factor model which is especially tailored for implicit feedback recommenders. We also suggest a scalable optimization procedure, which scales linearly with the data size. The algorithm is used successfully within a recommender system for television shows. It compares favorably with well tuned implementations of other known methods. In addition, we offer a novel way to give explanations to recommendations given by this factor model.", "", "Even though human movement and mobility patterns have a high degree of freedom and variation, they also exhibit structural patterns due to geographic and social constraints. Using cell phone location data, as well as data from two online location-based social networks, we aim to understand what basic laws govern human motion and dynamics. We find that humans experience a combination of periodic movement that is geographically limited and seemingly random jumps correlated with their social networks. Short-ranged travel is periodic both spatially and temporally and not affected by the social network structure, while long-distance travel is more influenced by social network ties. We show that social relationships can explain about 10% to 30% of all human movement, while periodic behavior explains 50% to 70%. Based on our findings, we develop a model of human mobility that combines periodic short range movements with travel due to the social network structure. We show that our model reliably predicts the locations and dynamics of future human movement and gives an order of magnitude better performance than present models of human mobility.", "Point-of-Interest (POI) recommendation is a significant service for location-based social networks (LBSNs). It recommends new places such as clubs, restaurants, and coffee bars to users. Whether recommended locations meet users' interests depends on three factors: user preference, social influence, and geographical influence.
Hence extracting the information from users' check-in records is the key to POI recommendation in LBSNs. Capturing user preference and social influence is relatively easy since it is analogical to the methods in a movie recommender system. However, it is a new topic to capture geographical influence. Previous studies indicate that check-in locations disperse around several centers and we are able to employ Gaussian distribution based models to approximate users' check-in behaviors. Yet center-discovering methods are unsatisfactory. In this paper, we propose two models--Gaussian mixture model (GMM) and genetic algorithm based Gaussian mixture model (GA-GMM)--to capture geographical influence. More specifically, we exploit GMM to automatically learn users' activity centers; further we utilize GA-GMM to improve GMM by eliminating outliers. Experimental results on a real-world LBSN dataset show that GMM beats several popular geographical capturing models in terms of POI recommendation, while GA-GMM excludes the effect of outliers and enhances GMM." ] }
1606.05859
2462181211
Point-of-interest (POI) recommendation is an important application in location-based social networks (LBSNs), which learns the user preference and mobility pattern from check-in sequences to recommend POIs. However, previous POI recommendation systems model check-in sequences based on either tensor factorization or Markov chain models, which cannot capture contextual check-in information in sequences. The contextual check-in information implies the complementary functions among POIs that compose an individual's daily check-in sequence. In this paper, we exploit the embedding learning technique to capture the contextual check-in information and further propose the SEquential Embedding Rank (SEER) model for POI recommendation. In particular, the SEER model learns user preferences via a pairwise ranking model under the sequential constraint modeled by the POI embedding learning method. Furthermore, we incorporate two important factors, i.e., temporal influence and geographical influence, into the SEER model to enhance the POI recommendation system. Due to the temporal variance of sequences on different days, we propose a temporal POI embedding model and incorporate the temporal POI representations into a temporal preference ranking model to establish the Temporal SEER (T-SEER) model. In addition, we incorporate the geographical influence into the T-SEER model and develop the Geo-Temporal SEER (GT-SEER) model.
Word2vec @cite_20 is an effective method to learn embedding representations from word sequences. It models words' contextual correlations within sentences, and shows better performance than methods built on word transitivity in sentences or word similarity. It is widely used in natural language processing @cite_16 @cite_40 . Later, paragraph2vector @cite_22 and other variants @cite_32 @cite_41 were proposed to enhance the framework for specific purposes. Owing to its efficacy in capturing the correlations of items, the framework has been employed for network embedding @cite_2 , user modeling @cite_28 , item modeling @cite_45 , and item recommendation @cite_42 @cite_5 . These successes motivate us to exploit the framework to model POI representations in check-in sequences. Our POI embedding model is similar to the prod2vec model in @cite_42 and the KNI model in @cite_5 ; however, we incorporate temporal variance into the framework to develop the temporal POI embedding, a variant tailored to the POI recommendation task.
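The word2vec-style treatment of check-in sequences reduces to extracting (target, context) POI pairs from each daily sequence, which then feed a skip-gram objective. A minimal sketch of the pair extraction (the function name and POI labels are illustrative assumptions):

```python
def context_pairs(sequence, window=2):
    """Generate (target, context) training pairs from one check-in
    sequence, skip-gram style: POIs checked in close together are
    treated as each other's context."""
    pairs = []
    for i, target in enumerate(sequence):
        lo, hi = max(0, i - window), min(len(sequence), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((target, sequence[j]))
    return pairs

day_seq = ["home", "cafe", "office", "restaurant"]
print(context_pairs(day_seq, window=1))
```

In a temporal variant, weekday and weekend sequences would be processed separately so that the resulting pairs train time-slot-specific POI representations.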
{ "cite_N": [ "@cite_22", "@cite_41", "@cite_28", "@cite_42", "@cite_32", "@cite_40", "@cite_45", "@cite_2", "@cite_5", "@cite_16", "@cite_20" ], "mid": [ "2131744502", "2238728730", "2217066517", "1963836406", "", "2141599568", "2251292973", "2154851992", "2222911438", "2126725946", "2153579005" ], "abstract": [ "Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite their popularity, bag-of-words features have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, \"powerful,\" \"strong\" and \"Paris\" are equally distant. In this paper, we propose Paragraph Vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. Our algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that Paragraph Vectors outperforms bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks.", "Most word embedding models typically represent each word using a single vector, which makes these models indiscriminative for ubiquitous homonymy and polysemy. In order to enhance discriminativeness, we employ latent topic models to assign topics for each word in the text corpus, and learn topical word embeddings (TWE) based on both words and their topics. In this way, contextual word embeddings can be flexibly obtained to measure contextual word similarity. We can also build document representations, which are more expressive than some widely-used document models such as latent topic models. 
In the experiments, we evaluate the TWE models on two tasks, contextual word similarity and text classification. The experimental results show that our models outperform typical word embedding models including the multi-prototype version on contextual word similarity, and also exceed latent topic models and other representative document models on text classification. The source code of this paper can be obtained from https://github.com/largelymfs/topical_word_embeddings.", "We present a neural network method for review rating prediction in this paper. Existing neural network methods for sentiment prediction typically only capture the semantics of texts, but ignore the user who expresses the sentiment. This is not desirable for review rating prediction as each user has an influence on how to interpret the textual content of a review. For example, the same word (e.g. \"good\") might indicate different sentiment strengths when written by different users. We address this issue by developing a new neural network that takes user information into account. The intuition is to factor in user-specific modification to the meaning of a certain word. Specifically, we extend the lexical semantic composition models and introduce a user-word composition vector model (UWCVM), which effectively captures how user acts as a function affecting the continuous word representation. We integrate UWCVM into a supervised learning framework for review rating prediction, and conduct experiments on two benchmark review datasets. Experimental results demonstrate the effectiveness of our method. It shows superior performances over several strong baseline methods.", "In recent years online advertising has become increasingly ubiquitous and effective. Advertisements shown to visitors fund sites and apps that publish digital content, manage social networks, and operate e-mail services.
Given such large variety of internet resources, determining an appropriate type of advertising for a given platform has become critical to financial success. Native advertisements, namely ads that are similar in look and feel to content, have had great success in news and social feeds. However, to date there has not been a winning formula for ads in e-mail clients. In this paper we describe a system that leverages user purchase history determined from e-mail receipts to deliver highly personalized product ads to Yahoo Mail users. We propose to use a novel neural language-based algorithm specifically tailored for delivering effective product recommendations, which was evaluated against baselines that included showing popular products and products predicted based on co-occurrence. We conducted rigorous offline testing using a large-scale product purchase data set, covering purchases of more than 29 million users from 172 e-commerce websites. Ads in the form of product recommendations were successfully tested on online traffic, where we observed a steady 9% lift in click-through rates over other ad formats in mail, as well as comparable lift in conversion rates. Following successful tests, the system was launched into production during the holiday season of 2014.", "", "Continuous space language models have recently demonstrated outstanding results across a variety of tasks. In this paper, we examine the vector-space word representations that are implicitly learned by the input-layer weights. We find that these representations are surprisingly good at capturing syntactic and semantic regularities in language, and that each relationship is characterized by a relation-specific vector offset. This allows vector-oriented reasoning based on the offsets between words.
For example, the male/female relationship is automatically learned, and with the induced vector representations, “King − Man + Woman” results in a vector very close to “Queen.” We demonstrate that the word vectors capture syntactic regularities by means of syntactic analogy questions (provided with this paper), and are able to correctly answer almost 40% of the questions. We demonstrate that the word vectors capture semantic regularities by using the vector offset method to answer SemEval-2012 Task 2 questions. Remarkably, this method outperforms the best previous systems.", "Neural network methods have achieved promising results for sentiment classification of text. However, these models only use semantics of texts, while ignoring users who express the sentiment and products which are evaluated, both of which have great influences on interpreting the sentiment of text. In this paper, we address this issue by incorporating user- and product-level information into a neural network approach for document level sentiment classification. Users and products are modeled using vector space models, the representations of which capture important global clues such as individual preferences of users or overall qualities of products. Such global evidence in turn facilitates embedding learning procedure at document level, yielding better text representations. By combining evidence at user-, product- and document-level in a unified neural framework, the proposed model achieves state-of-the-art performances on IMDB and Yelp datasets.", "We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs.
DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10% higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60% less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection.", "Social network platforms can use the data produced by their users to serve them better. One of the services these platforms provide is recommendation service. Recommendation systems can predict the future preferences of users using their past preferences. In the recommendation systems literature there are various techniques, such as neighborhood based methods, machine-learning based methods and matrix-factorization based methods. In this work, a set of well known methods from natural language processing domain, namely Word2Vec, is applied to recommendation systems domain. Unlike previous works that use Word2Vec for recommendation, this work uses non-textual features, the check-ins, and it recommends venues to visit check-in to the target users. For the experiments, a Foursquare check-in dataset is used.
The results show that use of continuous vector space representations of items modeled by techniques of Word2Vec is promising for making recommendations.", "Dictionaries and phrase tables are the basis of modern statistical machine translation systems. This paper develops a method that can automate the process of generating and extending dictionaries and phrase tables. Our method can translate missing word and phrase entries by learning language structures based on large monolingual data and mapping between languages from small bilingual data. It uses distributed representation of words and learns a linear mapping between vector spaces of languages. Despite its simplicity, our method is surprisingly effective: we can achieve almost 90% precision@5 for translation of words between English and Spanish. This method makes little assumption about the languages, so it can be used to extend and refine dictionaries and translation tables for any language pairs.", "The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible." ] }
1606.05893
2468263120
We propose new privacy attacks to infer attributes (e.g., locations, occupations, and interests) of online social network users. Our attacks leverage seemingly innocent user information that is publicly available in online social networks to infer missing attributes of targeted users. Given the increasing availability of (seemingly innocent) user information online, our results have serious implications for Internet privacy -- private attributes can be inferred from users' publicly available data unless we take steps to protect users from such inference attacks. To infer attributes of a targeted user, existing inference attacks leverage either the user's publicly available social friends or the user's behavioral records (e.g., the webpages that the user has liked on Facebook, the apps that the user has reviewed on Google Play), but not both. As we will show, such inference attacks achieve limited success rates. However, the problem becomes qualitatively different if we consider both social friends and behavioral records. To address this challenge, we develop a novel model to integrate social friends and behavioral records and design new attacks based on our model. We theoretically and experimentally demonstrate the effectiveness of our attacks. For instance, we observe that, in a real-world large-scale dataset with 1.1 million users, our attack can correctly infer the cities a user lived in for 57% of the users; via confidence estimation, we are able to increase the attack success rate to over 90% if the attacker selectively attacks a half of the users. Moreover, we show that our attack can correctly infer attributes for significantly more users than previous attacks.
Friend-based attribute inference. @cite_13 transformed attribute inference into Bayesian inference over a Bayesian network constructed from the social links between users, and evaluated the method on a LiveJournal social network dataset with user attributes. However, it is well known in the machine learning community that exact Bayesian inference does not scale. @cite_43 modified the Naive Bayes classifier to incorporate users' social links and other attributes when inferring a target attribute. For instance, to infer a user's major, their method uses the user's other attributes (e.g., employer and cities lived in), the user's social friends, and those friends' attributes. However, this approach is not applicable to users who share no attributes at all.
{ "cite_N": [ "@cite_43", "@cite_13" ], "mid": [ "2135866194", "2612813645" ], "abstract": [ "On-line social networks, such as Facebook, are increasingly utilized by many users. These networks allow people to publish details about themselves and connect to their friends. Some of the information revealed inside these networks is private and it is possible that corporations could use learning algorithms on the released data to predict undisclosed private information. In this paper, we explore how to launch inference attacks using released social networking data to predict undisclosed private information about individuals. We then explore the effectiveness of possible sanitization techniques that can be used to combat such inference attacks under different scenarios.", "Since privacy information can be inferred via social relations, the privacy confidentiality problem becomes increasingly challenging as online social network services are more popular. Using a Bayesian network approach to model the causal relations among people in social networks, we study the impact of prior probability, influence strength, and society openness to the inference accuracy on a real online social network. Our experimental results reveal that. personal attributes can be inferred with high accuracy especially when people are connected with strong relationships. Further, even in a society where most people hide their attributes, it is still possible to infer privacy information." ] }
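To make the friend-based idea concrete, here is a minimal, hedged sketch of Bayesian attribute inference from friends' observed attributes. It assumes a simple homophily likelihood (a friend shares the target's attribute value with probability `homophily`); this is an illustrative simplification, not the actual Bayesian-network construction of the cited work, and the city names are hypothetical.

```python
import math

def infer_attribute(friend_attrs, prior, homophily=0.8):
    """Posterior over a user's hidden attribute given friends' attributes:
    P(a | friends) ∝ P(a) * Π_f P(attr(f) | a).
    Homophily model (an assumption of this sketch): a friend shares the
    target's attribute value with probability `homophily`, and otherwise
    takes one of the remaining values uniformly at random."""
    values = list(prior)
    log_scores = {}
    for a in values:
        logp = math.log(prior[a])
        for fa in friend_attrs:
            if fa == a:
                logp += math.log(homophily)
            else:
                logp += math.log((1.0 - homophily) / (len(values) - 1))
        log_scores[a] = logp
    # Normalize in log space for numerical stability.
    m = max(log_scores.values())
    unnorm = {a: math.exp(s - m) for a, s in log_scores.items()}
    z = sum(unnorm.values())
    return {a: u / z for a, u in unnorm.items()}

# Four of five friends live in "NYC": the posterior concentrates there.
post = infer_attribute(["NYC", "NYC", "NYC", "Boston", "NYC"],
                       prior={"NYC": 0.3, "Boston": 0.3, "SF": 0.4})
```

The same structure also shows the scalability problem noted above: in a full Bayesian network the attributes of all users are inferred jointly rather than one user at a time.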
Zheleva and Getoor @cite_41 studied several approaches that use both social links and the groups users joined to perform attribute inference. They found that, when only social links are available, the approach LINK achieves the best performance. LINK represents each user as a binary feature vector in which a feature is 1 if the user is a friend of the person corresponding to that feature, and then learns classifiers for attribute inference over these feature vectors. @cite_16 transformed attribute inference into a link-prediction problem, and showed that their approaches CN-SAN, AA-SAN, and RWwR-SAN outperform LINK.
{ "cite_N": [ "@cite_41", "@cite_16" ], "mid": [ "2103133870", "2107933610" ], "abstract": [ "In order to address privacy concerns, many social media websites allow users to hide their personal profiles from the public. In this work, we show how an adversary can exploit an online social network with a mixture of public and private user profiles to predict the private attributes of users. We map this problem to a relational classification problem and we propose practical models that use friendship and group membership information (which is often not hidden) to infer sensitive attributes. The key novel idea is that in addition to friendship links, groups can be carriers of significant information. We show that on several well-known social media sites, we can easily and accurately recover the information of private-profile users. To the best of our knowledge, this is the first work that uses link-based and group-based classification to study privacy implications in social networks with mixed public and private user profiles.", "The effects of social influence and homophily suggest that both network structure and node-attribute information should inform the tasks of link prediction and node-attribute inference. Recently, [2010a, 2010b] proposed an attribute-augmented social network model, which we call Social-Attribute Network (SAN), to integrate network structure and node attributes to perform both link prediction and attribute inference. They focused on generalizing the random walk with a restart algorithm to the SAN framework and showed improved performance. In this article, we extend the SAN framework with several leading supervised and unsupervised link-prediction algorithms and demonstrate performance improvement for each algorithm on both link prediction and attribute inference. 
Moreover, we make the novel observation that attribute inference can help inform link prediction, that is, link-prediction accuracy is further improved by first inferring missing attributes. We comprehensively evaluate these algorithms and compare them with other existing algorithms using a novel, large-scale Google+ dataset, which we make publicly available (http://www.cs.berkeley.edu/~stevgong/gplus.html)." ] }
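The LINK representation is simple enough to sketch end to end. Below is a stdlib-only toy version: binary friend-indicator vectors fed to a tiny gradient-descent logistic regression (the cited work uses off-the-shelf classifiers; the network, the hub names, and the learner here are illustrative assumptions).

```python
import math

def link_features(users, friends_of):
    """LINK representation: each user becomes a binary vector over all
    users, with entry j = 1 iff the user is a friend of users[j]."""
    index = {u: j for j, u in enumerate(users)}
    def vec(u):
        x = [0.0] * len(users)
        for f in friends_of.get(u, ()):
            if f in index:
                x[index[f]] = 1.0
        return x
    return vec

def train_logreg(X, y, lr=0.5, epochs=200):
    """Minimal batch-gradient-descent logistic regression (stdlib only)."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * len(w), 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            err = 1.0 / (1.0 + math.exp(-z)) - yi
            for j, xj in enumerate(xi):
                gw[j] += err * xj
            gb += err
        w = [wj - lr * gj / len(X) for wj, gj in zip(w, gw)]
        b -= lr * gb / len(X)
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else 0

# Hypothetical toy network: users with attribute 1 befriend "hub1",
# users with attribute 0 befriend "hub0".
users = ["hub0", "hub1", "a", "b", "c", "d", "e"]
friends_of = {"a": ["hub1"], "b": ["hub0", "hub1"], "c": ["hub0"],
              "d": ["hub0"], "e": ["hub1"]}
vec = link_features(users, friends_of)
w, b = train_logreg([vec(u) for u in ["a", "b", "c", "d"]], [1, 1, 0, 0])
```

The unlabeled user "e", whose only friend is hub1, is then classified like the other hub1 followers: `predict(w, b, vec("e"))` yields 1.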
@cite_4 studied the inference of attributes such as gender, political views, and religious views. They used multi-label classification methods and leveraged features from users' friends and wall posts. Moreover, they proposed the concept of multi-party privacy to defend against attribute inference.
{ "cite_N": [ "@cite_4" ], "mid": [ "1602027763" ], "abstract": [ "As the popularity of social networks expands, the information users expose to the public has potentially dangerous implications for individual privacy. While social networks allow users to restrict access to their personal data, there is currently no mechanism to enforce privacy concerns over content uploaded by other users. As group photos and stories are shared by friends and family, personal privacy goes beyond the discretion of what a user uploads about himself and becomes an issue of what every network participant reveals. In this paper, we examine how the lack of joint privacy controls over content can inadvertently reveal sensitive information about a user including preferences, relationships, conversations, and photos. Specifically, we analyze Facebook to identify scenarios where conflicting privacy settings between friends will reveal information that at least one user intended remain private. By aggregating the information exposed in this manner, we demonstrate how a user's private attributes can be inferred from simply being listed as a friend or mentioned in a story. To mitigate this threat, we show how Facebook's privacy model can be adapted to enforce multi-party privacy. We present a proof of concept application built into Facebook that automatically ensures mutually acceptable privacy restrictions are enforced on group content." ] }
Behavior-based attribute inference. @cite_36 investigated the inference of gender from the rating scores users gave to movies. In particular, they constructed a feature vector for each user: the @math th entry of the feature vector is the rating score the user gave to the @math th movie if the user reviewed it, and 0 otherwise. They compared several classifiers, including Logistic Regression (LG) @cite_32 , SVM @cite_2 , and Naive Bayes @cite_10 , and found that LG outperforms the others. @cite_9 studied attribute inference in an active learning framework; specifically, they investigated which movies users should be asked to review in order to improve inference accuracy the most. However, this approach might not be applicable in real-world scenarios because users might not be interested in reviewing the selected movies.
{ "cite_N": [ "@cite_36", "@cite_9", "@cite_32", "@cite_2", "@cite_10" ], "mid": [ "2159196732", "2118675731", "1973948212", "", "1550206324" ], "abstract": [ "User demographics, such as age, gender and ethnicity, are routinely used for targeting content and advertising products to users. Similarly, recommender systems utilize user demographics for personalizing recommendations and overcoming the cold-start problem. Often, privacy-concerned users do not provide these details in their online profiles. In this work, we show that a recommender system can infer the gender of a user with high accuracy, based solely on the ratings provided by users (without additional metadata), and a relatively small number of users who share their demographics. Focusing on gender, we design techniques for effectively adding ratings to a user's profile for obfuscating the user's gender, while having an insignificant effect on the recommendations provided to that user.", "Recommender systems leverage user demographic information, such as age, gender, etc., to personalize recommendations and better place their targeted ads. Oftentimes, users do not volunteer this information due to privacy concerns, or due to a lack of initiative in filling out their online profiles. We illustrate a new threat in which a recommender learns private attributes of users who do not voluntarily disclose them. We design both passive and active attacks that solicit ratings for strategically selected items, and could thus be used by a recommender system to pursue this hidden agenda. Our methods are based on a novel usage of Bayesian matrix factorization in an active learning setting. Evaluations on multiple datasets illustrate that such attacks are indeed feasible and use significantly fewer rated items than static inference methods. 
Importantly, they succeed without sacrificing the quality of recommendations to users.", "Introduction to the Logistic Regression Model Multiple Logistic Regression Interpretation of the Fitted Logistic Regression Model Model-Building Strategies and Methods for Logistic Regression Assessing the Fit of the Model Application of Logistic Regression with Different Sampling Models Logistic Regression for Matched Case-Control Studies Special Topics References Index.", "", "Recent work in text classification has used two different first-order probabilistic models for classification, both of which make the naive Bayes assumption. Some use a multi-variate Bernoulli model, that is, a Bayesian Network with no dependencies between words and binary word features (e.g. Larkey and Croft 1996; Koller and Sahami 1997). Others use a multinomial model, that is, a uni-gram language model with integer word counts (e.g. Lewis and Gale 1994; Mitchell 1997). This paper aims to clarify the confusion by describing the differences and details of these two models, and by empirically comparing their classification performance on five text corpora. We find that the multi-variate Bernoulli performs well with small vocabulary sizes, but that the multinomial performs usually performs even better at larger vocabulary sizes--providing on average a 27 reduction in error over the multi-variate Bernoulli model at any vocabulary size." ] }
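The rating-based feature construction described above is a one-liner; the sketch below shows it with a hypothetical movie catalog and sparse ratings (the resulting dense vectors are what the cited work feeds to classifiers such as LG).

```python
def rating_vector(user_ratings, movie_ids):
    """Entry i is the user's rating for movie_ids[i], or 0 if unrated."""
    return [user_ratings.get(m, 0) for m in movie_ids]

movies = ["m1", "m2", "m3", "m4"]        # hypothetical catalog
alice = {"m1": 5, "m3": 2}               # sparse observed ratings
features = rating_vector(alice, movies)  # [5, 0, 2, 0]
```

Note that the encoding conflates "rated 0" with "never rated", a known limitation of this simple representation.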
@cite_33 used the music that users like to infer attributes. They augmented each music item with its corresponding Wikipedia page and then used topic modeling techniques to identify latent similarities between items; a user is predicted to share attributes with users who like similar music. @cite_21 inferred various attributes from the list of pages users liked on Facebook: similar to @cite_36 , they constructed a feature vector from the Facebook Likes and trained Logistic Regression classifiers to distinguish users with different attributes. @cite_12 built a model that infers household structures from users' viewing behaviors in Internet Protocol Television (IPTV) systems, with promising results.
{ "cite_N": [ "@cite_36", "@cite_21", "@cite_33", "@cite_12" ], "mid": [ "2159196732", "2153803020", "2279779665", "2077386717" ], "abstract": [ "User demographics, such as age, gender and ethnicity, are routinely used for targeting content and advertising products to users. Similarly, recommender systems utilize user demographics for personalizing recommendations and overcoming the cold-start problem. Often, privacy-concerned users do not provide these details in their online profiles. In this work, we show that a recommender system can infer the gender of a user with high accuracy, based solely on the ratings provided by users (without additional metadata), and a relatively small number of users who share their demographics. Focusing on gender, we design techniques for effectively adding ratings to a user's profile for obfuscating the user's gender, while having an insignificant effect on the recommendations provided to that user.", "We show that easily accessible digital records of behavior, Facebook Likes, can be used to automatically and accurately predict a range of highly sensitive personal attributes including: sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender. The analysis presented is based on a dataset of over 58,000 volunteers who provided their Facebook Likes, detailed demographic profiles, and the results of several psychometric tests. The proposed model uses dimensionality reduction for preprocessing the Likes data, which are then entered into logistic linear regression to predict individual psychodemographic profiles from Likes. The model correctly discriminates between homosexual and heterosexual men in 88% of cases, African Americans and Caucasian Americans in 95% of cases, and between Democrat and Republican in 85% of cases.
For the personality trait “Openness,” prediction accuracy is close to the test–retest accuracy of a standard personality test. We give examples of associations between attributes and Likes and discuss implications for online personalization and privacy.", "Suppose that a Facebook user, whose age is hidden or missing, likes Britney Spears. Can you guess his her age? Knowing that most Britney fans are teenagers, it is fairly easy for humans to answer this question. Interests (or \"likes\") of users is one of the highly-available on-line information. In this paper, we show how these seemingly harmless interests (e.g., music interests) can leak privacy sensitive information about users. In particular, we infer their undisclosed (private) attributes using the public attributes of other users sharing similar interests. In order to compare user-defined interest names, we extract their semantics using an ontologized version of Wikipedia and measure their similarity by applying a statistical learning method. Besides self-declared interests in music, our technique does not rely on any further information about users such as friend relationships or group belongings. Our experiments, based on more than 104K public profiles collected from Facebook and more than 2000 private profiles provided by volunteers, show that our inference technique efficiently predicts attributes that are very often hidden by users. To the best of our knowledge, this is the first time that user interests are used for profiling, and more generally, semantics-driven inference of private data is addressed.", "What you watch and when you watch say a lot about you, and such information at the aggregated level across a user population obviously provides significant insights for social and commercial applications. In this paper, we propose a model for inferring household structures based on analyzing users' viewing behaviors in Internet Protocol Television (IPTV) systems. 
We emphasize extracting features of viewing behaviors based on the dynamic of watching time and TV programs and training a classifier for inferring household structures according to the features. In the training phase, instead of merely using the limited labeled samples, we apply semisupervised learning strategy to obtain a graph-based model for classifying household structures from users' features. We test the proposed model on China Telecom IPTV data and demonstrate its utility in census research and system simulation. The demographic characteristics inferred by our approach match well with the population census data of Shanghai, and the inference of household structures of IPTV users gives encouraging results compared with the ground truth obtained by surveys, which opens the door for leveraging IPTV viewing data as a complementary way for time- and resource-consuming census tracking. On the other hand, the proposed model can also synthesize trace data for the simulations of IPTV systems, which provides us with a new strategy for system simulation." ] }
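The "users who like similar music share attributes" idea above can be sketched as nearest-neighbour inference over like-vectors. The cited work measures similarity through Wikipedia-based topic models; raw binary like-vectors with cosine similarity are used here as a simplified stand-in, and the artists and labels are hypothetical.

```python
import math

def cosine(u, v):
    """Cosine similarity between two like-vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def infer_from_likes(target_vec, labeled):
    """Predict the target's attribute as that of the labeled user whose
    like-vector is most similar (nearest neighbour in cosine space)."""
    best_vec, best_attr = max(labeled, key=lambda lv: cosine(target_vec, lv[0]))
    return best_attr

# Hypothetical binary like-vectors over [britney, metallica, mozart, jay-z].
labeled = [([1, 0, 0, 1], "teen"),
           ([0, 1, 0, 0], "adult"),
           ([0, 0, 1, 0], "adult")]
guess = infer_from_likes([1, 0, 0, 0], labeled)  # closest to the Britney fan
```

A topic model would generalize this by scoring two different-but-related artists (e.g., two pop acts) as similar even when their raw like-vectors do not overlap.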
1606.05746
2465616432
This paper addresses the modelling of requirements for a content Recommendation System (RS) for Online Social Networks (OSNs). On OSNs, a user constantly switches roles between content generator and content receiver. The goals and softgoals differ when the user is generating a post, as opposed to replying to a post. In other words, the user generates instances of different entities, depending on the role she has: a generator generates instances of a "post", while the receiver generates instances of a "reply". Therefore, we believe that when addressing Requirements Engineering (RE) for RS, it is necessary to distinguish these roles clearly. We aim to model an essential dynamic on OSNs, namely that when a user creates (posts) content, other users can ignore that content, or themselves start generating new content in reply, or react to the initial posting. This dynamic is key to designing OSNs, because it influences how active users are, and how attractive the OSN is for existing and new users. We apply a well-known Goal Oriented RE (GORE) technique, namely i-star, and show that this language fails to capture this dynamic and thus cannot be used alone to model the problem domain. Hence, in order to represent this dynamic, its relationships to other OSN requirements, and to capture all relevant information, we suggest using another modelling language, namely Petri Nets, on top of i-star for the modelling of the problem domain. We use Petri Nets because they are a tool for simulating the dynamic and concurrent activities of a system and can be used by both practitioners and theoreticians.
@cite_55 @cite_17 proposed AwReqs and EvoReqs and applied them to a goal model loosely based on an i-star model. AwReqs are requirements that refer to the states assumed by other requirements at runtime. They represent undesirable situations to which stakeholders would like the system to adapt in case they happen, and they can be used as indicators of requirements convergence at runtime. While AwReqs indicate the situations that require adaptation, EvoReqs prescribe what to do in those situations: they consist of specific changes to be carried out on the requirements model under specific circumstances, and are modeled as Event-Condition-Action (ECA) rules that are activated when an event occurs and a certain condition holds.
{ "cite_N": [ "@cite_55", "@cite_17" ], "mid": [ "2119340473", "2112951724" ], "abstract": [ "It is often the case that stakeholders want to strengthen weaken or otherwise change their requirements for a system-to-be when certain conditions apply at runtime. For example, stakeholders may decide that if requirement R is violated more than N times in a week, it should be relaxed to a less demanding one R-. Such evolution requirements play an important role in the lifetime of a software system in that they define possible changes to requirements, along with the conditions under which these changes apply. In this paper we focus on this family of requirements, how to model them and how to operationalize them at runtime. In addition, we evaluate our proposal with a case study adopted from the literature.", "Complexity is now one of the major challenges for the IT industry [1]. Systems might become too complex to be managed by humans and, thus, will have to be self-managed: Self-configure themselves for operation, self-protect from attacks, self-heal from errors and self-tune for optimal performance [2]. (Self-)Adaptive systems evaluate their own behavior and change it when the evaluation indicates that it is not accomplishing the software's purpose or when better functionality and performance are possible [3]. To that end, we need to monitor the behavior of the running system and compare it to an explicit formulation of requirements and domain assumptions [4]. Feedback loops (e.g., the MAPE loop [2]) constitute an architectural solution for this and, as proposed by past research [5], should be a first class citizen in the design of such systems. We advocate that adaptive systems should be designed this way from as early as Requirements Engineering and that reasoning over requirements is fundamental for run-time adaptation. We therefore propose an approach for the design of adaptive systems based on requirements and inspired in control theory [6]. 
Our proposal is goal-oriented and targets software-intensive socio-technical systems [7], in an attempt to integrate control-loop approaches with decentralized agents inspired approaches [8]. Our final objective is a set of extensions to state-of-the-art goal-oriented modeling languages that allow practitioners to clearly specify the requirements of adaptive systems and a run-time framework that helps developers implement such requirements. In this 2-page abstract paper, we summarize this approach." ] }
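The Event-Condition-Action operationalization of EvoReqs described above can be sketched as a minimal rule engine. This is a hypothetical illustration, not the authors' framework or its API: the requirement name R, the violation threshold, and the relaxation action are all invented.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class EcaRule:
    """An EvoReq-style rule: fires when its event occurs and its condition holds."""
    event: str                         # triggering event name
    condition: Callable[[Dict], bool]  # guard evaluated against the requirements model
    action: Callable[[Dict], None]     # change applied to the requirements model

@dataclass
class RuleEngine:
    rules: List[EcaRule] = field(default_factory=list)

    def notify(self, event: str, model: Dict) -> None:
        # On each event, apply every rule whose event matches and condition holds.
        for rule in self.rules:
            if rule.event == event and rule.condition(model):
                rule.action(model)

# Invented example: if requirement R is violated more than 3 times, relax it to R-.
def relax_R(model: Dict) -> None:
    model["R"]["level"] = "relaxed"

engine = RuleEngine([
    EcaRule(
        event="violation",
        condition=lambda m: m["R"]["violations"] > 3,
        action=relax_R,
    )
])

model = {"R": {"violations": 0, "level": "strict"}}
for _ in range(5):
    model["R"]["violations"] += 1
    engine.notify("violation", model)

print(model["R"]["level"])  # relaxed once violations exceed the threshold
```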
1606.05743
2471608469
This paper proposes an application-aware multipath packet forwarding framework that integrates Machine Learning Techniques (MLT) and Software Defined Networks (SDN). As the Internet provides a variety of services whose performance requirements have become heterogeneous, it is common to come across the scenario of multiple flows competing for a constrained resource such as bandwidth, a low-jitter path, or a low-latency path. Such factors are application-specific requirements that are beyond the knowledge of a simple combination of protocol type and port number. Better overall performance could be achieved if the network were able to prioritize the flows and assign resources based on their application-specific requirements. Our system prioritizes each of the flows using MLT and routes it through a path according to the flow priority and network state using SDN. The proof-of-concept implementation has been done on OpenvSwitch, and evaluation results involving a large number of flows exhibited a significant improvement over the traditional network setup. We also report that the port number and protocol do not contribute to determining the application in the decision-making process of Machine Learning (ML).
Weiyang et al. @cite_9 proposed a strategy to overcome traffic congestion and physical impairments in OpenFlow networks. They avoid congestion by looking for an alternate path and moving high-data-rate IP traffic to circuit switching, and they also handle physical impairments such as fiber cuts, which can be detected by monitoring abnormalities in the paths between optical nodes. In contrast, our approach proactively finds a better path and inserts the flow rules in advance, and hence avoids congestion for sensitive, delay-intolerant traffic. Moreover, their work does not use any application classification mechanism.
{ "cite_N": [ "@cite_9" ], "mid": [ "2171899087" ], "abstract": [ "We demonstrate multipath routing and wavelength re-assignment in OpenFlow-enabled packet-circuit network, aware of links' bandwidth utilization, applications' protocol, and quality of transmission. Applications are re-routed under traffic congestion and wavelengths are re-assigned under physical impairments." ] }
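The idea of prioritizing flows and assigning a constrained resource accordingly can be illustrated with a toy greedy placement. This is only a sketch, not the paper's SDN/OpenFlow implementation: the flow names, priorities, demands, and path names are invented, and a real controller would consult live network state rather than a static load table.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Flow:
    name: str
    priority: int   # higher = more latency-sensitive (as a classifier might assign)
    demand: float   # required bandwidth, arbitrary units

def assign_paths(flows: List[Flow], paths: List[str]) -> Dict[str, str]:
    """Greedily map flows to paths: highest priority first, each to the
    currently least-loaded candidate path. Returns {flow name: path name}."""
    load = {p: 0.0 for p in paths}
    assignment = {}
    for f in sorted(flows, key=lambda f: -f.priority):
        best = min(load, key=load.get)  # least-loaded path; ties break by list order
        assignment[f.name] = best
        load[best] += f.demand
    return assignment

flows = [
    Flow("video_call", priority=3, demand=5.0),
    Flow("bulk_backup", priority=1, demand=8.0),
    Flow("web", priority=2, demand=2.0),
]
print(assign_paths(flows, ["path_a", "path_b"]))
```

With these numbers, the high-priority video call gets an uncontended path, while the lower-priority flows share the other one.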
1606.05491
2429300145
We present a natural language generator based on the sequence-to-sequence approach that can be trained to produce natural language strings as well as deep syntax dependency trees from input dialogue acts, and we use it to directly compare two-step generation with separate sentence planning and surface realization stages to a joint, one-step approach. We were able to train both setups successfully using very little training data. The joint setup offers better performance, surpassing state-of-the-art with regards to n-gram-based scores while providing more relevant outputs.
While most recent NLG systems attempt to learn generation from data, the choice of a particular approach – pipeline or joint – is often arbitrary and depends on system architecture or particular generation domain. Works using the pipeline approach in SDS tend to focus on sentence planning, improving a handcrafted generator @cite_15 @cite_1 @cite_14 or using perceptron-guided A* search @cite_25 . Generators taking the joint approach employ various methods, e.g., factored language models @cite_28 , inverted parsing @cite_19 @cite_5 , or a pipeline of discriminative classifiers @cite_22 . Unlike most previous NLG systems, our generator is trainable from unaligned pairs of MR and sentences alone.
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_28", "@cite_1", "@cite_19", "@cite_5", "@cite_15", "@cite_25" ], "mid": [ "2103102929", "1521413921", "2161181481", "2139079654", "2097828466", "2157812664", "2068537423", "2250530145" ], "abstract": [ "In this paper we present a new approach to controlling the behaviour of a natural language generation system by correlating internal decisions taken during free generation of a wide range of texts with the surface stylistic characteristics of the resulting outputs, and using the correlation to control the generator. This contrasts with the generate-and-test architecture adopted by most previous empirically-based generation approaches, offering a more efficient, generic and holistic method of generator control. We illustrate the approach by describing a system in which stylistic variation (in the sense of Biber (1988)) can be effectively controlled during the generation of short medical information texts.", "We present a simple, robust generation system which performs content selection and surface realization in a unified, domain-independent framework. In our approach, we break up the end-to-end generation process into a sequence of local decisions, arranged hierarchically and each trained discriminatively. We deployed our system in three different domains---Robocup sportscasting, technical weather forecasts, and common weather forecasts, obtaining results comparable to state-of-the-art domain-specific systems both in terms of BLEU scores and human evaluation.", "Most previous work on trainable language generation has focused on two paradigms: (a) using a statistical model to rank a set of generated utterances, or (b) using statistics to inform the generation decision process. Both approaches rely on the existence of a handcrafted generator, which limits their scalability to new domains. 
This paper presents Bagel, a statistical language generator which uses dynamic Bayesian networks to learn from semantically-aligned data produced by 42 untrained annotators. A human evaluation shows that Bagel can generate natural and informative utterances from unseen inputs in the information presentation domain. Additionally, generation performance on sparse datasets is improved significantly by using certainty-based active learning, yielding ratings close to the human gold standard with a fraction of the data.", "A challenging problem for spoken dialog systems is the design of utterance generation modules that are fast, flexible and general, yet produce high quality output in particular domains. A promising approach is trainable generation, which uses general-purpose linguistic knowledge automatically adapted to the application domain. This paper presents a trainable sentence planner for the MATCH dialog system. We show that trainable sentence planning can produce output comparable to that of MATCH's template-based generator even for quite complex information presentations.", "This paper explores the use of statistical machine translation (SMT) methods for tactical natural language generation. We present results on using phrase-based SMT for learning to map meaning representations to natural language. Improved results are obtained by inverting a semantic parser that uses SMT methods to map sentences into meaning representations. Finally, we show that hybridizing these two approaches results in still more accurate generation systems. Automatic and human evaluation of generated sentences are presented across two domains and four languages.", "Concept-to-text generation refers to the task of automatically producing textual output from non-linguistic input. We present a joint model that captures content selection (\"what to say\") and surface realization (\"how to say\") in an unsupervised domain-independent fashion. 
Rather than breaking up the generation process into a sequence of local decisions, we define a probabilistic context-free grammar that globally describes the inherent structure of the input (a corpus of database records and text describing some of them). We recast generation as the task of finding the best derivation tree for a set of database records and describe an algorithm for decoding in this framework that allows to intersect the grammar with additional information capturing fluency and syntactic well-formedness constraints. Experimental evaluation on several domains achieves results competitive with state-of-the-art systems that use domain specific constraints, explicit feature engineering or labeled data.", "Sentence planning is a set of inter-related but distinct tasks, one of which is sentence scoping, i.e. the choice of syntactic structure for elementary speech acts and the decision of how to combine them into one or more sentences. In this paper, we present SPoT, a sentence planner, and a new methodology for automatically training SPoT on the basis of feedback provided by human judges. We reconceptualize the task into two distinct phases. First, a very simple, randomized sentence-plan-generator (SPG) generates a potentially large list of possible sentence plans for a given text-plan input. Second, the sentence-plan-ranker (SPR) ranks the list of output sentence plans, and then selects the top-ranked plan. The SPR uses ranking rules automatically learned from training data. We show that the trained SPR learns to select a sentence plan whose rating on average is only 5 worse than the top human-ranked sentence plan.", "We present a novel syntax-based natural language generation system that is trainable from unaligned pairs of input meaning representations and output sentences. It is divided into sentence planning, which incrementally builds deep-syntactic dependency trees, and surface realization. 
Sentence planner is based on A* search with a perceptron ranker that uses novel differing subtree updates and a simple future promise estimation; surface realization uses a rule-based pipeline from the Treex NLP toolkit. Our first results show that training from unaligned data is feasible, the outputs of our generator are mostly fluent and relevant." ] }
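The contrast between two-step generation (sentence planning followed by surface realization) and a joint, one-step mapping can be illustrated with a deliberately trivial template generator. This is not the paper's sequence-to-sequence model: the meaning representation, slot names, and templates are invented.

```python
# A tiny meaning representation (dialogue act) with invented slots.
mr = {"act": "inform", "name": "Golden Palace", "food": "Chinese"}

# --- two-step pipeline ---
def sentence_plan(mr):
    # Sentence planning: decide content and structure (a crude "deep" plan).
    return [("subject", mr["name"]), ("verb", "serves"), ("object", mr["food"] + " food")]

def surface_realize(plan):
    # Surface realization: linearize the plan into a surface string.
    return " ".join(word for _, word in plan) + "."

# --- joint, one-step ---
def joint_generate(mr):
    # Direct MR-to-string mapping, with no intermediate plan.
    return f'{mr["name"]} serves {mr["food"]} food.'

# For this toy MR both routes produce the same sentence.
assert surface_realize(sentence_plan(mr)) == joint_generate(mr)
```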
1606.05464
2437771934
Stance detection is the task of classifying the attitude expressed in a text towards a target such as Hillary Clinton as "positive", "negative" or "neutral". Previous work has assumed that either the target is mentioned in the text or that training data for every target is given. This paper considers the more challenging version of this task, where targets are not always mentioned and no training data is available for the test targets. We experiment with conditional LSTM encoding, which builds a representation of the tweet that is dependent on the target, and demonstrate that it outperforms encoding the tweet and the target independently. Performance is improved further when the conditional model is augmented with bidirectional encoding. We evaluate our approach on the SemEval 2016 Task 6 Twitter Stance Detection corpus, achieving performance second only to a system trained on semi-automatically labelled tweets for the test target. When such weak supervision is added, our approach achieves state-of-the-art results.
Stance Detection : Previous work mostly considered target-specific stance prediction in debates @cite_21 @cite_5 or student essays @cite_8 . The task considered in this paper is more challenging than stance detection in debates because, in addition to irregular language, the dataset is offered without any context, e.g., conversational structure or tweet metadata. The targets are also not always mentioned in the tweets, which is an additional challenge @cite_16 and distinguishes this task from target-dependent @cite_2 @cite_18 @cite_22 and open-domain target-dependent sentiment analysis @cite_0 @cite_1 . Related work on rumour stance detection either requires training data from the same rumour @cite_20 , i.e., the same target, or is rule-based @cite_19 and thus potentially hard to generalise. Finally, the target-dependent stance detection task tackled in this paper is different from that of , which, while related, is concerned with the stance of a statement in natural language towards another statement.
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_8", "@cite_21", "@cite_1", "@cite_0", "@cite_19", "@cite_2", "@cite_5", "@cite_16", "@cite_20" ], "mid": [ "2514722822", "2251644290", "1854537555", "", "2252007242", "2251900677", "2051405935", "2952340952", "", "", "2159981908" ], "abstract": [ "Targeted sentiment analysis classifies the sentiment polarity towards each target entity mention in given text documents. Seminal methods extract manual discrete features from automatic syntactic parse trees in order to capture semantic information of the enclosing sentence with respect to a target entity mention. Recently, it has been shown that competitive accuracies can be achieved without using syntactic parsers, which can be highly inaccurate on noisy text such as tweets. This is achieved by applying distributed word representations and rich neural pooling functions over a simple and intuitive segmentation of tweets according to target entity mentions. In this paper, we extend this idea by proposing a sentence-level neural model to address the limitation of pooling functions, which do not explicitly model tweet-level semantics. First, a bi-directional gated neural network is used to connect the words in a tweet so that pooling functions can be applied over the hidden layer instead of words for better representing the target and its contexts. Second, a three-way gated neural network structure is used to model the interaction between the target mention and its surrounding contexts. Experiments show that our proposed model gives significantly higher accuracies compared to the current best method for targeted sentiment analysis.", "Vector representations for language has been shown to be useful in a number of Natural Language Processing tasks. In this paper, we aim to investigate the effectiveness of word vector representations for the problem of Aspect Based Sentiment Analysis. 
In particular, we target three sub-tasks, namely aspect term extraction, aspect category detection, and aspect sentiment prediction. We investigate the effectiveness of vector representations over different text data and evaluate the quality of domain-dependent vectors. We utilize vector representations to compute various vector-based features and conduct extensive experiments to demonstrate their effectiveness. Using simple vector-based features, we achieve F1 scores of 79.91 for aspect term extraction, 86.75 for category detection, and an accuracy of 72.39 for aspect sentiment prediction.", "We present a new approach to the automated classification of document-level argument stance, a relatively under-researched sub-task of Sentiment Analysis. In place of the noisy online debate data currently used in stance classification research, a corpus of student essays annotated for essay-level stance is constructed for use in a series of classification experiments. A novel set of features designed to capture the stance, stance targets, and topical relationships between the essay prompt and the student's essay is described. Models trained on this feature set showed significant increases in accuracy relative to two high baselines.", "", "Open domain targeted sentiment is the joint information extraction task that finds target mentions together with the sentiment towards each mention from a text corpus. The task is typically modeled as a sequence labeling problem, and solved using state-of-the-art labelers such as CRF. We empirically study the effect of word embeddings and automatic feature combinations on the task by extending a CRF baseline using neural networks, which have demonstrated large potentials for sentiment analysis. Results show that the neural model can give better results by significantly increasing the recall. 
In addition, we propose a novel integration of neural and discrete features, which combines their relative advantages, leading to significantly higher results compared to both baselines.", "We propose a novel approach to sentiment analysis for a low resource setting. The intuition behind this work is that sentiment expressed towards an entity, targeted sentiment, may be viewed as a span of sentiment expressed across the entity. This representation allows us to model sentiment detection as a sequence tagging problem, jointly discovering people and organizations along with whether there is sentiment directed towards them. We compare performance in both Spanish and English on microblog data, using only a sentiment lexicon as an external resource. By leveraging linguistically-informed features within conditional random fields (CRFs) trained to minimize empirical risk, our best models in Spanish significantly outperform a strong baseline, and reach around 90% accuracy on the combined task of named entity recognition and sentiment prediction. Our models in English, trained on a much smaller dataset, are not yet statistically significant against their baselines.", "In this paper, we propose the first real time rumor debunking algorithm for Twitter. We use cues from 'wisdom of the crowds', that is, the aggregate 'common sense' and investigative journalism of Twitter users. We concentrate on identification of a rumor as an event that may comprise one or more conflicting microblogs. We continue monitoring the rumor event and generate real time updates dynamically based on any additional information received. We show using real streaming data that it is possible, using our approach, to debunk rumors accurately and efficiently, often much faster than manual verification by professionals.", "In recent times, social media sites such as Twitter have been extensively used for debating politics and public policies. 
These debates span millions of tweets and numerous topics of public importance. Thus, it is imperative that this vast trove of data is tapped in order to gain insights into public opinion especially on hotly contested issues such as abortion, gun reforms etc. Thus, in our work, we aim to gauge users' stance on such topics in Twitter. We propose ReLP, a semi-supervised framework using a retweet-based label propagation algorithm coupled with a supervised classifier to identify users with differing opinions. In particular, our framework is designed such that it can be easily adopted to different domains with little human supervision while still producing excellent accuracy", "", "", "A rumor is commonly defined as a statement whose true value is unverifiable. Rumors may spread misinformation (false information) or disinformation (deliberately false information) on a network of people. Identifying rumors is crucial in online social media where large amounts of information are easily spread across a large network by sources with unverified authority. In this paper, we address the problem of rumor detection in microblogs and explore the effectiveness of 3 categories of features: content-based, network-based, and microblog-specific memes for correctly identifying rumors. Moreover, we show how these features are also effective in identifying disinformers, users who endorse a rumor and further help it to spread. We perform our experiments on more than 10,000 manually annotated tweets collected from Twitter and show how our retrieval model achieves more than 0.95 in Mean Average Precision (MAP). Finally, we believe that our dataset is the first large-scale dataset on rumor detection. It can open new dimensions in analyzing online misinformation and other aspects of microblog conversations." ] }
1606.05464
2437771934
Stance detection is the task of classifying the attitude expressed in a text towards a target such as Hillary Clinton as "positive", "negative" or "neutral". Previous work has assumed that either the target is mentioned in the text or that training data for every target is given. This paper considers the more challenging version of this task, where targets are not always mentioned and no training data is available for the test targets. We experiment with conditional LSTM encoding, which builds a representation of the tweet that is dependent on the target, and demonstrate that it outperforms encoding the tweet and the target independently. Performance is improved further when the conditional model is augmented with bidirectional encoding. We evaluate our approach on the SemEval 2016 Task 6 Twitter Stance Detection corpus, achieving performance second only to a system trained on semi-automatically labelled tweets for the test target. When such weak supervision is added, our approach achieves state-of-the-art results.
Conditional Encoding : Conditional encoding has been applied to the related task of recognising textual entailment @cite_14 , using a dataset of half a million training examples @cite_12 and numerous different hypotheses. Our experiments here show that conditional encoding is also successful on a relatively small training set and when applied to an unseen testing target. Moreover, we augment conditional encoding with bidirectional encoding and demonstrate the added benefit of unsupervised pre-training of word embeddings on unlabelled domain data.
{ "cite_N": [ "@cite_14", "@cite_12" ], "mid": [ "2118463056", "2953084091" ], "abstract": [ "While most approaches to automatically recognizing entailment relations have used classifiers employing hand engineered features derived from complex natural language processing pipelines, in practice their performance has been only slightly better than bag-of-word pair classifiers using only lexical similarity. The only attempt so far to build an end-to-end differentiable neural network for entailment failed to outperform such a simple similarity classifier. In this paper, we propose a neural model that reads two sentences to determine entailment using long short-term memory units. We extend this model with a word-by-word neural attention mechanism that encourages reasoning over entailments of pairs of words and phrases. Furthermore, we present a qualitative analysis of attention weights produced by this model, demonstrating such reasoning capabilities. On a large entailment dataset this model outperforms the previous best neural model and a classifier with engineered features by a substantial margin. It is the first generic end-to-end differentiable system that achieves state-of-the-art accuracy on a textual entailment dataset.", "Understanding entailment and contradiction is fundamental to understanding natural language, and inference about entailment and contradiction is a valuable testing ground for the development of semantic representations. However, machine learning research in this area has been dramatically limited by the lack of large-scale resources. To address this, we introduce the Stanford Natural Language Inference corpus, a new, freely available collection of labeled sentence pairs, written by humans doing a novel grounded task based on image captioning. At 570K pairs, it is two orders of magnitude larger than all other resources of its type. 
This increase in scale allows lexicalized classifiers to outperform some sophisticated existing entailment models, and it allows a neural network-based model to perform competitively on natural language inference benchmarks for the first time." ] }
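Conditional encoding as described above can be sketched with a toy one-dimensional vanilla RNN standing in for the LSTM. The weights and the tiny embedding table are fixed, made-up values (no training): the target is read first and its final hidden state seeds the tweet encoder, so the tweet representation becomes target-dependent.

```python
import math

def rnn_encode(tokens, embed, w_in=0.5, w_rec=0.9, h0=0.0):
    """One-dimensional vanilla RNN: h_t = tanh(w_in * x_t + w_rec * h_{t-1})."""
    h = h0
    for tok in tokens:
        h = math.tanh(w_in * embed.get(tok, 0.0) + w_rec * h)
    return h

# Invented toy embeddings.
embed = {"hillary": 1.0, "clinton": 0.8, "great": 0.6, "debate": -0.4}

target = ["hillary", "clinton"]
tweet = ["great", "debate"]

# Independent encoding: the tweet representation ignores the target.
h_independent = rnn_encode(tweet, embed)

# Conditional encoding: the tweet encoder starts from the target's final state.
h_target = rnn_encode(target, embed)
h_conditional = rnn_encode(tweet, embed, h0=h_target)

# The two representations of the same tweet differ once conditioned on the target.
assert h_conditional != h_independent
```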
1606.05608
2436227985
We derandomize G. Valiant's [J. ACM 62 (2015) Art. 13] subquadratic-time algorithm for finding outlier correlations in binary data. Our derandomized algorithm gives deterministic subquadratic scaling essentially for the same parameter range as Valiant's randomized algorithm, but the precise constants we save over quadratic scaling are more modest. Our main technical tool for derandomization is an explicit family of correlation amplifiers built via a family of zigzag-product expanders in Reingold, Vadhan, and Wigderson [Ann. of Math. 155 (2002) 157--187]. We say that a function @math is a correlation amplifier with threshold @math , error @math , and strength @math an even positive integer if for all pairs of vectors @math it holds that (i) @math implies @math ; and (ii) @math implies @math .
prob:main is a basic problem in data analysis and machine learning admitting many extensions, restrictions, and variants. A large body of work studies it via techniques such as locality-sensitive hashing (e.g. @cite_26 @cite_31 @cite_17 @cite_4 @cite_14 @cite_30 @cite_25 ), with recent work aimed at derandomization (see Pagh @cite_33 and Pham and Pagh @cite_34 ) and resource tradeoffs (see Kapralov @cite_3 ) in particular. However, these techniques enable subquadratic scaling in @math only when @math is bounded from below by a positive constant, whereas the algorithm in thm:algorithm remains subquadratic even in the case of weak outliers when @math tends to zero with increasing @math , as long as @math and @math are separated. Ahle, Pagh, Razenshteyn, and Silvestri @cite_11 show that subquadratic scaling in @math is not possible for @math unless both the Orthogonal Vectors Conjecture and the Strong Exponential Time Hypothesis @cite_35 fail.
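Properties (i) and (ii) of a correlation amplifier can be illustrated in their idealized, brute-force form: a p-fold tensor power of {-1,1}-valued vectors maps a normalized inner product ρ exactly to ρ^p, which widens the multiplicative gap between outlier and background correlations. The paper's explicit expander-based amplifiers approximate this effect without the d^p blow-up in dimension; the vectors below are made up for illustration.

```python
import math
from itertools import product

def tensor_power(x, p):
    """p-fold tensor power of a vector: all coordinate products, length len(x)**p."""
    return [math.prod(c) for c in product(x, repeat=p)]

def ncorr(x, y):
    """Normalized inner product <x, y> / d."""
    return sum(a * b for a, b in zip(x, y)) / len(x)

x = [1, -1, 1, 1, -1, 1, -1, 1]
y = [1, 1, 1, -1, -1, 1, 1, 1]
p = 2  # even strength, as in the paper's definition

rho = ncorr(x, y)
rho_amplified = ncorr(tensor_power(x, p), tensor_power(y, p))

# Exact identity: <x^(⊗p), y^(⊗p)> / d^p == (<x, y> / d)^p.
assert abs(rho_amplified - rho ** p) < 1e-12
```

Raising correlations to the p-th power shrinks a weak background correlation τ much faster than an outlier correlation ρ > τ, since (ρ/τ)^p grows with p; this is the separation the amplifier exploits.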
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_14", "@cite_26", "@cite_4", "@cite_33", "@cite_11", "@cite_3", "@cite_31", "@cite_34", "@cite_25", "@cite_17" ], "mid": [ "2002359780", "1985572324", "1502916507", "", "2147717514", "2261895596", "", "2080844740", "", "2272255885", "1985289891", "2508919161" ], "abstract": [ "Given a metric space @math , @math , @math , and @math , a distribution over mappings @math is called a @math -sensitive hash family if any two points in @math at distance at most @math are mapped by @math to the same value with probability at least @math , and any two points at distance greater than @math are mapped by @math to the same value with probability at most @math . This notion was introduced by Indyk and Motwani in 1998 as the basis for an efficient approximate nearest neighbor search algorithm and has since been used extensively for this purpose. The performance of these algorithms is governed by the parameter @math , and constructing hash families with small @math automatically yields improved nearest neighbor algorithms. Here we show that for @math it is impossible to achieve @math .", "The k-SAT problem is to determine if a given k-CNF has a satisfying assignment. It is a celebrated open question as to whether it requires exponential time to solve k-SAT for k ≥ 3. Here exponential time means 2^(δn) for some δ > 0. In this paper, assuming that, for k ≥ 3, k-SAT requires exponential time complexity, we show that the complexity of k-SAT increases as k increases. More precisely, for k ≥ 3, define s_k = inf{δ : there exists a 2^(δn) algorithm for solving k-SAT}. Define ETH (Exponential-Time Hypothesis) for k-SAT as follows: for k ≥ 3, s_k > 0. In this paper, we show that s_k is increasing infinitely often assuming ETH for k-SAT. Let s_∞ be the limit of s_k. We will in fact show that s_k ≤ (1 − d/k) s_∞ for some constant d > 0. 
We prove this result by bringing together the ideas of critical clauses and the Sparsification Lemma to reduce the satisfiability of a k-CNF to the satisfiability of a disjunction of 2?nk?-CNFs in fewer variables for some k??k and arbitrarily small ?>0. We also show that such a disjunction can be computed in time 2?n for arbitrarily small ?>0.", "The nearestor near-neighbor query problems arise in a large variety of database applications, usually in the context of similarity searching. Of late, there has been increasing interest in building search index structures for performing similarity search over high-dimensional data, e.g., image databases, document collections, time-series databases, and genome databases. Unfortunately, all known techniques for solving this problem fall prey to the of dimensionality.\" That is, the data structures scale poorly with data dimensionality; in fact, if the number of dimensions exceeds 10 to 20, searching in k-d trees and related structures involves the inspection of a large fraction of the database, thereby doing no better than brute-force linear search. It has been suggested that since the selection of features and the choice of a distance metric in typical applications is rather heuristic, determining an approximate nearest neighbor should su ce for most practical purposes. In this paper, we examine a novel scheme for approximate similarity search based on hashing. The basic idea is to hash the points Supported by NAVY N00014-96-1-1221 grant and NSF Grant IIS-9811904. Supported by Stanford Graduate Fellowship and NSF NYI Award CCR-9357849. Supported by ARO MURI Grant DAAH04-96-1-0007, NSF Grant IIS-9811904, and NSF Young Investigator Award CCR9357849, with matching funds from IBM, Mitsubishi, Schlumberger Foundation, Shell Foundation, and Xerox Corporation. 
from the database so as to ensure that the probability of collision is much higher for objects that are close to each other than for those that are far apart. We provide experimental evidence that our method gives significant improvement in running time over other methods for searching in high-dimensional spaces based on hierarchical tree decomposition. Experimental results also indicate that our scheme scales well even for a relatively large number of dimensions (more than 50).", "", "We present two algorithms for the approximate nearest neighbor problem in high-dimensional spaces. For data sets of size n living in R^d, the algorithms require space that is only polynomial in n and d, while achieving query times that are sub-linear in n and polynomial in d. We also show applications to other high-dimensional geometric problems, such as the approximate minimum spanning tree. The article is based on the material from the authors' STOC'98 and FOCS'01 papers. It unifies, generalizes and simplifies the results from those papers.", "We consider a new construction of locality-sensitive hash functions for Hamming space that is covering in the sense that it is guaranteed to produce a collision for every pair of vectors within a given radius r.
The construction is efficient in the sense that the expected number of hash collisions between vectors at distance cr, for a given c > 1, comes close to that of the best possible data-independent LSH without the covering guarantee, namely, the seminal LSH construction of Indyk and Motwani (FOCS '98). The efficiency of the new construction essentially matches their bound if cr = log(n)/k, where n is the number of points in the data set and k ∈ N, and differs from it by at most a factor ln(4) in the exponent for general values of cr. As a consequence, LSH-based similarity search in Hamming space can avoid the problem of false negatives at little or no cost in efficiency.", "", "Locality Sensitive Hashing (LSH) has emerged as the method of choice for high dimensional similarity search, a classical problem of interest in numerous applications. LSH-based solutions require that each data point be inserted into a number A of hash tables, after which a query can be answered by performing B lookups. The original LSH solution of [IM98] showed for the first time that both A and B can be made sublinear in the number of data points. Unfortunately, the classical LSH solution does not provide any tradeoff between insert and query complexity, whereas for data (respectively, query) intensive applications one would like to minimize insert time by choosing a smaller @math (respectively, minimize query time by choosing a smaller B). A partial remedy for this is provided by Entropy LSH [Pan06], which allows making either inserts or queries essentially constant time at the expense of a loss in the other parameter, but no algorithm that achieves a smooth tradeoff is known. In this paper, we present an algorithm for performing similarity search under the Euclidean metric that resolves the problem above. Our solution is inspired by Entropy LSH, but uses a very different analysis to achieve a smooth tradeoff between insert and query complexity.
Our results improve upon or match, up to lower order terms in the exponent, best known data-oblivious algorithms for the Euclidean metric.", "", "Locality-sensitive hashing (LSH) has emerged as the dominant algorithmic technique for similarity search with strong performance guarantees in high-dimensional spaces. A drawback of traditional LSH schemes is that they may have false negatives, i.e., the recall is less than 100%. This limits the applicability of LSH in settings requiring precise performance guarantees. Building on the recent theoretical \"CoveringLSH\" construction that eliminates false negatives, we propose a fast and practical covering LSH scheme for Hamming space called . Inheriting the design benefits of CoveringLSH, our method avoids false negatives and always reports all near neighbors. Compared to CoveringLSH we achieve an asymptotic improvement to the hash function computation time from @math to @math , where @math is the dimensionality of data and @math is the number of hash tables. Our experiments on synthetic and real-world data sets demonstrate that is comparable (and often superior) to traditional hashing-based approaches for search radius up to 20 in high-dimensional Hamming space.", "We study lower bounds for Locality-Sensitive Hashing (LSH) in the strongest setting: point sets in {0,1}^d under the Hamming distance. Recall that H is said to be an (r, cr, p, q)-sensitive hash family if all pairs x, y ∈ {0,1}^d with dist(x, y) ≤ r have probability at least p of collision under a randomly chosen h ∈ H, whereas all pairs x, y ∈ {0,1}^d with dist(x, y) ≥ cr have probability at most q of collision. Typically, one considers d → ∞, with c > 1 fixed and q bounded away from 0. For its applications to approximate nearest-neighbor search in high dimensions, the quality of an LSH family H is governed by how small its ρ parameter ρ = ln(1/p)/ln(1/q) is as a function of the parameter c.
The seminal paper of Indyk and Motwani [1998] showed that for each c ≥ 1, the extremely simple family H = {x ↦ x_i : i ∈ [d]} achieves ρ ≤ 1/c. The only known lower bound, due to [2007], is that ρ must be at least (e^(1/c) − 1)/(e^(1/c) + 1) ≥ 0.46/c (minus o_d(1)). The contribution of this article is twofold. (1) We show the “optimal” lower bound for ρ: it must be at least 1/c (minus o_d(1)). Our proof is very simple, following almost immediately from the observation that the noise stability of a boolean function at time t is a log-convex function of t. (2) We raise and discuss the following issue: neither the application of LSH to nearest-neighbor search nor the known LSH lower bounds hold as stated if the q parameter is tiny. Here, “tiny” means q = 2^(−Θ(d)), a parameter range we believe is natural.", "[See the paper for the full abstract.] We show tight upper and lower bounds for time-space trade-offs for the @math -Approximate Near Neighbor Search problem. For the @math -dimensional Euclidean space and @math -point datasets, we develop a data structure with space @math and query time @math for every @math such that: This is the first data structure that achieves sublinear query time and near-linear space for every approximation factor @math , improving upon [Kapralov, PODS 2015]. The data structure is a culmination of a long line of work on the problem for all space regimes; it builds on Spherical Locality-Sensitive Filtering [Becker, Ducas, Gama, Laarhoven, SODA 2016] and data-dependent hashing [Andoni, Indyk, Nguyen, Razenshteyn, SODA 2014] [Andoni, Razenshteyn, STOC 2015]. Our matching lower bounds are of two types: conditional and unconditional. First, we prove tightness of the whole above trade-off in a restricted model of computation, which captures all known hashing-based approaches.
We then show unconditional cell-probe lower bounds for one and two probes that match the above trade-off for @math , improving upon the best known lower bounds from [Panigrahy, Talwar, Wieder, FOCS 2010]. In particular, this is the first space lower bound (for any static data structure) for two probes which is not polynomially smaller than the one-probe bound. To show the result for two probes, we establish and exploit a connection to locally-decodable codes." ] }
1606.05608
2436227985
We derandomize G. Valiant's [J. ACM 62 (2015) Art. 13] subquadratic-time algorithm for finding outlier correlations in binary data. Our derandomized algorithm gives deterministic subquadratic scaling essentially for the same parameter range as Valiant's randomized algorithm, but the precise constants we save over quadratic scaling are more modest. Our main technical tool for derandomization is an explicit family of correlation amplifiers built via a family of zigzag-product expanders in Reingold, Vadhan, and Wigderson [Ann. of Math. 155 (2002) 157--187]. We say that a function @math is a correlation amplifier with threshold @math , error @math , and strength @math an even positive integer if for all pairs of vectors @math it holds that (i) @math implies @math ; and (ii) @math implies @math .
In small dimensions, Alman and Williams @cite_28 present a randomized algorithm that finds Hamming-near neighbours in a batch-query setting analogous to prob:main in subquadratic time in @math when the dimension is constrained to @math . Recently, Chan and Williams @cite_9 show how to derandomize related algorithm designs; also, Alman, Chan and Williams @cite_10 derandomize the probabilistic polynomials for symmetric Boolean functions used in @cite_28 , achieving deterministic subquadratic batch queries in small dimensions.
{ "cite_N": [ "@cite_28", "@cite_9", "@cite_10" ], "mid": [ "1430582609", "2263246245", "2507428467" ], "abstract": [ "We show how to compute any symmetric Boolean function on n variables over any field (as well as the integers) with a probabilistic polynomial of degree O(√(n log(1/ε))) and error at most ε. The degree dependence on n and ε is optimal, matching a lower bound of Razborov (1987) and Smolensky (1987) for the MAJORITY function. The proof is constructive: a low-degree polynomial can be efficiently sampled from the distribution. This polynomial construction is combined with other algebraic ideas to give the first subquadratic time algorithm for computing a (worst-case) batch of Hamming distances in superlogarithmic dimensions, exactly. To illustrate, let c(n): N -> N. Suppose we are given a database D of n vectors in {0,1}^(c(n) log n) and a collection of n query vectors Q in the same dimension. For all u in Q, we wish to compute a v in D with minimum Hamming distance from u. We solve this problem in n^(2−1/O(c(n) log² c(n))) randomized time. Hence, the problem is in \"truly subquadratic\" time for O(log n) dimensions, and in subquadratic time for d = o((log² n)/(log log n)²). We apply the algorithm to computing pairs with maximum inner product, closest pair in ℓ1 for vectors with bounded integer entries, and pairs with maximum Jaccard coefficients.", "We show how to solve all-pairs shortest paths on n nodes in deterministic n³/2^Ω([EQUATION]) time, and how to count the pairs of orthogonal vectors among n 0-1 vectors in d = c log n dimensions in deterministic n^(2−1/O(log c)) time. These running times essentially match the best known randomized algorithms of (Williams, STOC'14) and (Abboud, Williams, and Yu, SODA 2015) respectively, and the ability to count was open even for randomized algorithms. By reductions, these two results yield faster deterministic algorithms for many other problems.
Our techniques can also be used to deterministically count k-SAT assignments on n-variable formulas in 2^(n−n/O(k)) time, roughly matching the best known running times for detecting satisfiability and resolving an open problem of Santhanam (2013). A key to our constructions is an efficient way to deterministically simulate certain probabilistic polynomials critical to the algorithms of prior work, carefully applying small-biased sets and modulus-amplifying polynomials.", "We design new polynomials for representing threshold functions in three different regimes: probabilistic polynomials of low degree, which need far less randomness than previous constructions, polynomial threshold functions (PTFs) with \"nice\" threshold behavior and degree almost as low as the probabilistic polynomials, and a new notion of probabilistic PTFs where we combine the above techniques to achieve even lower degree with similar \"nice\" threshold behavior. Utilizing these polynomial constructions, we design faster algorithms for a variety of problems: @math Offline Hamming Nearest (and Furthest) Neighbors: Given @math red and @math blue points in @math -dimensional Hamming space for @math , we can find an (exact) nearest (or furthest) blue neighbor for every red point in randomized time @math or deterministic time @math . These also lead to faster MAX-SAT algorithms for sparse CNFs. @math Offline Approximate Nearest (and Furthest) Neighbors: Given @math red and @math blue points in @math -dimensional @math or Euclidean space, we can find a @math -approximate nearest (or furthest) blue neighbor for each red point in randomized time near @math . @math SAT Algorithms and Lower Bounds for Circuits With Linear Threshold Functions: We give a satisfiability algorithm for @math circuits with a subquadratic number of linear threshold gates on the bottom layer, and a subexponential number of gates on the other layers, that runs in deterministic @math time.
This also implies new circuit lower bounds for threshold circuits. We also give a randomized @math -time SAT algorithm for subexponential-size @math circuits, where the top @math gate and middle @math gates have @math fan-in." ] }
1606.05608
2436227985
We derandomize G. Valiant's [J. ACM 62 (2015) Art. 13] subquadratic-time algorithm for finding outlier correlations in binary data. Our derandomized algorithm gives deterministic subquadratic scaling essentially for the same parameter range as Valiant's randomized algorithm, but the precise constants we save over quadratic scaling are more modest. Our main technical tool for derandomization is an explicit family of correlation amplifiers built via a family of zigzag-product expanders in Reingold, Vadhan, and Wigderson [Ann. of Math. 155 (2002) 157--187]. We say that a function @math is a correlation amplifier with threshold @math , error @math , and strength @math an even positive integer if for all pairs of vectors @math it holds that (i) @math implies @math ; and (ii) @math implies @math .
Paturi, Rajasekaran, and Reif @cite_22 , Dubiner @cite_6 , and May and Ozerov @cite_20 present randomized algorithms that can be used to solve almost all instances of the light bulb problem in subquadratic time if we assume that @math is bounded from below by a positive constant; if @math tends to zero, these algorithms converge to quadratic running time in @math .
{ "cite_N": [ "@cite_20", "@cite_22", "@cite_6" ], "mid": [ "566315627", "2044809659", "2123485784" ], "abstract": [ "We propose a new decoding algorithm for random binary linear codes. The so-called information set decoding algorithm of Prange (1962) achieves worst-case complexity 2^(0.121n). In the late 80s, Stern proposed a sort-and-match version for Prange’s algorithm, on which all variants of the currently best known decoding algorithms are built. The fastest algorithm of Becker, Joux, May and Meurer (2012) achieves running time 2^(0.102n) in the full distance decoding setting and 2^(0.0494n) with half (bounded) distance decoding.", "In this paper, we consider the problem of correlational learning and present algorithms to determine correlated objects.", "The problem of finding high-dimensional approximate nearest neighbors is considered when the data is generated by some known probabilistic model. A large natural class of algorithms (bucketing codes) is investigated. Bucketing information is defined, and is proven to bound the performance of all bucketing codes. The bucketing information bound is asymptotically attained by some randomly constructed bucketing codes. The example of n Bernoulli(1/2) very long (length d → ∞) sequences of bits is singled out. It is assumed that n - 2m sequences are completely independent, while the remaining 2m sequences are composed of m dependent pairs. The interdependence within each pair is that their bits agree with probability 1 2 0. A specific 2-D inequality (proven in another paper) implies that the exponent 1/p cannot be lowered. Moreover, if one sequence out of each pair belongs to a known set of n^((2p−1)²) sequences, pairing can be done using order n^(1+ε) comparisons!" ] }
1606.05608
2436227985
We derandomize G. Valiant's [J. ACM 62 (2015) Art. 13] subquadratic-time algorithm for finding outlier correlations in binary data. Our derandomized algorithm gives deterministic subquadratic scaling essentially for the same parameter range as Valiant's randomized algorithm, but the precise constants we save over quadratic scaling are more modest. Our main technical tool for derandomization is an explicit family of correlation amplifiers built via a family of zigzag-product expanders in Reingold, Vadhan, and Wigderson [Ann. of Math. 155 (2002) 157--187]. We say that a function @math is a correlation amplifier with threshold @math , error @math , and strength @math an even positive integer if for all pairs of vectors @math it holds that (i) @math implies @math ; and (ii) @math implies @math .
G. Valiant @cite_27 showed that a randomized algorithm can identify the planted correlation in subquadratic time on almost all inputs even when @math tends to zero as @math increases. As a corollary of thm:algorithm , we can derandomize Valiant's design and still retain subquadratic running time (but with a worse constant) for almost all inputs, except for extremely weak planted correlations with @math , which our amplifier is in general unable to amplify at an output dimension low enough to keep the overall running time subquadratic.
{ "cite_N": [ "@cite_27" ], "mid": [ "2263882035" ], "abstract": [ "Given a set of n d-dimensional Boolean vectors with the promise that the vectors are chosen uniformly at random with the exception of two vectors that have Pearson correlation coefficient ρ (Hamming distance d·(1−ρ)/2), how quickly can one find the two correlated vectors? We present an algorithm which, for any constant ε>0 and constant ρ>0, runs in expected time O(n^((5−ω)/(4−ω)+ε) + nd). Applications and extensions of this basic algorithm yield significantly improved algorithms for several other problems. Approximate Closest Pair. For any sufficiently small constant ε>0, given n d-dimensional vectors, there exists an algorithm that returns a pair of vectors whose Euclidean (or Hamming) distance differs from that of the closest pair by a factor of at most 1+ε, and runs in time O(n^(2−Θ(√ε))). The best previous algorithms (including Locality Sensitive Hashing) have runtime O(n^(2−O(ε))). Learning Sparse Parities with Noise. Given samples from an instance of the learning parities with noise problem where each example has length n, the true parity set has size at most k « n, and the noise rate is η, there exists an algorithm that identifies the set of k indices in time n^((ω+ε)k/3) poly(1/(1−2η)), which improves upon the results of [2011] that give a runtime of n^((1+(2η)²+o(1))k/2) poly(1/(1−2η)). Learning k-Juntas with Noise. Given uniformly random length n Boolean vectors, together with a label, which is some function of just k « n of the bits, perturbed by noise rate η, return the set of relevant indices. Leveraging the reduction of [2009], our result for learning k-parities implies an algorithm for this problem with runtime n^((ω+ε)k/3) poly(1/(1−2η)). Learning k-Juntas without Noise. Given uniformly random length n Boolean vectors, together with a label, which is some function of k « n of the bits, return the set of relevant indices.
Using a modification of the algorithm of [2004], and employing our algorithm for learning sparse parities with noise via the reduction of [2009], we obtain an algorithm for this problem with runtime n^((ω+ε)k/4) poly(n)" ] }
1606.05608
2436227985
We derandomize G. Valiant's [J. ACM 62 (2015) Art. 13] subquadratic-time algorithm for finding outlier correlations in binary data. Our derandomized algorithm gives deterministic subquadratic scaling essentially for the same parameter range as Valiant's randomized algorithm, but the precise constants we save over quadratic scaling are more modest. Our main technical tool for derandomization is an explicit family of correlation amplifiers built via a family of zigzag-product expanders in Reingold, Vadhan, and Wigderson [Ann. of Math. 155 (2002) 157--187]. We say that a function @math is a correlation amplifier with threshold @math , error @math , and strength @math an even positive integer if for all pairs of vectors @math it holds that (i) @math implies @math ; and (ii) @math implies @math .
cor:lightbulb extends to parity functions of larger (constant) weight in the presence of noise (cf. @cite_24 @cite_15 @cite_27 ). This generalized version of the problem is as follows.
{ "cite_N": [ "@cite_24", "@cite_27", "@cite_15" ], "mid": [ "1915272284", "2263882035", "2268968630" ], "abstract": [ "We consider the problem of learning sparse parities in the presence of noise. For learning parities on r out of n variables, we give an algorithm that runs in time poly(log(1/δ), 1/(1−2η)) · n^((1+(2η)²+o(1))r/2) and uses only r log(n/δ) ω(1)/(1−2η)² samples in the random noise setting under the uniform distribution, where η is the noise rate and δ is the confidence parameter. From previously known results this algorithm also works for adversarial noise and generalizes to arbitrary distributions. Even though efficient algorithms for learning sparse parities in the presence of noise would have major implications to learning other hypothesis classes, our work is the first to give a bound better than the brute-force O(n^r). As a consequence, we obtain the first nontrivial bound for learning r-juntas in the presence of noise, and also a small improvement in the complexity of learning DNF, under the uniform distribution.", "Given a set of n d-dimensional Boolean vectors with the promise that the vectors are chosen uniformly at random with the exception of two vectors that have Pearson correlation coefficient ρ (Hamming distance d·(1−ρ)/2), how quickly can one find the two correlated vectors? We present an algorithm which, for any constant ε>0 and constant ρ>0, runs in expected time O(n^((5−ω)/(4−ω)+ε) + nd). Applications and extensions of this basic algorithm yield significantly improved algorithms for several other problems. Approximate Closest Pair. For any sufficiently small constant ε>0, given n d-dimensional vectors, there exists an algorithm that returns a pair of vectors whose Euclidean (or Hamming) distance differs from that of the closest pair by a factor of at most 1+ε, and runs in time O(n^(2−Θ(√ε))). The best previous algorithms (including Locality Sensitive Hashing) have runtime O(n^(2−O(ε))). Learning Sparse Parities with Noise.
Given samples from an instance of the learning parities with noise problem where each example has length n, the true parity set has size at most k « n, and the noise rate is η, there exists an algorithm that identifies the set of k indices in time n^((ω+ε)k/3) poly(1/(1−2η)), which improves upon the results of [2011] that give a runtime of n^((1+(2η)²+o(1))k/2) poly(1/(1−2η)). Learning k-Juntas with Noise. Given uniformly random length n Boolean vectors, together with a label, which is some function of just k « n of the bits, perturbed by noise rate η, return the set of relevant indices. Leveraging the reduction of [2009], our result for learning k-parities implies an algorithm for this problem with runtime n^((ω+ε)k/3) poly(1/(1−2η)). Learning k-Juntas without Noise. Given uniformly random length n Boolean vectors, together with a label, which is some function of k « n of the bits, return the set of relevant indices. Using a modification of the algorithm of [2004], and employing our algorithm for learning sparse parities with noise via the reduction of [2009], we obtain an algorithm for this problem with runtime n^((ω+ε)k/4) poly(n)", "We study the problem of detecting outlier pairs of strongly correlated variables among a collection of n variables with otherwise weak pairwise correlations. After normalization, this task amounts to the geometric task where we are given as input a set of n vectors with unit Euclidean norm and dimension d, and we are asked to find all the outlier pairs of vectors whose inner product is at least ρ in absolute value, subject to the promise that all but at most q pairs of vectors have inner product at most τ in absolute value for some constants 0 < τ < ρ < 1. Improving on an algorithm of G. Valiant [FOCS 2012; J. ACM 2015], we present a randomized algorithm that for Boolean inputs ({−1, 1}-valued data normalized to unit Euclidean length) runs in time [EQUATION] where 0 [EQUATION] and in time [EQUATION] where 2 ≤ ω
1606.05608
2436227985
We derandomize G. Valiant's [J. ACM 62 (2015) Art. 13] subquadratic-time algorithm for finding outlier correlations in binary data. Our derandomized algorithm gives deterministic subquadratic scaling essentially for the same parameter range as Valiant's randomized algorithm, but the precise constants we save over quadratic scaling are more modest. Our main technical tool for derandomization is an explicit family of correlation amplifiers built via a family of zigzag-product expanders in Reingold, Vadhan, and Wigderson [Ann. of Math. 155 (2002) 157--187]. We say that a function @math is a correlation amplifier with threshold @math , error @math , and strength @math an even positive integer if for all pairs of vectors @math it holds that (i) @math implies @math ; and (ii) @math implies @math .
With no information on @math , the trivial solution is to enumerate all @math subsets of @math to locate the support @math . Blum, Kalai, and Wasserman @cite_23 provide a non-trivial solution whose time and sample complexity are @math for any positive integers @math with @math ; this is @math when @math is a constant independent of @math . If we assume that @math is a constant independent of @math , the trivial complexity drops from exponential to @math , and non-trivial speed-ups seek to lower the coefficient @math of @math in the exponent. Randomized solutions for constant @math include Valiant's breakthrough algorithm @cite_27 and our subsequent randomized improvement @cite_15 , which runs in time @math for any constant @math .
{ "cite_N": [ "@cite_27", "@cite_23", "@cite_15" ], "mid": [ "2263882035", "2065151783", "2268968630" ], "abstract": [ "Given a set of n d-dimensional Boolean vectors with the promise that the vectors are chosen uniformly at random with the exception of two vectors that have Pearson correlation coefficient ρ (Hamming distance d·(1−ρ)/2), how quickly can one find the two correlated vectors? We present an algorithm which, for any constant ε>0 and constant ρ>0, runs in expected time O(n^((5−ω)/(4−ω)+ε) + nd). Applications and extensions of this basic algorithm yield significantly improved algorithms for several other problems. Approximate Closest Pair. For any sufficiently small constant ε>0, given n d-dimensional vectors, there exists an algorithm that returns a pair of vectors whose Euclidean (or Hamming) distance differs from that of the closest pair by a factor of at most 1+ε, and runs in time O(n^(2−Θ(√ε))). The best previous algorithms (including Locality Sensitive Hashing) have runtime O(n^(2−O(ε))). Learning Sparse Parities with Noise. Given samples from an instance of the learning parities with noise problem where each example has length n, the true parity set has size at most k « n, and the noise rate is η, there exists an algorithm that identifies the set of k indices in time n^((ω+ε)k/3) poly(1/(1−2η)), which improves upon the results of [2011] that give a runtime of n^((1+(2η)²+o(1))k/2) poly(1/(1−2η)). Learning k-Juntas with Noise. Given uniformly random length n Boolean vectors, together with a label, which is some function of just k « n of the bits, perturbed by noise rate η, return the set of relevant indices. Leveraging the reduction of [2009], our result for learning k-parities implies an algorithm for this problem with runtime n^((ω+ε)k/3) poly(1/(1−2η)). Learning k-Juntas without Noise. Given uniformly random length n Boolean vectors, together with a label, which is some function of k « n of the bits, return the set of relevant indices.
Using a modification of the algorithm of [2004], and employing our algorithm for learning sparse parities with noise via the reduction of [2009], we obtain an algorithm for this problem with runtime n^((ω+ε)k/4) poly(n)", "We describe a slightly subexponential time algorithm for learning parity functions in the presence of random classification noise, a problem closely related to several cryptographic and coding problems. Our algorithm runs in polynomial time for the case of parity functions that depend on only the first O(log n/log log n) bits of input, which provides the first known instance of an efficient noise-tolerant algorithm for a concept class that is not learnable in the Statistical Query model of Kearns [1998]. Thus, we demonstrate that the set of problems learnable in the statistical query model is a strict subset of those problems learnable in the presence of noise in the PAC model. In coding-theory terms, what we give is a poly(n)-time algorithm for decoding linear k × n codes in the presence of random noise for the case of k = c log n/log log n for some c > 0. (The case of k = O(log n) is trivial since one can just individually check each of the 2^k possible messages and choose the one that yields the closest codeword.) A natural extension of the statistical query model is to allow queries about statistical properties that involve t-tuples of examples, as opposed to just single examples. The second result of this article is to show that any class of functions learnable (strongly or weakly) with t-wise queries for t = O(log n) is also weakly learnable with standard unary queries. Hence, this natural extension to the statistical query model does not increase the set of weakly learnable functions.", "We study the problem of detecting outlier pairs of strongly correlated variables among a collection of n variables with otherwise weak pairwise correlations.
After normalization, this task amounts to the geometric task where we are given as input a set of n vectors with unit Euclidean norm and dimension d, and we are asked to find all the outlier pairs of vectors whose inner product is at least ρ in absolute value, subject to the promise that all but at most q pairs of vectors have inner product at most τ in absolute value for some constants 0 < τ < ρ < 1. Improving on an algorithm of G. Valiant [FOCS 2012; J. ACM 2015], we present a randomized algorithm that for Boolean inputs ({−1, 1}-valued data normalized to unit Euclidean length) runs in time [EQUATION] where 0 [EQUATION] and in time [EQUATION] where 2 ≤ ω" ] }
1606.05608
2436227985
We derandomize G. Valiant's [J. ACM 62 (2015) Art. 13] subquadratic-time algorithm for finding outlier correlations in binary data. Our derandomized algorithm gives deterministic subquadratic scaling essentially for the same parameter range as Valiant's randomized algorithm, but the precise constants we save over quadratic scaling are more modest. Our main technical tool for derandomization is an explicit family of correlation amplifiers built via a family of zigzag-product expanders in Reingold, Vadhan, and Wigderson [Ann. of Math. 155 (2002) 157--187]. We say that a function @math is a correlation amplifier with threshold @math , error @math , and strength @math an even positive integer if for all pairs of vectors @math it holds that (i) @math implies @math ; and (ii) @math implies @math .
Algorithms for learning parity functions enable extensions to further classes of Boolean functions such as sparse juntas and DNFs (cf. @cite_7 @cite_32 @cite_27 ).
{ "cite_N": [ "@cite_27", "@cite_32", "@cite_7" ], "mid": [ "2263882035", "2063003848", "" ], "abstract": [ "Given a set of n d-dimensional Boolean vectors with the promise that the vectors are chosen uniformly at random with the exception of two vectors that have Pearson correlation coefficient ρ (Hamming distance d·(1−ρ)/2), how quickly can one find the two correlated vectors? We present an algorithm which, for any constant ε>0 and constant ρ>0, runs in expected time O(n^((5−ω)/(4−ω)+ε) + nd). Applications and extensions of this basic algorithm yield significantly improved algorithms for several other problems. Approximate Closest Pair. For any sufficiently small constant ε>0, given n d-dimensional vectors, there exists an algorithm that returns a pair of vectors whose Euclidean (or Hamming) distance differs from that of the closest pair by a factor of at most 1+ε, and runs in time O(n^(2−Θ(√ε))). The best previous algorithms (including Locality Sensitive Hashing) have runtime O(n^(2−O(ε))). Learning Sparse Parities with Noise. Given samples from an instance of the learning parities with noise problem where each example has length n, the true parity set has size at most k « n, and the noise rate is η, there exists an algorithm that identifies the set of k indices in time n^((ω+ε)k/3) poly(1/(1−2η)), which improves upon the results of [2011] that give a runtime of n^((1+(2η)²+o(1))k/2) poly(1/(1−2η)). Learning k-Juntas with Noise. Given uniformly random length n Boolean vectors, together with a label, which is some function of just k « n of the bits, perturbed by noise rate η, return the set of relevant indices. Leveraging the reduction of [2009], our result for learning k-parities implies an algorithm for this problem with runtime n^((ω+ε)k/3) poly(1/(1−2η)). Learning k-Juntas without Noise. Given uniformly random length n Boolean vectors, together with a label, which is some function of k « n of the bits, return the set of relevant indices.
Using a modification of the algorithm of [2004], and employing our algorithm for learning sparse parities with noise via the reduction of [2009], we obtain an algorithm for this problem with runtime nωp ef4 k poly(n)", "We consider a fundamental problem in computational learning theory: learning an arbitrary Boolean function that depends on an unknown set of k out of n Boolean variables. We give an algorithm for learning such functions from uniform random examples that runs in time roughly (nk)ω ω+1, where ω < 2.376 is the matrix multiplication exponent. We thus obtain the first-polynomial factor improvement on the naive nk time bound which can be achieved via exhaustive search. Our algorithm and analysis exploit new structural properties of Boolean functions.", "" ] }
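The naive n^k exhaustive-search baseline that these sparse-parity results improve upon can be sketched in a few lines for the noiseless case; the function name and the toy data are illustrative only, not from the cited papers:

```python
from itertools import combinations, product

def learn_sparse_parity(samples, k):
    """Naive n^k baseline: try every k-subset of indices and return the
    first parity function consistent with all (x, y) samples (noiseless)."""
    n = len(samples[0][0])
    for subset in combinations(range(n), k):
        if all(sum(x[i] for i in subset) % 2 == y for x, y in samples):
            return subset
    return None

# All 2^5 labeled examples whose label is the parity of bits 1 and 3.
samples = [(x, (x[1] + x[3]) % 2) for x in product([0, 1], repeat=5)]
```

With every length-5 input present, only the true index set is consistent, so `learn_sparse_parity(samples, 2)` recovers `(1, 3)`; the improved algorithms in the abstracts above replace this subset enumeration with fast matrix multiplication to beat the n^k cost.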
1606.05282
2439620876
Caching at mobile devices can facilitate device-to-device (D2D) communications, which may significantly improve spectrum efficiency and alleviate the heavy burden on backhaul links. However, most previous works ignored user mobility, thus having limited practical applications. In this paper, we exploit the user mobility pattern, characterized by the inter-contact times between different users, and propose a mobility-aware caching placement strategy to maximize the data offloading ratio, which is defined as the percentage of the requested data that can be delivered via D2D links rather than through base stations (BSs). Given the NP-hard caching placement problem, we first propose an optimal dynamic programming (DP) algorithm to obtain a performance benchmark, with much lower complexity than exhaustive search. We then prove that the problem falls in the category of monotone submodular maximization over a matroid constraint, and propose a time-efficient greedy algorithm, which achieves an approximation ratio of 1/2. Simulation results with real-life data sets validate the effectiveness of the proposed mobility-aware caching placement strategy. We observe that users moving at either a very low or very high speed should cache the most popular files, while users moving at a medium speed should cache less popular files to avoid duplication.
Compared with caching at BSs @cite_11 @cite_35 @cite_47 @cite_28 @cite_2 @cite_32 @cite_26 @cite_49 , caching at mobile devices offers new features and unique advantages. First, the aggregate caching capacity grows with the number of devices in the network, so the benefit of device caching improves as the network size increases @cite_37 . Second, by caching popular content at devices, mobile users may acquire the files they request from nearby user devices via D2D communications, rather than through the BS @cite_5 @cite_39 . This significantly reduces the mobile traffic on the backbone network and alleviates the heavy burden on the backhaul links. However, device caching also faces new challenges. Specifically, not only the users but also the caching helpers move over time, which brings additional difficulties to the caching design.
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_26", "@cite_28", "@cite_32", "@cite_39", "@cite_49", "@cite_2", "@cite_5", "@cite_47", "@cite_11" ], "mid": [ "1972738071", "1611674276", "1960633731", "2122186362", "2490965583", "2068705434", "2038519345", "2008076367", "", "2949071729", "2952684528" ], "abstract": [ "Video on-demand streaming from Internet-based servers is becoming one of the most important services offered by wireless networks today. In order to improve the area spectral efficiency of video transmission in cellular systems, small cells heterogeneous architectures (e.g., femtocells, WiFi off-loading) are being proposed, such that video traffic to nomadic users can be handled by short-range links to the nearest small cell access points (referred to as “helpers”). As the helper deployment density increases, the backhaul capacity becomes the system bottleneck. In order to alleviate such bottleneck we propose a system where helpers with low-rate backhaul but high storage capacity cache popular video files. Files not available from helpers are transmitted by the cellular base station. We analyze the optimum way of assigning files to the helpers, in order to minimize the expected downloading time for files. We distinguish between the uncoded case (where only complete files are stored) and the coded case, where segments of Fountain-encoded versions of the video files are stored at helpers. We show that the uncoded optimum file assignment is NP-hard, and develop a greedy strategy that is provably within a factor 2 of the optimum. Further, for a special case we provide an efficient algorithm achieving a provably better approximation ratio of 1-(1-1 d )d, where d is the maximum number of helpers a user can be connected to. We also show that the coded optimum cache assignment problem is convex that can be further reduced to a linear program. 
We present numerical results comparing the proposed schemes.", "As wireless video is the fastest growing form of data traffic, methods for spectrally efficient on-demand wireless video streaming are essential to both service providers and users. A key property of video on-demand is the asynchronous content reuse , such that a few popular files account for a large part of the traffic but are viewed by users at different times. Caching of content on wireless devices in conjunction with device-to-device (D2D) communications allows to exploit this property, and provide a network throughput that is significantly in excess of both the conventional approach of unicasting from cellular base stations and the traditional D2D networks for “regular” data traffic. This paper presents in a tutorial and concise form some recent results on the throughput scaling laws of wireless networks with caching and asynchronous content reuse, contrasting the D2D approach with other alternative approaches such as conventional unicasting, harmonic broadcasting , and a novel coded multicasting approach based on caching in the user devices and network-coded transmission from the cellular base station only. Somehow surprisingly, the D2D scheme with spatial reuse and simple decentralized random caching achieves the same near-optimal throughput scaling law as coded multicasting. Both schemes achieve an unbounded throughput gain (in terms of scaling law) with respect to conventional unicasting and harmonic broadcasting, in the relevant regime where the number of video files in the library is smaller than the total size of the distributed cache capacity in the network. To better understand the relative merits of these competing approaches, we consider a holistic D2D system design incorporating traditional microwave (2 GHz) and millimeter-wave (mm-wave) D2D links; the direct connections to the base station can be used to provide those rare video requests that cannot be found in local caches. 
We provide extensive simulation results under a variety of system settings and compare our scheme with the systems that exploit transmission from the base station only. We show that, also in realistic conditions and nonasymptotic regimes, the proposed D2D approach offers very significant throughput gains.", "Heterogeneous cellular networks (HCNs) with embedded small cells are considered, where multiple mobile users wish to download network content of different popularity. By caching data into the small-cell base stations, we will design distributed caching optimization algorithms via belief propagation (BP) for minimizing the downloading latency. First, we derive the delay-minimization objective function and formulate an optimization problem. Then, we develop a framework for modeling the underlying HCN topology with the aid of a factor graph. Furthermore, a distributed BP algorithm is proposed based on the network's factor graph. Next, we prove that a fixed point of convergence exists for our distributed BP algorithm. In order to reduce the complexity of the BP, we propose a heuristic BP algorithm. Furthermore, we evaluate the average downloading performance of our HCN for different numbers and locations of the base stations and mobile users, with the aid of stochastic geometry theory. By modeling the nodes distributions using a Poisson point process, we develop the expressions of the average factor graph degree distribution, as well as an upper bound of the outage probability for random caching schemes. We also improve the performance of random caching. 
Our simulations show that 1) the proposed distributed BP algorithm has a near-optimal delay performance, approaching that of the high-complexity exhaustive search method; 2) the modified BP offers a good delay performance at low communication complexity; 3) both the average degree distribution and the outage upper bound analysis relying on stochastic geometry match well with our Monte-Carlo simulations; and 4) the optimization based on the upper bound provides both a better outage and a better delay performance than the benchmarks.", "We consider the problem of caching in next generation mobile cellular networks where small base stations (SBSs) are able to store their users' content and serve them accordingly. The SBSs are stochastically distributed over the plane and serve their users either from the local cache or internet via limited backhaul, depending on the availability of requested content. We model and characterize the outage probability and average content delivery rate as a function of the signal-to-interference-ratio (SINR), base station intensity, target file bitrate, storage size and file popularity. Our results provide key insights into the problem of cache-enabled small cell networks.", "Caching popular contents at the edge of wireless networks has recently emerged as a promising technology to improve the quality of service for mobile users, while balancing the peak-to-average transmissions over backhaul links. In contrast to existing works, where a central coordinator is required to design the cache placement strategy, we consider a distributed caching problem which is highly relevant in dense network settings. In the considered scenario, each Base Station (BS) has a cache storage of finite capacity, and each user will be served by one or multiple BSs depending on the employed transmission scheme. 
A belief propagation based distributed algorithm is proposed to solve the cache placement problem, where the parallel computations are performed by individual BSs based on limited local information and very few messages passed between neighboring BSs. Thus, no central coordinator is required to collect the information of the whole network, which significantly saves signaling overhead. Simulation results show that the proposed low-complexity distributed algorithm can greatly reduce the average download delay by collaborative caching and transmissions.", "In this paper, we study the performance of Device-to-Device (D2D) communications with dynamic interference. In specific, we analyze the performance of frequency reuse among D2D links with dynamic data arrival setting. We first consider the arrival and departure processes of packets in a non-saturated buffer, which result in varying interference on a link based on the change of its backlogged state. The packet-level system behavior is then represented by a coupled processor queuing model, where the service rate varies with time due to both the fast fading and the dynamic interference effects. In order to analyze the queuing model, we formulate it as a Discrete Time Markov Chain (DTMC) and compute its steady-state distribution. Since the state space of the DTMC grows exponentially with the number of D2D links, we use the model decomposition and some iteration techniques in Stochastic Petri Nets (SPNs) to derive its approximate steady state solution, which is used to obtain the approximate performance metrics of the D2D communications in terms of average queue length, mean throughput, average packet delay and packet dropping probability of each link. 
Simulations are performed to verify the analytical results under different traffic loads and interference conditions.", "We propose a novel MIMO cooperation framework called the cache-induced opportunistic cooperative MIMO (Coop-MIMO) for video streaming in backhaul limited multicell MIMO networks. By caching a portion of the video files, the base stations (BSs) opportunistically employ Coop-MIMO transmission to achieve MIMO cooperation gain without expensive payload backhaul. We derive closed form expressions of various video streaming performance metrics for cache-induced opportunistic Coop-MIMO and investigate the impact of BS level caching and key system parameters on the performance. Specifically, we first obtain a mixed fluid-diffusion limit for the playback buffer queueing system. Then we derive the approximated video streaming performance using the mixed fluid-diffusion limit. Based on the performance analysis, we formulate the joint optimization of cache control and playback buffer management as a stochastic optimization problem. Then we derive a closed form solution for the playback buffer thresholds and develop a stochastic subgradient algorithm to find the optimal cache control. The analysis shows that the video streaming performance improves linearly with the BS cache size BC, the transmit power cost decreases exponentially with BC, and the backhaul cost decreases linearly with BC.", "We suggest a novel approach to handle the ongoing explosive increase in the demand for video content in mobile devices. We envision femtocell-like base stations, which we call helpers, with weak backhaul links but large storage capabilities. These helpers form a wireless distributed caching network that assists the macro base station by handling requests of popular files that have been cached. We formalize the wireless distributed caching optimization problem for the case that files are encoded using fountain MDS codes. We express the problem as a convex optimization. 
By adding additional variables we reduce it to a linear program. On the practical side, we present a detailed simulation of a university campus scenario covered by a single 3GPP LTE R8 cell and several helper nodes using a simplified 802.11n protocol. We use a real campus trace of video requests and show how distributed caching can increase the number of served users by as much as 600–700%.", "", "Video on-demand streaming from Internet-based servers is becoming one of the most important services offered by wireless networks today. In order to improve the area spectral efficiency of video transmission in cellular systems, small cells heterogeneous architectures (e.g., femtocells, WiFi off-loading) are being proposed, such that video traffic to nomadic users can be handled by short-range links to the nearest small cell access points (referred to as \"helpers\"). As the helper deployment density increases, the backhaul capacity becomes the system bottleneck. In order to alleviate such bottleneck we propose a system where helpers with low-rate backhaul but high storage capacity cache popular video files. Files not available from helpers are transmitted by the cellular base station. We analyze the optimum way of assigning files to the helpers, in order to minimize the expected downloading time for files. We distinguish between the uncoded case (where only complete files are stored) and the coded case, where segments of Fountain-encoded versions of the video files are stored at helpers. We show that the uncoded optimum file assignment is NP-hard, and develop a greedy strategy that is provably within a factor 2 of the optimum. Further, for a special case we provide an efficient algorithm achieving a provably better approximation ratio of @math , where @math is the maximum number of helpers a user can be connected to. We also show that the coded optimum cache assignment problem is convex and can be further reduced to a linear program. 
We present numerical results comparing the proposed schemes.", "We consider the problem of storing segments of encoded versions of content files in a set of base stations located in a communication cell. These base stations work in conjunction with the main base station of the cell. Users move randomly across the space based on a discrete-time Markov chain model. At each time slot each user accesses a single base station based on its current position and can download only a part of the content stored in it, depending on the time slot duration. We assume that file requests must be satisfied within a given time deadline in order to be successful. If the amount of (encoded) data downloaded from the accessed base stations by the time the deadline expires does not suffice to recover the requested file, the main base station of the cell serves the request. Our aim is to find the storage allocation that minimizes the probability of using the main base station for file delivery. This problem is intractable in general. However, we show that the optimal solution of the problem can be efficiently attained in the case that the time deadline is small. To tackle the general case, we propose a distributed approximation algorithm based on large deviation inequalities. Systematic experiments on a real world data set demonstrate the effectiveness of our proposed algorithms. Index Terms: Mobility-aware Caching, Markov Chain, MDS Coding, Small-cell Networks" ] }
1606.05282
2439620876
Caching at mobile devices can facilitate device-to-device (D2D) communications, which may significantly improve spectrum efficiency and alleviate the heavy burden on backhaul links. However, most previous works ignored user mobility, thus having limited practical applications. In this paper, we exploit the user mobility pattern, characterized by the inter-contact times between different users, and propose a mobility-aware caching placement strategy to maximize the data offloading ratio, which is defined as the percentage of the requested data that can be delivered via D2D links rather than through base stations (BSs). Given the NP-hard caching placement problem, we first propose an optimal dynamic programming (DP) algorithm to obtain a performance benchmark, with much lower complexity than exhaustive search. We then prove that the problem falls in the category of monotone submodular maximization over a matroid constraint, and propose a time-efficient greedy algorithm, which achieves an approximation ratio of 1/2. Simulation results with real-life data sets validate the effectiveness of the proposed mobility-aware caching placement strategy. We observe that users moving at either a very low or very high speed should cache the most popular files, while users moving at a medium speed should cache less popular files to avoid duplication.
There have been many efforts on D2D caching networks assuming fixed network topologies. Considering a single-cell scenario, it was shown in @cite_37 that D2D caching outperforms other schemes, including the conventional unicasting scheme, the harmonic broadcasting scheme, and the coded multicast scheme. Assuming that each device can store one file, Golrezaei et al. analyzed the scaling behavior of the number of active links in a D2D caching network as the number of mobile devices increases @cite_3 . It was found that the concentration of the file request distribution affects the scaling laws, and three concentration regimes were identified. In @cite_31 @cite_40 , the outage-throughput tradeoff in D2D caching networks was investigated, and optimal scaling laws of per-user throughput were derived, as the numbers of mobile devices and files in the library grow, under a simple uncoded protocol. Meanwhile, the case using the coded delivery scheme @cite_4 was investigated in @cite_1 .
{ "cite_N": [ "@cite_37", "@cite_4", "@cite_1", "@cite_3", "@cite_40", "@cite_31" ], "mid": [ "1611674276", "2106248279", "1606035500", "1964721119", "2964101002", "2951849153" ], "abstract": [ "As wireless video is the fastest growing form of data traffic, methods for spectrally efficient on-demand wireless video streaming are essential to both service providers and users. A key property of video on-demand is the asynchronous content reuse , such that a few popular files account for a large part of the traffic but are viewed by users at different times. Caching of content on wireless devices in conjunction with device-to-device (D2D) communications allows to exploit this property, and provide a network throughput that is significantly in excess of both the conventional approach of unicasting from cellular base stations and the traditional D2D networks for “regular” data traffic. This paper presents in a tutorial and concise form some recent results on the throughput scaling laws of wireless networks with caching and asynchronous content reuse, contrasting the D2D approach with other alternative approaches such as conventional unicasting, harmonic broadcasting , and a novel coded multicasting approach based on caching in the user devices and network-coded transmission from the cellular base station only. Somehow surprisingly, the D2D scheme with spatial reuse and simple decentralized random caching achieves the same near-optimal throughput scaling law as coded multicasting. Both schemes achieve an unbounded throughput gain (in terms of scaling law) with respect to conventional unicasting and harmonic broadcasting, in the relevant regime where the number of video files in the library is smaller than the total size of the distributed cache capacity in the network. 
To better understand the relative merits of these competing approaches, we consider a holistic D2D system design incorporating traditional microwave (2 GHz) and millimeter-wave (mm-wave) D2D links; the direct connections to the base station can be used to provide those rare video requests that cannot be found in local caches. We provide extensive simulation results under a variety of system settings and compare our scheme with the systems that exploit transmission from the base station only. We show that, also in realistic conditions and nonasymptotic regimes, the proposed D2D approach offers very significant throughput gains.", "Caching is a technique to reduce peak traffic rates by prefetching popular content into memories at the end users. Conventionally, these memories are used to deliver requested content in part from a locally cached copy rather than through the network. The gain offered by this approach, which we term local caching gain, depends on the local cache size (i.e., the memory available at each individual user). In this paper, we introduce and exploit a second, global, caching gain not utilized by conventional caching schemes. This gain depends on the aggregate global cache size (i.e., the cumulative memory available at all users), even though there is no cooperation among the users. To evaluate and isolate these two gains, we introduce an information-theoretic formulation of the caching problem focusing on its basic structure. For this setting, we propose a novel coded caching scheme that exploits both local and global caching gains, leading to a multiplicative improvement in the peak rate compared with previously known schemes. In particular, the improvement can be on the order of the number of users in the network. 
In addition, we argue that the performance of the proposed scheme is within a constant factor of the information-theoretic optimum for all values of the problem parameters.", "We consider a wireless device-to-device (D2D) network where communication is restricted to be single-hop. Users make arbitrary requests from a finite library of files and have pre-cached information on their devices, subject to a per-node storage capacity constraint. A similar problem has already been considered in an infrastructure setting, where all users receive a common multicast (coded) message from a single omniscient server (e.g., a base station having all the files in the library) through a shared bottleneck link. In this paper, we consider a D2D infrastructureless version of the problem. We propose a caching strategy based on deterministic assignment of subpackets of the library files, and a coded delivery strategy where the users send linearly coded messages to each other in order to collectively satisfy their demands. We also consider a random caching strategy, which is more suitable to a fully decentralized implementation. Under certain conditions, both approaches can achieve the information theoretic outer bound within a constant multiplicative factor. In our previous work, we showed that a caching D2D wireless network with one-hop communication, random caching, and uncoded delivery (direct file transmissions) achieves the same throughput scaling law of the infrastructure-based coded multicasting scheme, in the regime of large number of users and files in the library. This shows that the spatial reuse gain of the D2D network is order-equivalent to the coded multicasting gain of single base station transmission. It is, therefore, natural to ask whether these two gains are cumulative, i.e., if a D2D network with both local communication (spatial reuse) and coded multicasting can provide an improved scaling law. 
Somewhat counterintuitively, we show that these gains do not cumulate (in terms of throughput scaling law). This fact can be explained by noticing that the coded delivery scheme creates messages that are useful to multiple nodes, such that it benefits from broadcasting to as many nodes as possible, while spatial reuse capitalizes on the fact that the communication is local, such that the same time slot can be reused in space across the network. Unfortunately, these two issues are in contrast with each other.", "We analyze a novel architecture for caching popular video content to enable wireless device-to-device (D2D) collaboration. We focus on the asymptotic scaling characteristics and show how they depend on video content popularity statistics. We identify a fundamental conflict between collaboration distance and interference and show how to optimize the transmission power to maximize frequency reuse. Our main result is a closed form expression of the optimal collaboration distance as a function of the model parameters. Under the common assumption of a Zipf distribution for content reuse, we show that if the Zipf exponent is greater than 1, it is possible to have a number of D2D interference-free collaboration pairs that scales linearly in the number of nodes. If the Zipf exponent is smaller than 1, we identify the best possible scaling in the number of D2D collaborating links. Surprisingly, a very simple distributed caching policy achieves the optimal scaling behavior.", "We consider a wireless device-to-device (D2D) network where the nodes have precached information from a library of available files. Nodes request files at random. If the requested file is not in the on-board cache, then it is downloaded from some neighboring node via one-hop local communication. An outage event occurs when a requested file is not found in the neighborhood of the requesting node, or if the network admission control policy decides not to serve the request. 
We characterize the optimal throughput-outage tradeoff in terms of tight scaling laws for various regimes of the system parameters, when both the number of nodes and the number of files in the library grow to infinity. Our analysis is based on the Gupta and Kumar protocol model for the underlying D2D wireless network, widely used in the literature on capacity scaling laws of wireless networks without caching. Our results show that the combination of D2D spectrum reuse and caching at the user nodes yields a per-user throughput independent of the number of users, for any fixed outage probability in (0, 1). This implies that the D2D caching network is scalable: even though the number of users increases, each user achieves constant throughput. This behavior is very different from the classical Gupta and Kumar result on ad hoc wireless networks, for which the per-user throughput vanishes as the number of users increases. Furthermore, we show that the user throughput is directly proportional to the fraction of cached information over the whole file library size. Therefore, we can conclude that D2D caching networks can turn memory into bandwidth (i.e., doubling the on-board cache memory on the user devices yields a 100% increase of the user throughput).", "We consider a wireless device-to-device (D2D) network where the nodes have cached information from a library of possible files. Inspired by the current trend in the standardization of the D2D mode for 4th generation wireless networks, we restrict to one-hop communication: each node places a request for a file in the library, and downloads from some other node which has the requested file in its cache through a direct communication link, without going through a base station. We describe the physical layer communication through a simple \"protocol-model\", based on interference avoidance (independent set scheduling). 
For this network we define the outage-throughput tradeoff problem and characterize the optimal scaling laws for various regimes where both the number of nodes and the files in the library grow to infinity." ] }
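As a toy illustration of the greedy approach for monotone submodular maximization over a matroid constraint described in the abstract above, the sketch below maximizes a coverage-style offloading objective under a per-user cache-size (partition-matroid) constraint. The objective, the function names, and the example data are illustrative assumptions, not the paper's actual system model:

```python
def greedy_cache_placement(users, files, popularity, capacity):
    """Greedy 1/2-approximation sketch for monotone submodular maximization
    under a partition matroid: each user caches at most `capacity` files."""
    def value(placement):
        # Coverage objective: a file contributes its popularity once it is
        # cached by at least one user (monotone and submodular).
        cached = {f for _, f in placement}
        return sum(popularity[f] for f in cached)

    placement, load = set(), {u: 0 for u in users}
    while True:
        best, best_gain = None, 0.0
        for u in users:
            if load[u] >= capacity:  # partition-matroid (cache-size) constraint
                continue
            for f in files:
                if (u, f) in placement:
                    continue
                gain = value(placement | {(u, f)}) - value(placement)
                if gain > best_gain:
                    best, best_gain = (u, f), gain
        if best is None:  # no feasible element adds value
            break
        placement.add(best)
        load[best[0]] += 1
    return placement, value(placement)
```

With two users, unit cache size, and popularities {f1: 0.5, f2: 0.3, f3: 0.2}, the greedy spreads the two most popular files across the two users and offloads 0.8 of the demand, echoing the duplication-avoidance behavior noted in the abstract.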
1606.05312
2440926996
Transfer in reinforcement learning refers to the notion that generalization should occur not only within a task but also across tasks. We propose a transfer framework for the scenario where the reward function changes between tasks but the environment's dynamics remain the same. Our approach rests on two key ideas: "successor features", a value function representation that decouples the dynamics of the environment from the rewards, and "generalized policy improvement", a generalization of dynamic programming's policy improvement operation that considers a set of policies rather than a single one. Put together, the two ideas lead to an approach that integrates seamlessly within the reinforcement learning framework and allows the free exchange of information across tasks. The proposed method also provides performance guarantees for the transferred policy even before any learning has taken place. We derive two theorems that set our approach in firm theoretical ground and present experiments that show that it successfully promotes transfer in practice, significantly outperforming alternative methods in a sequence of navigation tasks and in the control of a simulated robotic arm.
Although the methods above decouple the construction of features from the actual RL problem, it is also possible to tackle both problems concomitantly, using general nonlinear function approximators to incrementally learn @math @cite_1 . Another interesting possibility is the definition of a clear protocol to also learn @math , which is closely related to the problem known as ``multi-task feature learning'' argyriou2008convex . Here again the use of nonlinear approximators may be useful, since with them it may be possible to embed an arbitrary family of MDPs into a model with the structure shown in ) mikolov2013efficient .
{ "cite_N": [ "@cite_1" ], "mid": [ "1822705290" ], "abstract": [ "Learning capabilities of computer systems still lag far behind biological systems. One of the reasons can be seen in the inefficient re-use of control knowledge acquired over the lifetime of the artificial learning system. To address this deficiency, this paper presents a learning architecture which transfers control knowledge in the form of behavioral skills and corresponding representation concepts from one task to subsequent learning tasks. The presented system uses this knowledge to construct a more compact state space representation for learning while assuring bounded optimality of the learned task policy by utilizing a representation hierarchy. Experimental results show that the presented method can significantly outperform learning on a flat state space representation and the MAXQ method for hierarchical reinforcement learning." ] }
1606.05029
2740883175
First-order factoid question answering assumes that the question can be answered by a single fact in a knowledge base (KB). While this does not seem like a challenging task, many recent attempts that apply either complex linguistic reasoning or deep neural networks achieve 65%-76% accuracy on benchmark sets. Our approach formulates the task as two machine learning problems: detecting the entities in the question, and classifying the question as one of the relation types in the KB. We train a recurrent neural network to solve each problem. On the SimpleQuestions dataset, our approach yields substantial improvements over previously published results --- even neural networks based on much more complex architectures. The simplicity of our approach also has practical advantages, such as efficiency and modularity, that are valuable especially in an industry setting. In fact, we present a preliminary analysis of the performance of our model on real queries from Comcast's X1 entertainment platform with millions of users every day.
If knowledge is presented in a structured form (e.g., a knowledge base (KB)), the standard approach to QA is to transform the question and knowledge into a compatible form, and perform reasoning to determine which fact in the KB answers a given question. Examples of this approach include pattern-based question analyzers @cite_18 , combinations of syntactic parsing and semantic role labeling @cite_8 @cite_20 , as well as lambda calculus @cite_16 and combinatory categorial grammars (CCG) @cite_15 . A downside of these approaches is their reliance on linguistic resources and heuristics, making them language- and/or domain-specific. Even though some of these works claim that their approach requires less supervision than prior work, it still relies on many English-specific heuristics and hand-crafted features. Also, their most accurate model uses a corpus of paraphrases to generalize to linguistic diversity. Linguistic parsers can also be too slow for real-time applications.
{ "cite_N": [ "@cite_18", "@cite_8", "@cite_15", "@cite_16", "@cite_20" ], "mid": [ "1997642925", "2025910815", "2126170172", "2252136820", "1975073368" ], "abstract": [ "In this paper, we present a Question Answering system based on redundancy and a Passage Retrieval method that is specifically oriented to Question Answering. We suppose that in a large enough document collection the answer to a given question may appear in several different forms. Therefore, it is possible to find one or more sentences that contain the answer and that also include tokens from the original question. The Passage Retrieval engine is almost language-independent since it is based on n-gram structures. Question classification and answer extraction modules are based on shallow patterns.", "Bag-of-words retrieval is popular among Question Answering (QA) system developers, but it does not support constraint checking and ranking on the linguistic and semantic information of interest to the QA system. We present anapproach to retrieval for QA, applying structured retrieval techniques to the types of text annotations that QA systems use. We demonstrate that the structured approach can retrieve more relevant results, more highly ranked, compared with bag-of-words, on a sentence retrieval task. We also characterize the extent to which structured retrieval effectiveness depends on the quality of the annotations.", "In this paper we introduce a novel semantic parsing approach to query Freebase in natural language without requiring manual annotations or question-answer pairs. Our key insight is to represent natural language via semantic graphs whose topology shares many commonalities with Freebase. Given this representation, we conceptualize semantic parsing as a graph matching problem. Our model converts sentences to semantic graphs using CCG and subsequently grounds them to Freebase guided by denotations as a form of weak supervision. 
Evaluation experiments on a subset of the Free917 and WebQuestions benchmark datasets show our semantic parser improves over the state of the art.", "In this paper, we train a semantic parser that scales up to Freebase. Instead of relying on annotated logical forms, which is especially expensive to obtain at large scale, we learn from question-answer pairs. The main challenge in this setting is narrowing down the huge number of possible logical predicates for a given question. We tackle this problem in two ways: First, we build a coarse mapping from phrases to predicates using a knowledge base and a large text corpus. Second, we use a bridging operation to generate additional predicates based on neighboring predicates. On the dataset of Cai and Yates (2013), despite not having annotated logical forms, our system outperforms their state-of-the-art parser. Additionally, we collected a more realistic and challenging dataset of question-answer pairs and improves over a natural baseline.", "This work presents a general rank-learning framework for passage ranking within Question Answering (QA) systems using linguistic and semantic features. The framework enables query-time checking of complex linguistic and semantic constraints over keywords. Constraints are composed of a mixture of keyword and named entity features, as well as features derived from semantic role labeling. The framework supports the checking of constraints of arbitrary length relating any number of keywords. We show that a trained ranking model using this rich feature set achieves greater than a 20% improvement in Mean Average Precision over baseline keyword retrieval models. We also show that constraints based on semantic role labeling features are particularly effective for passage retrieval; when they can be leveraged, a 40% improvement in MAP over the baseline can be realized." ] }
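The pattern-based question analyzers mentioned in the related work above (@cite_18) can be caricatured in a few lines: hand-written patterns map a question directly to an (entity, relation) pair. The patterns and relation names below are invented for illustration; the hard-coded English templates also make concrete why such systems end up language- and domain-specific.

```python
import re

# Hypothetical patterns; real systems use large hand-crafted inventories,
# which is precisely what ties them to one language and domain.
PATTERNS = [
    (re.compile(r"who (?:wrote|authored) (.+?)\??$", re.I), "author_of"),
    (re.compile(r"where was (.+?) born\??$", re.I), "place_of_birth"),
]

def analyze(question):
    """Return (entity_text, relation) or None if no pattern matches."""
    for pattern, relation in PATTERNS:
        m = pattern.match(question.strip())
        if m:
            return m.group(1), relation
    return None

print(analyze("Who wrote Hamlet?"))             # ('Hamlet', 'author_of')
print(analyze("Where was Ada Lovelace born?"))  # ('Ada Lovelace', 'place_of_birth')
```

Any question outside the template inventory simply fails to parse, which is the coverage problem that the learned approaches in the surrounding records try to address.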
1606.05029
2740883175
First-order factoid question answering assumes that the question can be answered by a single fact in a knowledge base (KB). While this does not seem like a challenging task, many recent attempts that apply either complex linguistic reasoning or deep neural networks achieve 65%-76% accuracy on benchmark sets. Our approach formulates the task as two machine learning problems: detecting the entities in the question, and classifying the question as one of the relation types in the KB. We train a recurrent neural network to solve each problem. On the SimpleQuestions dataset, our approach yields substantial improvements over previously published results --- even neural networks based on much more complex architectures. The simplicity of our approach also has practical advantages, such as efficiency and modularity, that are valuable especially in an industry setting. In fact, we present a preliminary analysis of the performance of our model on real queries from Comcast's X1 entertainment platform with millions of users every day.
In contrast, an RNN can detect entities in the question with high accuracy and low latency. The only required resources are word embeddings and a set of questions with entity words tagged. The former can be easily trained for any language domain in an unsupervised fashion, given a large text corpus without annotations @cite_1 @cite_24 . The latter is a relatively simple annotation task that exists for many languages and domains, and it can also be synthetically generated. Many researchers have explored similar techniques for general NLP tasks @cite_0 , such as named entity recognition @cite_10 @cite_3 , sequence labeling @cite_14 @cite_9 , part-of-speech tagging @cite_11 @cite_6 , chunking @cite_11 .
{ "cite_N": [ "@cite_14", "@cite_9", "@cite_1", "@cite_3", "@cite_6", "@cite_24", "@cite_0", "@cite_10", "@cite_11" ], "mid": [ "2144499799", "", "2950133940", "2042188227", "1788779403", "2250539671", "2158899491", "2950635152", "1940872118" ], "abstract": [ "Recurrent neural networks are powerful sequence learners. They are able to incorporate context information in a flexible way, and are robust to localised distortions of the input data. These properties make them well suited to sequence labelling, where input sequences are transcribed with streams of labels. The aim of this thesis is to advance the state-of-the-art in supervised sequence labelling with recurrent networks. Its two main contributions are (1) a new type of output layer that allows recurrent networks to be trained directly for sequence labelling tasks where the alignment between the inputs and the labels is unknown, and (2) an extension of the long short-term memory network architecture to multidimensional data, such as images and video sequences.", "", "The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". 
Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.", "In this approach to named entity recognition, a recurrent neural network, known as Long Short-Term Memory, is applied. The network is trained to perform 2 passes on each sentence, outputting its decisions on the second pass. The first pass is used to acquire information for disambiguation during the second pass. SARDNET, a self-organising map for sequences is used to generate representations for the lexical items presented to the LSTM network, whilst orthogonal representations are used to represent the part of speech and chunk tags.", "Bidirectional Long Short-Term Memory Recurrent Neural Network (BLSTM-RNN) has been shown to be very effective for tagging sequential data, e.g. speech utterances or handwritten documents. While word embedding has been demoed as a powerful representation for characterizing the statistical properties of natural language. In this study, we propose to use BLSTM-RNN with word embedding for part-of-speech (POS) tagging task. When tested on Penn Treebank WSJ test set, a state-of-the-art performance of 97.40% tagging accuracy is achieved. Without using morphological features, this approach can also achieve a good performance comparable with the Stanford POS tagger.", "Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. 
Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.", "We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements.", "In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder-Decoder as an additional feature in the existing log-linear model. 
Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases.", "In this paper, we propose a variety of Long Short-Term Memory (LSTM) based models for sequence tagging. These models include LSTM networks, bidirectional LSTM (BI-LSTM) networks, LSTM with a Conditional Random Field (CRF) layer (LSTM-CRF) and bidirectional LSTM with a CRF layer (BI-LSTM-CRF). Our work is the first to apply a bidirectional LSTM CRF (denoted as BI-LSTM-CRF) model to NLP benchmark sequence tagging data sets. We show that the BI-LSTM-CRF model can efficiently use both past and future input features thanks to a bidirectional LSTM component. It can also use sentence level tag information thanks to a CRF layer. The BI-LSTM-CRF model can produce state of the art (or close to) accuracy on POS, chunking and NER data sets. In addition, it is robust and has less dependence on word embedding as compared to previous observations." ] }
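The decomposition advocated in the abstract above, entity detection plus relation classification followed by a KB lookup, can be sketched end to end. Here a dictionary longest-match tagger and a keyword rule stand in for the paper's two RNNs, and the one-fact KB, the entity vocabulary, and the relation name are all invented for illustration.

```python
# Hypothetical one-fact KB; keys are (entity, relation) pairs.
KB = {("hamlet", "written_by"): "William Shakespeare"}

def detect_entity(tokens, entity_vocab):
    """Longest token span found in the vocab (stand-in for the RNN tagger)."""
    for length in range(len(tokens), 0, -1):
        for start in range(len(tokens) - length + 1):
            span = " ".join(tokens[start:start + length]).lower()
            if span in entity_vocab:
                return span
    return None

def classify_relation(question):
    """Keyword rule as a stand-in for the RNN relation classifier."""
    q = question.lower()
    if "wrote" in q or "written" in q:
        return "written_by"
    return None

def answer(question, entity_vocab):
    tokens = question.rstrip("?").split()
    entity = detect_entity(tokens, entity_vocab)
    relation = classify_relation(question)
    return KB.get((entity, relation))

print(answer("Who wrote Hamlet?", {"hamlet"}))  # William Shakespeare
```

The two subproblems are fully modular, which mirrors the practical advantage the abstract emphasizes: either component can be replaced (for example, by a trained tagger) without touching the other.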
1606.05029
2740883175
First-order factoid question answering assumes that the question can be answered by a single fact in a knowledge base (KB). While this does not seem like a challenging task, many recent attempts that apply either complex linguistic reasoning or deep neural networks achieve 65%-76% accuracy on benchmark sets. Our approach formulates the task as two machine learning problems: detecting the entities in the question, and classifying the question as one of the relation types in the KB. We train a recurrent neural network to solve each problem. On the SimpleQuestions dataset, our approach yields substantial improvements over previously published results --- even neural networks based on much more complex architectures. The simplicity of our approach also has practical advantages, such as efficiency and modularity, that are valuable especially in an industry setting. In fact, we present a preliminary analysis of the performance of our model on real queries from Comcast's X1 entertainment platform with millions of users every day.
Deep learning techniques have been studied extensively for constructing parallel neural networks that model a joint probability distribution over question-answer pairs @cite_7 @cite_22 @cite_13 @cite_4 and for re-ranking answers output by a retrieval engine @cite_2 @cite_17 . These more complex approaches might be needed for general-purpose QA and sentence similarity, where one cannot make assumptions about the structure of the input or knowledge. However, as noted earlier, first-order factoid questions can be represented by an entity and a relation type, and the answer is usually stored in a structured knowledge base. Some prior works similarly assume that the answer to a question is at most two hops away from the target entity; however, they do not propose how to obtain the target entity, since it is provided as part of their dataset. Others take advantage of the KB structure by projecting entities, relations, and subgraphs into the same latent space. In addition to finding the target entity, the other key information for first-order QA is the relation type corresponding to the question. Many researchers have shown that classifying the question into one of the pre-defined types (e.g., based on patterns @cite_5 or support vector machines @cite_18 ) improves QA accuracy.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_22", "@cite_7", "@cite_2", "@cite_5", "@cite_13", "@cite_17" ], "mid": [ "1997642925", "2508865106", "2251289180", "2303829361", "2144012961", "2250861254", "2251427843", "" ], "abstract": [ "In this paper, we present a Question Answering system based on redundancy and a Passage Retrieval method that is specifically oriented to Question Answering. We suppose that in a large enough document collection the answer to a given question may appear in several different forms. Therefore, it is possible to find one or more sentences that contain the answer and that also include tokens from the original question. The Passage Retrieval engine is almost language-independent since it is based on n-gram structures. Question classification and answer extraction modules are based on shallow patterns.", "We present a siamese adaptation of the Long Short-Term Memory (LSTM) network for labeled data comprised of pairs of variable-length sequences. Our model is applied to assess semantic similarity between sentences, where we exceed state of the art, outperforming carefully handcrafted features and recently proposed neural network systems of greater complexity. For these applications, we provide word-embedding vectors supplemented with synonymic information to the LSTMs, which use a fixed size vector to encode the underlying meaning expressed in a sentence (irrespective of the particular wording syntax). By restricting subsequent operations to rely on a simple Manhattan metric, we compel the sentence representations learned by our model to form a highly structured space whose geometry reflects complex semantic relationships. Our results are the latest in a line of findings that showcase LSTMs as powerful language models capable of tasks requiring intricate understanding.", "Transforming a natural language (NL) question into a corresponding logical form (LF) is central to the knowledge-based question answering (KB-QA) task. 
Unlike most previous methods that achieve this goal based on mappings between lexicalized phrases and logical predicates, this paper goes one step further and proposes a novel embedding-based approach that maps NL-questions into LFs for KBQA by leveraging semantic associations between lexical representations and KB properties in the latent space. Experimental results demonstrate that our proposed method outperforms three KB-QA baseline methods on two publicly released QA data sets.", "We apply a general recurrent neural network (RNN) encoder framework to community question answering (cQA) tasks. Our approach does not rely on any linguistic processing, and can be applied to different languages or domains. Further improvements are observed when we extend the RNN encoders with a neural attention mechanism that encourages reasoning over entire sequences. To deal with practical issues such as data sparsity and imbalanced labels, we apply various techniques such as transfer learning and multitask learning. Our experiments on the SemEval-2016 cQA task show a 10% improvement in MAP score compared to an information retrieval-based approach, and achieve comparable performance to a strong handcrafted feature-based method.", "Recurrent neural networks (RNNs) are connectionist models of sequential data that are naturally applicable to the analysis of natural language. Recently, “depth in space” — as an orthogonal notion to “depth in time” — in RNNs has been investigated by stacking multiple layers of RNNs and shown empirically to bring a temporal hierarchy to the architecture. In this work we apply these deep RNNs to the task of opinion expression extraction formulated as a token-level sequence-labeling task. Experimental results show that deep, narrow RNNs outperform traditional shallow, wide RNNs with the same number of parameters. 
Furthermore, our approach outperforms previous CRF-based baselines, including the state-of-the-art semi-Markov CRF model, and does so without access to the powerful opinion lexicons and syntactic features relied upon by the semi-CRF, as well as without the standard layer-by-layer pre-training typically required of RNN architectures.", "Almost all current dependency parsers classify based on millions of sparse indicator features. Not only do these features generalize poorly, but the cost of feature computation restricts parsing speed significantly. In this work, we propose a novel way of learning a neural network classifier for use in a greedy, transition-based dependency parser. Because this classifier learns and uses just a small number of dense features, it can work very fast, while achieving about a 2% improvement in unlabeled and labeled attachment scores on both English and Chinese datasets. Concretely, our parser is able to parse more than 1000 sentences per second at 92.2% unlabeled attachment score on the English Penn Treebank.", "Modeling sentence similarity is complicated by the ambiguity and variability of linguistic expression. To cope with these challenges, we propose a model for comparing sentences that uses a multiplicity of perspectives. We first model each sentence using a convolutional neural network that extracts features at multiple levels of granularity and uses multiple types of pooling. We then compare our sentence representations at several granularities using multiple similarity metrics. We apply our model to three tasks, including the Microsoft Research paraphrase identification task and two SemEval semantic textual similarity tasks. We obtain strong performance on all tasks, rivaling or exceeding the state of the art without using external resources such as WordNet or parsers.", "" ] }
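Re-ranking answers output by a retrieval engine, as in the neural re-rankers cited in the related work above (@cite_2 @cite_17), reduces in its simplest form to scoring each candidate against the question and sorting. The bag-of-words cosine below is a deliberately simple stand-in for those learned similarity models; the example tokens and candidates are illustrative.

```python
import math
from collections import Counter

def cosine(tokens_a, tokens_b):
    """Cosine similarity between two bag-of-words token lists."""
    ca, cb = Counter(tokens_a), Counter(tokens_b)
    dot = sum(ca[t] * cb[t] for t in ca)
    norm_a = math.sqrt(sum(v * v for v in ca.values()))
    norm_b = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def rerank(question_tokens, candidates):
    """Sort retrieved candidate answers by similarity to the question."""
    return sorted(candidates,
                  key=lambda cand: cosine(question_tokens, cand),
                  reverse=True)

question = ["who", "wrote", "hamlet"]
candidates = [["the", "sky", "is", "blue"],
              ["shakespeare", "wrote", "hamlet"]]
print(rerank(question, candidates)[0])  # ['shakespeare', 'wrote', 'hamlet']
```

A neural re-ranker replaces `cosine` with a learned scoring function but keeps exactly this retrieve-then-sort structure.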
1606.05179
2952614399
Campaigners, advertisers and activists are increasingly turning to social recommendation mechanisms, provided by social media, for promoting their products, services, brands and even ideas. However, many times, such social network based campaigns perform poorly in practice because the intensity of the recommendations drastically reduces beyond a few hops from the source. A natural strategy for maintaining the intensity is to provide incentives. In this paper, we address the problem of minimizing the cost incurred by the campaigner for incentivizing a fraction of individuals in the social network, while ensuring that the campaign message reaches a given expected fraction of individuals. We also address the dual problem of maximizing the campaign penetration for a resource constrained campaigner. To help us understand and solve the above mentioned problems, we use percolation theory to formally state them as optimization problems. These problems are not amenable to traditional approaches because of a fixed point equation that needs to be solved numerically. However, we use results from reliability theory to establish some key properties of the fixed point, which in turn enables us to solve these problems using algorithms that are linearithmic in maximum node degree. Furthermore, we evaluate the efficacy of the analytical solution by performing simulations on real world networks.
The problem of computing the optimal referral payment mechanisms that maximize profit was studied in @cite_20 , by modelling the referral process as a network game. The authors in @cite_20 conclude that a combination of linear payment mechanism (linear in the number of referrals) and threshold payment mechanism (payment only when number of referral exceeds a threshold) approximates the optimal pricing scheme. In this paper, we focus on the set of nodes to be incentivized while assuming that a pricing scheme, which can be computed based on the results in @cite_20 , is provided by the campaigner. Similar problems involving the computation of referral rewards in real time, for maximizing the campaign spread, were studied using the theory of optimal control in @cite_13 @cite_21 @cite_14 @cite_16 @cite_24 .
{ "cite_N": [ "@cite_14", "@cite_21", "@cite_24", "@cite_16", "@cite_13", "@cite_20" ], "mid": [ "2081222729", "2296066287", "2088674014", "2094391258", "1979059935", "" ], "abstract": [ "Information spreading in a population can be modeled as an epidemic. Campaigners (e.g., election campaign managers, companies marketing products or movies) are interested in spreading a message by a given deadline, using limited resources. In this paper, we formulate the above situation as an optimal control problem and the solution (using Pontryagin's Maximum Principle) prescribes an optimal resource allocation over the time of the campaign. We consider two different scenarios-in the first, the campaigner can adjust a direct control (over time) which allows her to recruit individuals from the population (at some cost) to act as spreaders for the Susceptible-Infected-Susceptible (SIS) epidemic model. In the second case, we allow the campaigner to adjust the effective spreading rate by incentivizing the infected in the Susceptible-Infected-Recovered (SIR) model, in addition to the direct recruitment. We consider time varying information spreading rate in our formulation to model the changing interest level of individuals in the campaign, as the deadline is reached. In both the cases, we show the existence of a solution and its uniqueness for sufficiently small campaign deadlines. For the fixed spreading rate, we show the effectiveness of the optimal control strategy against the constant control strategy, a heuristic control strategy and no control. We show the sensitivity of the optimal control to the spreading rate profile when it is time varying.", "We consider the problem of devising incentive strategies for viral marketing of a product. 
In particular, we assume that the seller can influence penetration of the product by offering two incentive programs: a) direct incentives to potential buyers (influence) and b) referral rewards for customers who influence potential buyers to make the purchase (exploit connections). The problem is to determine the optimal timing of these programs over a finite time horizon. In contrast to the algorithmic perspective popular in the literature, we take a mean-field approach and formulate the problem as a continuous-time deterministic optimal control problem. We show that the optimal strategy for the seller has a simple structure and can take both forms, namely, influence-and-exploit and exploit-and-influence. We also show that in some cases it may be optimal for the seller to deploy incentive programs mostly for low degree nodes. We support our theoretical results through numerical studies and provide practical insights by analyzing various scenarios.", "We study the optimal control problem of maximizing the spread of an information epidemic on a social network. Information propagation is modeled as a susceptible-infected (SI) process, and the campaign budget is fixed. Direct recruitment and word-of-mouth incentives are the two strategies to accelerate information spreading (controls). We allow for multiple controls depending on the degree of the nodes/individuals. The solution optimally allocates the scarce resource over the campaign duration and the degree class groups. We study the impact of the degree distribution of the network on the controls and present results for Erdős-Rényi and scale-free networks. Results show that more resource is allocated to high-degree nodes in the case of scale-free networks, but medium-degree nodes in the case of Erdős-Rényi networks. We study the effects of various model parameters on the optimal strategy and quantify the improvement offered by the optimal strategy over the static and bang-bang control strategies. 
The effect of the time-varying spreading rate on the controls is explored as the interest level of the population in the subject of the campaign may change over time. We show the existence of a solution to the formulated optimal control problem, which has nonlinear isoperimetric constraints, using novel techniques that is general and can be used in other similar optimal control problems. This work may be of interest to political, social awareness, or crowdfunding campaigners and product marketing managers, and with some modifications may be used for mitigating biological epidemics.", "We model the spread of information in a homogeneously mixed population using the Maki Thompson rumor model. We formulate an optimal control problem, from the perspective of single campaigner, to maximize the spread of information when the campaign budget is fixed. Control signals, such as advertising in the mass media, attempt to convert ignorants and stiflers into spreaders. We show the existence of a solution to the optimal control problem when the campaigning incurs non-linear costs under the isoperimetric budget constraint. The solution employs Pontryagin's Minimum Principle and a modified version of forward backward sweep technique for numerical computation to accommodate the isoperimetric budget constraint. The techniques developed in this paper are general and can be applied to similar optimal control problems in other areas. We have allowed the spreading rate of the information epidemic to vary over the campaign duration to model practical situations when the interest level of the population in the subject of the campaign changes with time. The shape of the optimal control signal is studied for different model parameters and spreading rate profiles. We have also studied the variation of the optimal campaigning costs with respect to various model parameters. 
Results indicate that, for some model parameters, significant improvements can be achieved by the optimal strategy compared to the static control strategy. The static strategy respects the same budget constraint as the optimal strategy and has a constant value throughout the campaign horizon. This work finds application in election and social awareness campaigns, product advertising, movie promotion and crowdfunding campaigns.", "In this work we address two problems concerning information propagation in a population: a) how to maximize the spread of a given message in the population within the stipulated time and b) how to create a given level of buzz - measured by the fraction of the population engaged in conversation on a topic of interest - at a specified time horizon. Arising in the context of two rather disparate networks - social and wireless vehicular networks, their importance in campaigning on social networks and security in vehicular networks can only be understated. Taking a mean-field route, we pose the two problems as continuous-time deterministic optimal control problems. We characterize optimal controls, present some numerical results and provide practical insights. Interestingly, the problems we address are the antithesis of those in disease epidemics, which need to be contained rather than actively spread.", "" ] }
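The fixed-point equation central to the percolation formulation above can be solved by plain iteration. As an illustrative special case (not the paper's general setting), consider bond percolation on an Erdős-Rényi network with mean degree `lam` and per-edge transmission probability `t`: the probability `u` that a random edge fails to lead into the reached component satisfies u = exp(lam*t*(u - 1)), and the expected reached fraction is S = 1 - u.

```python
import math

def expected_reach(lam, t, tol=1e-12, max_iter=10_000):
    """Expected fraction of an Erdos-Renyi network (mean degree lam)
    reached by a campaign with per-edge transmission probability t.

    Solves the fixed point u = exp(lam * t * (u - 1)) by iteration;
    the reached fraction is S = 1 - u.
    """
    u = 0.0  # starting below the fixed point converges monotonically
    for _ in range(max_iter):
        u_next = math.exp(lam * t * (u - 1.0))
        if abs(u_next - u) < tol:
            break
        u = u_next
    return 1.0 - u

print(round(expected_reach(4.0, 0.5), 3))   # approx. 0.797 (lam*t = 2)
print(expected_reach(4.0, 0.1) < 1e-6)      # True: subcritical, lam*t < 1
```

The same iteration works for other degree distributions by replacing the exponential with the corresponding generating function, which is why the paper's linearithmic algorithms only need fast evaluation of this fixed point.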
1606.04935
2429071212
In this paper, we first propose a redundant radix based number (RBN) representation for encoding the data to be transmitted in a wireless network. This RBN encoding uses three possible values - 0, 1 and @math , for each digit to be transmitted. We then propose to use silent periods (zero energy transmission) for transmitting the 0's in the RBN encoded data thus obtained. This is in contrast to most conventional communication strategies that utilize energy based transmission (EbT) schemes, where energy expenditure occurs for transmitting both 0 and 1 bit values. The binary to RBN conversion algorithm presented here offers a significant reduction in the number of non-zero bits in the resulting RBN encoded data. As a result, it provides a highly energy-efficient technique for data transmission with silent periods for transmitting 0's. We simulated our proposed technique with ideal radio device characteristics and also with parameters of various commercially available radio devices. Experimental results on various benchmark suites show that with ideal as well as some commercial device characteristics, our proposed transmission scheme requires 69% less energy on average, compared to the energy based transmission schemes. This makes it very attractive for application scenarios where the devices are highly energy constrained. Finally, based on this transmission strategy, we have designed a MAC protocol that would support the communication of such RBN encoded data frames.
Recent research efforts on reducing energy consumption have mainly focused on MAC layer design @cite_3 , on optimizing data transmissions by reducing collisions and retransmissions @cite_1 , and on intelligent selection of paths or special architectures for sending data @cite_4 . A survey of MAC protocols in wireless sensor networks can be found in @cite_3 . In all such schemes, the underlying communication strategy of sending a string of binary bits is energy based transmission (EbT) @cite_9 @cite_6 , which implies that communicating any information between two nodes involves the expenditure of energy for the transmission of data bits. In @cite_9 , a new communication strategy called Communication through Silence (CtS) has been proposed that uses silent periods as opposed to energy based transmissions. CtS, however, suffers from the disadvantage of being exponential in time. An alternative strategy, called Variable-Base Tacit Communication (VarBaTaC), has been proposed in @cite_6 that uses variable radix-based information coding coupled with CtS for communication.
{ "cite_N": [ "@cite_4", "@cite_9", "@cite_1", "@cite_6", "@cite_3" ], "mid": [ "1971750044", "1997414300", "2168503112", "2100308512", "2107710215" ], "abstract": [ "A wireless sensor network consists of many energy-autonomous microsensors distributed throughout an area of interest. Each node monitors its local environment, locally processing and storing the collected data so that other nodes can use it. To optimize power consumption, the Swiss Center for Electronics and Microtechnology has developed WiseNET, an ultralow-power platform for the implementation of wireless sensor networks that achieves low-power operation through a careful codesign approach. The WiseNET platform uses a codesign approach that combines a dedicated duty-cycled radio with WiseMAC, a low-power media access control protocol, and a complex system-on-chip sensor node to exploit the intimate relationship between MAC-layer performance and radio transceiver parameters. The WiseNET solution consumes about 100 times less power than comparable solutions.", "Wireless sensor networks (WSNs) are typically characterized by a limited energy supply at sensor nodes. Hence, energy efficiency is an important issue in the system design and operation of WSNs. In this paper, we introduce a novel communication paradigm that enables energy-efficient information delivery in wireless sensor networks. Compared with traditional communication strategies, the proposed scheme explores a new dimension - time, to deliver information efficiently. We refer to the strategy as Communication through Silence (CtS). We identify a key drawback of CtS - energy - throughput trade-off, and explore optimization mechanisms that can alleviate the trade-off. We then present several challenges that need to be overcome, primarily at the medium access control layer of the network protocol stack, in order to realize CtS effectively.", "We present a deterministic broadcast algorithm for the class of mobile ad hoc networks where the mobile nodes possess collision detection capabilities. The algorithm is based on a breadth-first traversal of the network, allows multiple transmissions in the same time slot and completes broadcast in O(n log n) time in the worst case. It is mobility resilient even for networks where topology changes are very frequent. The idea of this broadcast algorithm is then extended to develop a gossiping algorithm having O((n+D)Dlog n) worst case run time, where D is the diameter of the network graph, which is an improvement over the existing algorithms.", "Energy conservation is a major concern in wireless sensor networking. Conventionally in wireless communications, each bit transmitted by a node consumes one unit of energy. Some recent advances, however, explore silent time intervals between signal transmissions to convey information (Zhu and Sivakumar [1]). Such a scheme of Communication through Silence (CtS), while reducing energy consumption for sensor nodes, introduces long delay. In this paper, we propose Variable-Base Tacit Communication (VarBaTaC) to mitigate the delay introduced by CtS. We also develop three MAC protocols based on VarBaTaC for different environments. We then outline experiment designs for further investigations and point out some interesting future research directions.", "Wireless sensor networks are appealing to researchers due to their wide range of application potential in areas such as target detection and tracking, environmental monitoring, industrial process monitoring, and tactical systems. However, low sensing ranges result in dense networks and thus it becomes necessary to achieve an efficient medium-access protocol subject to power constraints. Various medium-access control (MAC) protocols with different objectives have been proposed for wireless sensor networks. In this article, we first outline the sensor network properties that are crucial for the design of MAC layer protocols. Then, we describe several MAC protocols proposed for sensor networks, emphasizing their strengths and weaknesses. Finally, we point out open research issues with regard to MAC layer design." ] }
1606.05134
2440677991
We describe an approach that uses combinatorial optimization and machine learning to share the work between the host and device of heterogeneous computing systems such that the overall application execution time is minimized. We propose to use combinatorial optimization to search for the optimal system configuration in the given parameter space (such as, the number of threads, thread affinity, work distribution for the host and device). For each system configuration that is suggested by combinatorial optimization, we use machine learning for evaluation of the system performance. We evaluate our approach experimentally using a heterogeneous platform that comprises two 12-core Intel Xeon E5 CPUs and an Intel Xeon Phi 7120P co-processor with 61 cores. Using our approach we are able to find a near-optimal system configuration by performing only about 5 of all possible experiments.
A dynamic scheduling framework that divides tasks into smaller ones is proposed by Ravi and Agrawal @cite_19 . These tasks are distributed across different processing elements in a task-farm fashion. While making scheduling decisions, architectural trade-offs and the computation and communication patterns of the application are considered. Our approach considers only the system runtime configuration and the input size, which makes it more general and usable with different applications and architectures.
{ "cite_N": [ "@cite_19" ], "mid": [ "2031553682" ], "abstract": [ "A trend that has materialized, and has given rise to much attention, is of the increasingly heterogeneous computing platforms. Recently, it has become very common for a desktop or a notebook computer to be equipped with both a multi-core CPU and a GPU. Application development for exploiting the aggregate computing power of such an environment is a major challenge today. Particularly, we need dynamic work distribution schemes that are adaptable to different computation and communication patterns in applications, and to various heterogeneous configurations. This paper describes a general dynamic scheduling framework for mapping applications with different communication patterns to heterogeneous architectures. We first make key observations about the architectural tradeoffs among heterogeneous resources and the communication pattern of an application, and then infer constraints for the dynamic scheduler. We then present a novel cost model for choosing the optimal chunk size in a heterogeneous configuration. Finally, based on general framework and cost model we provide optimized work distribution schemes to further improve the performance." ] }
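The task-farm distribution mentioned above can be sketched generically: work is split into chunks and each worker (standing in for a heterogeneous processing element) pulls the next chunk when it becomes free, so faster devices naturally receive more work. This is a minimal illustration, not the cited framework's scheduler:

```python
import queue
import threading

def task_farm(items, chunk, workers):
    """Task-farm dynamic scheduling sketch: split work into chunks and
    let each worker function pull the next chunk when idle."""
    q = queue.Queue()
    for i in range(0, len(items), chunk):
        q.put(items[i:i + chunk])
    results, lock = [], threading.Lock()

    def run(fn):
        while True:
            try:
                c = q.get_nowait()   # grab the next available chunk
            except queue.Empty:
                return
            out = [fn(x) for x in c]
            with lock:
                results.extend(out)

    threads = [threading.Thread(target=run, args=(fn,)) for fn in workers]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# Three identical workers squaring ten inputs, two items per chunk.
res = task_farm(list(range(10)), 2, [lambda x: x * x] * 3)
```

The chunk size plays the role of the cost-model-chosen chunk size in the cited work: too small and scheduling overhead dominates, too large and load imbalance returns.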
1606.05134
2440677991
We describe an approach that uses combinatorial optimization and machine learning to share the work between the host and device of heterogeneous computing systems such that the overall application execution time is minimized. We propose to use combinatorial optimization to search for the optimal system configuration in the given parameter space (such as, the number of threads, thread affinity, work distribution for the host and device). For each system configuration that is suggested by combinatorial optimization, we use machine learning for evaluation of the system performance. We evaluate our approach experimentally using a heterogeneous platform that comprises two 12-core Intel Xeon E5 CPUs and an Intel Xeon Phi 7120P co-processor with 61 cores. Using our approach we are able to find a near-optimal system configuration by performing only about 5 of all possible experiments.
@cite_13 combines the pragma-based XcalableMP (XMP) @cite_22 programming language with the StarPU runtime system to utilize the resources of each heterogeneous node for distributing loop executions. XMP is used for work distribution and synchronization, whereas StarPU is used for task scheduling.
{ "cite_N": [ "@cite_13", "@cite_22" ], "mid": [ "1991315896", "2046062581" ], "abstract": [ "In this paper, we propose a solution framework to enable the work sharing of parallel processing by the coordination of CPUs and GPUs on hybrid PC clusters based on the high-level parallel language XcalableMPdev. Basic XcalableMP enables high-level parallel programming using sequential code directives that support data distribution and loop task distribution among multiple nodes on a PC cluster. XcalableMP-dev is an extension of XcalableMP for a hybrid PC cluster, where each node is equipped with accelerated computing devices such as GPUs, many-core environments, etc. Our new framework proposed here, named XcalableMP-dev Star PU, enables the distribution of data and loop execution among multiple GPUs and multiple CPU cores on each node. We employ a Star PU run-time system for task management with dynamic load balancing. Because of the large performance gap between CPUs and GPUs, the key issue for work sharing among CPU and GPU resources is the task size control assigned to different devices. Since the compiler of the new system is still under construction, we evaluated the performance of hybrid work sharing among four nodes of a GPU cluster and confirmed that the performance gain by the traditional XcalableMP-dev system on NVIDIA CUDA is up to 1.4 times faster than GPU-only execution.", "XcalableMP is a parallel extension of existing languages, such as C and Fortran, that was proposed as a new programming model to facilitate program parallel applications for distributed memory systems. In order to investigate the performance of parallel programs written in XcalableMP, we have implemented NAS Parallel Benchmarks, specifically, the Embarrassingly Parallel (EP) benchmark, the Integer Sort (IS) benchmark, and the Conjugate Gradient (CG) benchmark, using XcalableMP. The results show that the performance of XcalableMP is comparable to that of MPI. In particular, the performances of IS with a histogram and CG with two-dimensional parallelization achieve almost the same performance. The results also demonstrate that XcalableMP allows a programmer to write efficient parallel applications at a lower programming cost." ] }
1606.05134
2440677991
We describe an approach that uses combinatorial optimization and machine learning to share the work between the host and device of heterogeneous computing systems such that the overall application execution time is minimized. We propose to use combinatorial optimization to search for the optimal system configuration in the given parameter space (such as, the number of threads, thread affinity, work distribution for the host and device). For each system configuration that is suggested by combinatorial optimization, we use machine learning for evaluation of the system performance. We evaluate our approach experimentally using a heterogeneous platform that comprises two 12-core Intel Xeon E5 CPUs and an Intel Xeon Phi 7120P co-processor with 61 cores. Using our approach we are able to find a near-optimal system configuration by performing only about 5 of all possible experiments.
Qilin @cite_25 is a programming system that uses a regression model to predict the execution time of kernels. Similarly to our approach, it relies on off-line learning whose results are used at compile time to predict the execution time for different input sizes and system configurations.
{ "cite_N": [ "@cite_25" ], "mid": [ "2150476673" ], "abstract": [ "Heterogeneous multiprocessors are increasingly important in the multi-core era due to their potential for high performance and energy efficiency. In order for software to fully realize this potential, the step that maps computations to processing elements must be as automated as possible. However, the state-of-the-art approach is to rely on the programmer to specify this mapping manually and statically. This approach is not only labor intensive but also not adaptable to changes in runtime environments like problem sizes and hardware software configurations. In this study, we propose adaptive mapping, a fully automatic technique to map computations to processing elements on a CPU+GPU machine. We have implemented it in our experimental heterogeneous programming system called Qilin. Our results show that, by judiciously distributing works over the CPU and GPU, automatic adaptive mapping achieves a 25 reduction in execution time and a 20 reduction in energy consumption than static mappings on average for a set of important computation benchmarks. We also demonstrate that our technique is able to adapt to changes in the input problem size and system configuration." ] }
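Qilin's exact model is not given in the record above; a minimal sketch of the general idea, assuming simple linear time-versus-size models fitted from hypothetical offline profiling runs, would predict each device's execution time and pick the host/device split that equalizes the predicted finish times:

```python
def fit_linear(samples):
    """Least-squares fit of t = a*n + b from (input size, time) pairs."""
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def best_split(n_total, cpu_model, gpu_model, steps=100):
    """Return the CPU work fraction minimizing the predicted makespan
    max(T_cpu, T_gpu), searched over a grid of candidate splits."""
    a_c, b_c = cpu_model
    a_g, b_g = gpu_model
    best = min(
        (max(a_c * (f / steps) * n_total + b_c,
             a_g * (1 - f / steps) * n_total + b_g), f / steps)
        for f in range(steps + 1))
    return best[1]

# Hypothetical profiling data: the GPU is 4x faster per element,
# so it should receive about 80% of the work.
cpu = fit_linear([(1e6, 2.0), (2e6, 4.0), (4e6, 8.0)])
gpu = fit_linear([(1e6, 0.5), (2e6, 1.0), (4e6, 2.0)])
frac_cpu = best_split(8e6, cpu, gpu)
```

The grid search stands in for Qilin's analytical solution; with linear models the optimal split can also be solved in closed form by equating the two predicted times.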
1606.04669
2436514525
Given an array @math of @math elements and a value @math , a frequent item or @math -majority element is an element occurring in @math more than @math times. The @math -majority problem requires finding all of the @math -majority elements. In this paper we deal with parallel shared-memory algorithms for frequent items; we present a shared-memory version of the Space Saving algorithm and we study its behavior with regard to accuracy and performance on many and multi-core processors, including the Intel Phi accelerator. We also investigate a hybrid MPI OpenMP version against a pure MPI based version. Through extensive experimental results we prove that the MPI OpenMP parallel version of the algorithm significantly enhances the performance of the earlier pure MPI version of the same algorithm. Results also prove that for this algorithm the Intel Phi accelerator does not introduce any improvement with respect to the Xeon octa-core processor.
The @math -majority problem was first solved sequentially by Misra and Gries @cite_23 . @cite_18 and @cite_11 independently proposed optimal algorithms which are, however, functionally identical to the Misra and Gries algorithm. In particular, the algorithm of @cite_18 exploits better data structures (a doubly linked list of groups, supporting decrementing a set of counters at once in @math time) and achieves a worst-case complexity of @math . The algorithm of @cite_11 is based on hashing and therefore achieves the @math bound on average.
{ "cite_N": [ "@cite_18", "@cite_23", "@cite_11" ], "mid": [ "2597765082", "", "2113139394" ], "abstract": [ "We consider a router on the Internet analyzing the statistical properties of a TCP IP packet stream. A fundamental difficulty with measuring traffic behavior on the Internet is that there is simply too much data to be recorded for later analysis, on the order of gigabytes a second. As a result, network routers can collect only relatively few statistics about the data. The central problem addressed here is to use the limited memory of routers to determine essential features of the network traffic stream. A particularly difficult and representative subproblem is to determine the top k categories to which the most packets belong, for a desired value of k and for a given notion of categorization such as the destination IP address. We present an algorithm that deterministically finds (in particular) all categories having a frequency above 1 (m+1) using m counters, which we prove is best possible in the worst case. We also present a sampling-based algorithm for the case that packet categories follow an arbitrary distribution, but their order over time is permuted uniformly at random. Under this model, our algorithm identifies flows above a frequency threshold of roughly 1 √nm with high probability, where m is the number of counters and n is the number of packets observed. This guarantee is not far off from the ideal of identifying all flows (probability 1 n), and we prove that it is best possible up to a logarithmic factor. We show that the algorithm ranks the identified flows according to frequency within any desired constant factor of accuracy.", "", "We present a simple, exact algorithm for identifying in a multiset the items with frequency more than a threshold θ. The algorithm requires two passes, linear time, and space 1 θ. The first pass is an on-line algorithm, generalizing a well-known algorithm for finding a majority element, for identifying a set of at most 1 θ items that includes, possibly among others, all items with frequency greater than θ." ] }
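The Misra and Gries scheme referenced above admits a compact rendering; this is a generic textbook sketch (a plain dictionary rather than the doubly linked group structure, so the batch decrement costs O(k) here instead of amortized O(1)):

```python
def misra_gries(stream, k):
    """One-pass Misra-Gries sketch with at most k-1 counters: every
    element occurring more than len(stream)/k times is guaranteed to
    survive among the returned candidates."""
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            # Decrement every counter at once, dropping those that hit
            # zero; this is the step the doubly linked list of groups
            # supports in O(1) amortized time.
            counters = {y: c - 1 for y, c in counters.items() if c > 1}
    return counters

stream = [1, 1, 1, 2, 2, 3, 1, 2, 4, 1]
candidates = misra_gries(stream, 3)   # items above 10/3 occurrences
# Element 1 occurs 5 > 10/3 times, so it must appear as a candidate.
```

The counters underestimate true frequencies by at most len(stream)/k, so a second verification pass turns the candidates into the exact k-majority elements.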
1606.04669
2436514525
Given an array @math of @math elements and a value @math , a frequent item or @math -majority element is an element occurring in @math more than @math times. The @math -majority problem requires finding all of the @math -majority elements. In this paper we deal with parallel shared-memory algorithms for frequent items; we present a shared-memory version of the Space Saving algorithm and we study its behavior with regard to accuracy and performance on many and multi-core processors, including the Intel Phi accelerator. We also investigate a hybrid MPI OpenMP version against a pure MPI based version. Through extensive experimental results we prove that the MPI OpenMP parallel version of the algorithm significantly enhances the performance of the earlier pure MPI version of the same algorithm. Results also prove that for this algorithm the Intel Phi accelerator does not introduce any improvement with respect to the Xeon octa-core processor.
The problem of merging two data summaries naturally arises in a distributed or parallel setting, in which a data set is partitioned between two or among several processing nodes. The goal in this context is to merge two data summaries into a single summary which provides candidate frequent items for the union of the input data sets. In particular, in order for the merged summary to be useful, it is required that its size and error bounds match those of the input data summaries. A few years ago we designed an algorithm @cite_10 @cite_7 for merging in parallel counter-based data summaries produced by the @cite_18 algorithm.
{ "cite_N": [ "@cite_18", "@cite_10", "@cite_7" ], "mid": [ "2597765082", "2002479415", "2573661442" ], "abstract": [ "We consider a router on the Internet analyzing the statistical properties of a TCP IP packet stream. A fundamental difficulty with measuring traffic behavior on the Internet is that there is simply too much data to be recorded for later analysis, on the order of gigabytes a second. As a result, network routers can collect only relatively few statistics about the data. The central problem addressed here is to use the limited memory of routers to determine essential features of the network traffic stream. A particularly difficult and representative subproblem is to determine the top k categories to which the most packets belong, for a desired value of k and for a given notion of categorization such as the destination IP address. We present an algorithm that deterministically finds (in particular) all categories having a frequency above 1 (m+1) using m counters, which we prove is best possible in the worst case. We also present a sampling-based algorithm for the case that packet categories follow an arbitrary distribution, but their order over time is permuted uniformly at random. Under this model, our algorithm identifies flows above a frequency threshold of roughly 1 √nm with high probability, where m is the number of counters and n is the number of packets observed. This guarantee is not far off from the ideal of identifying all flows (probability 1 n), and we prove that it is best possible up to a logarithmic factor. We show that the algorithm ranks the identified flows according to frequency within any desired constant factor of accuracy.", "We present a deterministic parallel algorithm for the k-majority problem, that can be used to find in parallel frequent items, i.e. those whose multiplicity is greater than a given threshold, and is therefore useful to process iceberg queries and in many other different contexts of applied mathematics and information theory. The algorithm can be used both in the online (stream) context and in the offline setting, the difference being that in the former case we are restricted to a single scan of the input elements, so that verifying the frequent items that have been determined is not allowed (e.g. network traffic streams passing through internet routers), while in the latter a parallel scan of the input can be used to determine the actual k-majority elements. To the best of our knowledge, this is the first parallel algorithm solving the proposed problem. Copyright © 2011 John Wiley & Sons, Ltd.", "" ] }
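The merge step described above can be sketched with the standard mergeable-summaries idea for counter-based sketches: add matching counters, then subtract the (k+1)-st largest count so at most k counters survive while the additive error bound is preserved. This is a generic illustration, not necessarily the exact parallel algorithm of the cited works:

```python
def merge_summaries(s1, s2, k):
    """Merge two Misra-Gries-style summaries (dicts of at most k
    counters each) into one summary of at most k counters whose error
    bound is that of the inputs over the union of the data sets."""
    merged = dict(s1)
    for x, c in s2.items():
        merged[x] = merged.get(x, 0) + c
    if len(merged) > k:
        # Subtract the (k+1)-st largest count from every counter and
        # keep only the strictly positive remainders.
        cut = sorted(merged.values(), reverse=True)[k]
        merged = {x: c - cut for x, c in merged.items() if c > cut}
    return merged

# Summaries from two partitions of a data set, merged with k = 2.
a = {"x": 5, "y": 3}
b = {"x": 2, "z": 4}
m = merge_summaries(a, b, 2)
```

Because the subtraction never increases a counter's underestimate by more than the merged stream length divided by k+1, any item frequent in the union still survives as a candidate.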
1606.04669
2436514525
Given an array @math of @math elements and a value @math , a frequent item or @math -majority element is an element occurring in @math more than @math times. The @math -majority problem requires finding all of the @math -majority elements. In this paper we deal with parallel shared-memory algorithms for frequent items; we present a shared-memory version of the Space Saving algorithm and we study its behavior with regard to accuracy and performance on many and multi-core processors, including the Intel Phi accelerator. We also investigate a hybrid MPI OpenMP version against a pure MPI based version. Through extensive experimental results we prove that the MPI OpenMP parallel version of the algorithm significantly enhances the performance of the earlier pure MPI version of the same algorithm. Results also prove that for this algorithm the Intel Phi accelerator does not introduce any improvement with respect to the Xeon octa-core processor.
Recently, we designed and implemented in MPI a Parallel Space Saving algorithm @cite_16 for message-passing architectures. The availability of the latest Intel compilers (2017 release, v17), supporting the OpenMP v4.x specification, led us to implement a corresponding shared-memory version based on OpenMP v4. In particular, we exploit the new user-defined reduction feature introduced by the OpenMP v4.x specification, which allows for a faster and easier port of our previous message-passing algorithm to shared-memory architectures.
{ "cite_N": [ "@cite_16" ], "mid": [ "1752655734" ], "abstract": [ "We design a parallel algorithm for frequent items based on the sequential Space Saving algorithm.We show that the algorithm is cost-optimal for k = O ( 1 ) and therefore extremely fast and useful in a wide range of applications.We experimentally validate the algorithm on synthetic data distributed using a Hurwitz zeta function and a Zipf one, and also on real datasets, confirming the theoretical cost-optimality and that the error committed is very low and close to zero.We compare the performances and the error committed by our algorithm against a parallel algorithm recently proposed by Agarwal et?al. for merging datasets derived by the Space Saving or Frequent algorithms. We present a message-passing based parallel version of the Space Saving algorithm designed to solve the k-majority problem. The algorithm determines in parallel frequent items, i.e., those whose frequency is greater than a given threshold, and is therefore useful for iceberg queries and many other different contexts. We apply our algorithm to the detection of frequent items in both real and synthetic datasets whose probability distribution functions are a Hurwitz and a Zipf distribution respectively. Also, we compare its parallel performances and accuracy against a parallel algorithm recently proposed for merging summaries derived by the Space Saving or Frequent algorithms." ] }
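The shared-memory version discussed above builds on the sequential Space Saving update, which each thread applies to its share of the stream before the per-thread summaries are combined by the user-defined reduction. The core update, sketched here in Python as a generic illustration, reads:

```python
def space_saving(stream, k):
    """Space Saving with k counters: when a new item arrives and all
    counters are taken, evict the item with the minimum counter and let
    the newcomer inherit that count plus one, so every item occurring
    more than n/k times is guaranteed to be monitored."""
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k:
            counters[x] = 1
        else:
            victim = min(counters, key=counters.get)
            counters[x] = counters.pop(victim) + 1
    return counters

# With k = 2 counters, the genuinely frequent item 1 stays monitored.
heavy = space_saving([1, 1, 2, 3, 1, 2], 2)
```

Unlike Misra-Gries, Space Saving counters overestimate true frequencies, by at most the evicted minimum, which is what makes per-thread summaries mergeable in the reduction step.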
1606.04586
2431080869
Effective convolutional neural networks are trained on large sets of labeled data. However, creating large labeled datasets is a very costly and time-consuming task. Semi-supervised learning uses unlabeled data to train a model with higher accuracy when there is a limited set of labeled data available. In this paper, we consider the problem of semi-supervised learning with convolutional neural networks. Techniques such as randomized data augmentation, dropout and random max-pooling provide better generalization and stability for classifiers that are trained using gradient descent. Multiple passes of an individual sample through the network might lead to different predictions due to the non-deterministic behavior of these techniques. We propose an unsupervised loss function that takes advantage of the stochastic nature of these methods and minimizes the difference between the predictions of multiple passes of a training sample through the network. We evaluate the proposed method on several benchmark datasets.
Another example of semi-supervised learning with ConvNets is region embedding @cite_19 , which is used for text categorization. The work in @cite_38 is also a deep semi-supervised learning method, based on embedding techniques. Unlabeled video frames have also been used to train ConvNets @cite_17 @cite_20 , with the target of the ConvNet calculated from the correlations between video frames. Another notable example is semi-supervised learning with ladder networks @cite_6 , in which the sum of supervised and unsupervised loss functions is minimized by backpropagation. In this method, a feedforward model is regarded as an encoder, and the network consists of a noisy encoder path and a clean one. A decoder attached to each layer of the noisy path reconstructs a clean activation of that layer. The unsupervised loss function is the difference between the output of each layer in the clean path and its corresponding reconstruction from the noisy path.
{ "cite_N": [ "@cite_38", "@cite_6", "@cite_19", "@cite_20", "@cite_17" ], "mid": [ "2407712691", "830076066", "1810499140", "", "219040644" ], "abstract": [ "We show how nonlinear embedding algorithms popular for use with \"shallow\" semi-supervised learning techniques such as kernel methods can be easily applied to deep multi-layer architectures, either as a regularizer at the output layer, or on each layer of the architecture. This trick provides a simple alternative to existing approaches to deep learning whilst yielding competitive error rates compared to those methods, and existing shallow semi-supervised techniques.", "We combine supervised learning with unsupervised learning in deep neural networks. The proposed model is trained to simultaneously minimize the sum of supervised and unsupervised cost functions by backpropagation, avoiding the need for layer-wise pre-training. Our work builds on top of the Ladder network proposed by Valpola [1] which we extend by combining the model with supervision. We show that the resulting model reaches state-of-the-art performance in semi-supervised MNIST and CIFAR-10 classification in addition to permutation-invariant MNIST classification with all labels.", "This paper presents a new semi-supervised framework with convolutional neural networks (CNNs) for text categorization. Unlike the previous approaches that rely on word embeddings, our method learns embeddings of small text regions from unlabeled data for integration into a supervised CNN. The proposed scheme for embedding learning is based on the idea of two-view semi-supervised learning, which is intended to be useful for the task of interest even though the training is done on unlabeled data. Our models achieve better results than previous approaches on sentiment classification and topic classification tasks.", "", "Is strong supervision necessary for learning a good visual representation? Do we really need millions of semantically-labeled images to train a Convolutional Neural Network (CNN)? In this paper, we present a simple yet surprisingly powerful approach for unsupervised learning of CNN. Specifically, we use hundreds of thousands of unlabeled videos from the web to learn visual representations. Our key idea is that visual tracking provides the supervision. That is, two patches connected by a track should have similar visual representation in deep feature space since they probably belong to same object or object part. We design a Siamese-triplet network with a ranking loss function to train this CNN representation. Without using a single image from ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train an ensemble of unsupervised networks that achieves 52 mAP (no bounding box regression). This performance comes tantalizingly close to its ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4 . We also show that our unsupervised network can perform competitively in other tasks such as surface-normal estimation." ] }
1606.04586
2431080869
Effective convolutional neural networks are trained on large sets of labeled data. However, creating large labeled datasets is a very costly and time-consuming task. Semi-supervised learning uses unlabeled data to train a model with higher accuracy when there is a limited set of labeled data available. In this paper, we consider the problem of semi-supervised learning with convolutional neural networks. Techniques such as randomized data augmentation, dropout and random max-pooling provide better generalization and stability for classifiers that are trained using gradient descent. Multiple passes of an individual sample through the network might lead to different predictions due to the non-deterministic behavior of these techniques. We propose an unsupervised loss function that takes advantage of the stochastic nature of these methods and minimizes the difference between the predictions of multiple passes of a training sample through the network. We evaluate the proposed method on several benchmark datasets.
Another approach, by @cite_34 , is to take a random unlabeled sample and generate multiple instances by randomly transforming that sample multiple times. The resulting set of images forms a surrogate class; multiple surrogate classes are produced and a ConvNet is trained on them. One disadvantage of this method is that it does not scale well with the number of unlabeled examples, because a separate class is needed for every training sample during unsupervised training. In @cite_37 , the authors propose a mutual-exclusivity loss function that forces the set of predictions for a multiclass dataset to be mutually exclusive. In other words, it forces the classifier's prediction to be close to one for only one class and zero for the others. It is shown that this loss function makes use of unlabeled data and pushes the decision boundary into a less dense area of the decision space.
{ "cite_N": [ "@cite_37", "@cite_34" ], "mid": [ "2963930099", "2148349024" ], "abstract": [ "In this paper we consider the problem of semi-supervised learning with deep Convolutional Neural Networks (ConvNets). Semi-supervised learning is motivated on the observation that unlabeled data is cheap and can be used to improve the accuracy of classifiers. In this paper we propose an unsupervised regularization term that explicitly forces the classifier's prediction for multiple classes to be mutually-exclusive and effectively guides the decision boundary to lie on the low density space between the manifolds corresponding to different classes of data. Our proposed approach is general and can be used with any backpropagation-based learning method. We show through different experiments that our method can improve the object recognition performance of ConvNets using unlabeled data.", "Current methods for training convolutional neural networks depend on large amounts of labeled samples for supervised training. In this paper we present an approach for training a convolutional neural network using only unlabeled data. We train the network to discriminate between a set of surrogate classes. Each surrogate class is formed by applying a variety of transformations to a randomly sampled 'seed' image patch. We find that this simple feature learning algorithm is surprisingly successful when applied to visual object recognition. The feature representation learned by our algorithm achieves classification results matching or outperforming the current state-of-the-art for unsupervised learning on several popular datasets (STL-10, CIFAR-10, Caltech-101)." ] }
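One common formulation of the mutual-exclusivity regularizer sketched above rewards prediction vectors in which exactly one class probability is near one and the rest near zero; the cited paper may differ in details, so this is an illustrative sketch:

```python
import numpy as np

def mutual_exclusivity_loss(p):
    """Mutual-exclusivity regularizer over a vector of class
    probabilities p: the negated sum over classes of p_i times the
    product of (1 - p_j) for all other classes, minimized by
    one-hot-like predictions."""
    loss = 0.0
    for i in range(len(p)):
        others = np.prod([1 - p[j] for j in range(len(p)) if j != i])
        loss -= p[i] * others
    return loss

# A confident, nearly one-hot prediction incurs lower loss than a
# uniform one, which is how the term pushes the decision boundary
# away from dense regions of unlabeled data.
sharp = mutual_exclusivity_loss(np.array([0.97, 0.01, 0.02]))
flat = mutual_exclusivity_loss(np.array([1 / 3, 1 / 3, 1 / 3]))
```

Since the term needs no labels, it can be evaluated on unlabeled samples and simply added to the supervised loss during backpropagation.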
1606.04552
2423714735
The monitoring and management of high-volume feature-rich traffic in large networks offers significant challenges in storage, transmission and computational costs. The predominant approach to reducing these costs is based on performing a linear mapping of the data to a low-dimensional subspace such that a certain large percentage of the variance in the data is preserved in the low-dimensional representation. This variance-based subspace approach to dimensionality reduction forces a fixed choice of the number of dimensions, is not responsive to real-time shifts in observed traffic patterns, and is vulnerable to normal traffic spoofing. Based on theoretical insights proved in this paper, we propose a new distance-based approach to dimensionality reduction motivated by the fact that the real-time structural differences between the covariance matrices of the observed and the normal traffic are more relevant to anomaly detection than the structure of the training data alone. Our approach, called the distance-based subspace method, allows a different number of reduced dimensions in different time windows and arrives at only the number of dimensions necessary for effective anomaly detection. We present centralized and distributed versions of our algorithm and, using simulation on real traffic traces, demonstrate the qualitative and quantitative advantages of the distance-based subspace approach.
Developing general anomaly detection tools can be challenging, largely due to the difficulty of extracting anomalous patterns from huge volumes of high-dimensional data contaminated with anomalies. Early anomaly detection methods that used artificial intelligence, machine learning, or state machine modeling are reviewed in @cite_35 . Examples of later work on developing general anomaly detection tools include @cite_27 @cite_14 @cite_4 @cite_32 @cite_22 @cite_18 @cite_25 @cite_13 . Two broad approaches to anomaly detection are related to the work reported in this paper: those that use covariance matrices directly and those that use subspace methods; occasionally the two approaches are used together.
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_14", "@cite_4", "@cite_22", "@cite_32", "@cite_27", "@cite_13", "@cite_25" ], "mid": [ "2131389289", "", "2164210932", "", "2119742497", "2170210941", "", "2152426255", "2132559723" ], "abstract": [ "Network anomaly detection is a vibrant research area. Researchers have approached this problem using various techniques such as artificial intelligence, machine learning, and state machine modeling. In this paper, we first review these anomaly detection methods and then describe in detail a statistical signal processing technique based on abrupt change detection. We show that this signal processing technique is effective at detecting several network anomalies. Case studies from real network data that demonstrate the power of the signal processing approach to network anomaly detection are presented. The application of signal processing techniques to this area is still in its infancy, and we believe that it has great potential to enhance the field, and thereby improve the reliability of IP networks.", "", "The increasing practicality of large-scale flow capture makes it possible to conceive of traffic analysis methods that detect and identify a large and diverse set of anomalies. However the challenge of effectively analyzing this massive data source for anomaly diagnosis is as yet unmet. We argue that the distributions of packet features (IP addresses and ports) observed in flow traces reveals both the presence and the structure of a wide range of anomalies. Using entropy as a summarization tool, we show that the analysis of feature distributions leads to significant advances on two fronts: (1) it enables highly sensitive detection of a wide range of anomalies, augmenting detections by volume-based methods, and (2) it enables automatic classification of anomalies via unsupervised learning. We show that using feature distributions, anomalies naturally fall into distinct and meaningful clusters. These clusters can be used to automatically classify anomalies and to uncover new anomaly types. We validate our claims on data from two backbone networks (Abilene and Geant) and conclude that feature distributions show promise as a key element of a fairly general network anomaly diagnosis framework.", "", "We introduce an Internet traffic anomaly detection mechanism based on large deviations results for empirical measures. Using past traffic traces we characterize network traffic during various time-of-day intervals, assuming that it is anomaly-free. We present two different approaches to characterize traffic: (i) a model-free approach based on the method of types and Sanov's theorem, and (ii) a model-based approach modeling traffic using a Markov modulated process. Using these characterizations as a reference we continuously monitor traffic and employ large deviations and decision theory results to \"compare\" the empirical measure of the monitored traffic with the corresponding reference characterization, thus, identifying traffic anomalies in real-time. Our experimental results show that applying our methodology (even short-lived) anomalies are identified within a small number of observations. Throughout, we compare the two approaches presenting their advantages and disadvantages to identify and classify temporal network anomalies. We also demonstrate how our framework can be used to monitor traffic from multiple network elements in order to identify both spatial and temporal anomalies. We validate our techniques by analyzing real traffic traces with time-stamped anomalies.", "During the last decade, anomaly detection has attracted the attention of many researchers to overcome the weakness of signature-based IDSs in detecting novel attacks. However, having a relatively high false alarm rate, anomaly detection has not been widely used in real networks. In this paper, we have proposed a novel anomaly detection scheme using the correlation information contained in groups of network traffic samples. Our experimental results show promising detection rates while maintaining false positives at very low rates.", "", "The increasing number of network attacks causes growing problems for network operators and users. Thus, detecting anomalous traffic is of primary interest in IP networks management. In this paper we address the problem considering a method based on PCA for detecting network anomalies. In more detail, we present a new technique that extends the state of the art in PCA based anomaly detection. Indeed, by means of the Kullback-Leibler divergence we are able to obtain great improvements with respect to the performance of the \"classical\" approach. Moreover we also introduce a method for identifying the flows responsible for an anomaly detected at the aggregated level. The performance analysis, presented in this paper, demonstrates the effectiveness of the proposed method.", "In this work we present a novel scheme for statistical-based anomaly detection in 3G cellular networks. The traffic data collected by a passive monitoring system are reduced to a set of per-mobile user counters, from which time-series of unidimensional feature distributions are derived. An example of feature is the number of TCP SYN packets seen in uplink for each mobile user in fixed-length time bins. We design a change-detection algorithm to identify deviations in each distribution time-series. Our algorithm is designed specifically to cope with the marked non-stationarities, daily weekly seasonality and long-term trend that characterize the global traffic in a real network. The proposed scheme was applied to the analysis of a large dataset from an operational 3G network. Here we present the algorithm and report on our practical experience with the analysis of real data, highlighting the key lessons learned in the perspective of the possible adoption of our anomaly detection tool on a production basis." ] }
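For intuition, the variance-based subspace approach discussed above can be sketched in a few lines of NumPy. This is a toy illustration (the function name and the 90% variance threshold are our choices, not taken from any of the cited papers): fit a principal subspace to anomaly-free training traffic, then score new samples by their energy in the residual subspace.

```python
import numpy as np

def residual_anomaly_scores(train, test, var_kept=0.9):
    """Variance-based subspace method: keep enough principal components
    of the (assumed anomaly-free) training traffic to explain `var_kept`
    of the variance, then score test samples by the squared norm of
    their projection onto the residual subspace."""
    mean = train.mean(axis=0)
    _, s, vt = np.linalg.svd(train - mean, full_matrices=False)
    explained = s ** 2 / np.sum(s ** 2)
    k = int(np.searchsorted(np.cumsum(explained), var_kept)) + 1
    normal_basis = vt[:k].T                     # principal ("normal") subspace
    centered = test - mean
    residual = centered - centered @ normal_basis @ normal_basis.T
    return np.sum(residual ** 2, axis=1)        # large score => anomalous

# Toy traffic living (noiselessly) along one direction; a point off that
# direction has a much larger residual score.
train = np.outer(np.linspace(-1.0, 1.0, 50), [1.0, 1.0, 0.0])
test_pts = np.array([[0.5, 0.5, 0.0],   # consistent with normal traffic
                     [0.0, 0.0, 1.0]])  # off-subspace
scores = residual_anomaly_scores(train, test_pts)
```

Note the fixed `var_kept` threshold: this is exactly the fixed choice of dimensionality that the distance-based method in the abstract above seeks to avoid.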
1606.04552
2423714735
The monitoring and management of high-volume feature-rich traffic in large networks offers significant challenges in storage, transmission and computational costs. The predominant approach to reducing these costs is based on performing a linear mapping of the data to a low-dimensional subspace such that a certain large percentage of the variance in the data is preserved in the low-dimensional representation. This variance-based subspace approach to dimensionality reduction forces a fixed choice of the number of dimensions, is not responsive to real-time shifts in observed traffic patterns, and is vulnerable to normal traffic spoofing. Based on theoretical insights proved in this paper, we propose a new distance-based approach to dimensionality reduction motivated by the fact that the real-time structural differences between the covariance matrices of the observed and the normal traffic are more relevant to anomaly detection than the structure of the training data alone. Our approach, called the distance-based subspace method, allows a different number of reduced dimensions in different time windows and arrives at only the number of dimensions necessary for effective anomaly detection. We present centralized and distributed versions of our algorithm and, using simulation on real traffic traces, demonstrate the qualitative and quantitative advantages of the distance-based subspace approach.
In other related work, PCA-based methods have been decentralized for a variety of purposes, including anomaly detection @cite_21 @cite_8 @cite_19 @cite_7 @cite_0 . A distributed framework for PCA is proposed in @cite_7 to achieve accurate detection of network anomalies by monitoring only local data. A distributed implementation of PCA is developed for decomposable Gaussian graphical models in @cite_0 to allow decentralized anomaly detection in backbone networks. Distributed gossip algorithms that use only local communication have been employed for subspace estimation in sensor networks @cite_26 @cite_10 @cite_23 . Our work in this paper extends the distributed average consensus protocol proposed in @cite_23 to estimate the principal subspace in the context of anomaly detection in network traffic data.
{ "cite_N": [ "@cite_26", "@cite_7", "@cite_8", "@cite_21", "@cite_0", "@cite_19", "@cite_23", "@cite_10" ], "mid": [ "2117905067", "", "2134522408", "", "2158339409", "", "2142535750", "2120286056" ], "abstract": [ "Motivated by applications to sensor, peer-to-peer, and ad hoc networks, we study distributed algorithms, also known as gossip algorithms, for exchanging information and for computing in an arbitrarily connected network of nodes. The topology of such networks changes continuously as new nodes join and old nodes leave the network. Algorithms for such networks need to be robust against changes in topology. Additionally, nodes in sensor networks operate under limited computational, communication, and energy resources. These constraints have motivated the design of \"gossip\" algorithms: schemes which distribute the computational burden and in which a node communicates with a randomly chosen neighbor. We analyze the averaging problem under the gossip constraint for an arbitrary network graph, and find that the averaging time of a gossip algorithm depends on the second largest eigenvalue of a doubly stochastic matrix characterizing the algorithm. Designing the fastest gossip algorithm corresponds to minimizing this eigenvalue, which is a semidefinite program (SDP). In general, SDPs cannot be solved in a distributed fashion; however, exploiting problem structure, we propose a distributed subgradient method that solves the optimization problem over the network. The relation of averaging time to the second largest eigenvalue naturally relates it to the mixing time of a random walk with transition probabilities derived from the gossip algorithm. We use this connection to study the performance and scaling of gossip algorithms on two popular networks: Wireless Sensor Networks, which are modeled as Geometric Random Graphs, and the Internet graph under the so-called Preferential Connectivity (PC) model.", "", "In this paper, we revisit the distributed total least squares (D-TLS) algorithm, which operates in an ad hoc sensor network where each node has access to a subset of the equations of an overdetermined set of linear equations. The D-TLS algorithm computes the total least squares (TLS) solution of the full set of equations in a fully distributed fashion (without fusion center). We modify the D-TLS algorithm to eliminate the large computational complexity due to an eigenvalue decomposition (EVD) at every node and in each iteration. In the modified algorithm, a single power iteration (PI) is performed instead of a full EVD computation, which significantly reduces the computational complexity. Since the nodes then do not exchange their true eigenvectors, the theoretical convergence results of the original D-TLS algorithm do not hold anymore. Nevertheless, we find that this PI-based D-TLS algorithm still converges to the network-wide TLS solution, under certain assumptions, which are often satisfied in practice. We provide simulation results to demonstrate the convergence of the algorithm, even when some of these assumptions are not satisfied.", "", "In this paper, we consider principal component analysis (PCA) in decomposable Gaussian graphical models. We exploit the prior information in these models in order to distribute PCA computation. For this purpose, we reformulate the PCA problem in the sparse inverse covariance (concentration) domain and address the global eigenvalue problem by solving a sequence of local eigenvalue problems in each of the cliques of the decomposable graph. We illustrate our methodology in the context of decentralized anomaly detection in the Abilene backbone network. Based on the topology of the network, we propose an approximate statistical graphical model and distribute the computation of PCA.", "", "Motivated by applications in multi-sensor array detection and estimation, this paper studies the problem of tracking the principal eigenvector and the principal subspace of a signal's covariance matrix adaptively in a fully decentralized wireless sensor network (WSN). Sensor networks are traditionally designed to simply gather raw data at a fusion center, where all the processing occurs. In large deployments, this model entails high networking cost and creates a computational and storage bottleneck for the system. By leveraging both sensors' abilities to communicate and their local computational power, our objective is to propose distributed algorithms for principal eigenvector and principal subspace tracking. We show that it is possible to have each sensor estimate only the corresponding entry of the principal eigenvector or corresponding row of the p-dimensional principal subspace matrix and do so by iterating a simple computation that combines data from its network neighbors only. This paper also examines the convergence properties of the proposed principal eigenvector and principal subspace tracking algorithms analytically and by simulations.", "Gossip algorithms are attractive for in-network processing in sensor networks because they do not require any specialized routing, there is no bottleneck or single point of failure, and they are robust to unreliable wireless network conditions. Recently, there has been a surge of activity in the computer science, control, signal processing, and information theory communities, developing faster and more robust gossip algorithms and deriving theoretical performance guarantees. This paper presents an overview of recent work in the area. We describe convergence rate results, which are related to the number of transmitted messages and thus the amount of energy consumed in the network for gossiping. We discuss issues related to gossiping over wireless links, including the effects of quantization and noise, and we illustrate the use of gossip algorithms for canonical signal processing tasks including distributed estimation, source localization, and compression." ] }
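The primitive underlying several of the gossip and consensus works above is distributed averaging: each node repeatedly mixes its value with its neighbors' and the whole network converges to the global average without a fusion center. Below is a toy scalar sketch (the function name, step size, and ring topology are our own illustration; the cited methods apply similar iterations to covariance or subspace quantities).

```python
import numpy as np

def average_consensus(values, neighbors, step=0.3, iters=200):
    """Synchronous distributed averaging: at every round, node i nudges
    its value toward its neighbors' values. With a symmetric topology
    and a small enough step, all nodes converge to the global mean."""
    x = np.array(values, dtype=float)
    for _ in range(iters):
        new_x = x.copy()
        for i, nbrs in enumerate(neighbors):
            new_x[i] += step * sum(x[j] - x[i] for j in nbrs)
        x = new_x
    return x

# Four nodes on a ring, each holding one local statistic; every node
# ends up with the network-wide average (2.5) using only local messages.
vals = [1.0, 2.0, 3.0, 4.0]
ring = [[1, 3], [0, 2], [1, 3], [2, 0]]
result = average_consensus(vals, ring)
```

Because each symmetric exchange preserves the sum of the values, the fixed point is exactly the average; the step size must stay below 2 divided by the largest graph-Laplacian eigenvalue for the iteration to be stable.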
1606.04646
2439973210
Unsupervised learning is the most challenging problem in machine learning and especially in deep learning. Among many scenarios, we study an unsupervised learning problem of high economic value --- learning to predict without costly pairing of input data and corresponding labels. Part of the difficulty in this problem is a lack of solid evaluation measures. In this paper, we take a practical approach to grounding unsupervised learning by using the same success criterion as for supervised learning in prediction tasks but we do not require the presence of paired input-output training data. In particular, we propose an objective function that aims to make the predicted outputs fit well the structure of the output while preserving the correlation between the input and the predicted output. We experiment with a synthetic structural prediction problem and show that even with simple linear classifiers, the objective function is already highly non-convex. We further demonstrate the nature of this non-convex optimization problem as well as potential solutions. In particular, we show that with regularization via a generative model, learning with the proposed unsupervised objective function converges to an optimal solution.
For unsupervised learning applied to prediction and related tasks, several main approaches have been taken in the past. An important line of research has focused on exploiting the structure of input data by learning the data distribution under the maximum-likelihood rule. The most successful examples in this category include the restricted Boltzmann machine (RBM) @cite_11 @cite_25 , the deep belief network @cite_28 , and topic models @cite_12 . The main technical challenge of these methods is the difficulty of computing the gradient of the likelihood function exactly. For this reason, various approximate methods have been developed, such as variational inference @cite_18 and Monte Carlo methods @cite_1 .
{ "cite_N": [ "@cite_18", "@cite_28", "@cite_1", "@cite_25", "@cite_12", "@cite_11" ], "mid": [ "1516111018", "2951805548", "2138309709", "", "1880262756", "" ], "abstract": [ "This paper presents a tutorial introduction to the use of variational methods for inference and learning in graphical models (Bayesian networks and Markov random fields). We present a number of examples of graphical models, including the QMR-DT database, the sigmoid belief network, the Boltzmann machine, and several variants of hidden Markov models, in which it is infeasible to run exact inference algorithms. We then introduce variational methods, which exploit laws of large numbers to transform the original graphical model into a simplified graphical model in which inference is efficient. Inference in the simplified model provides bounds on probabilities of interest in the original model. We describe a general framework for generating variational transformations based on convex duality. Finally we return to the examples and demonstrate how variational algorithms can be formulated in each case.", "We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.", "SUMMARY A generalization of the sampling method introduced by (1953) is presented along with an exposition of the relevant theory, techniques of application and methods and difficulties of assessing the error in Monte Carlo estimates. Examples of the methods, including the generation of random orthogonal matrices and potential applications of the methods to numerical problems arising in statistics, are discussed. For numerical problems in a large number of dimensions, Monte Carlo methods are often more efficient than conventional numerical methods. However, implementation of the Monte Carlo methods requires sampling from high dimensional probability distributions and this may be very difficult and expensive in analysis and computer time. General methods for sampling from, or estimating expectations with respect to, such distributions are as follows. (i) If possible, factorize the distribution into the product of one-dimensional conditional distributions from which samples may be obtained. (ii) Use importance sampling, which may also be used for variance reduction. That is, in order to evaluate the integral J = X) p(x)dx = Ev(f), where p(x) is a probability density function, instead of obtaining independent samples XI, ..., Xv from p(x) and using the estimate J, = Zf(xi) N, we instead obtain the sample from a distribution with density q(x) and use the estimate J2 = Y f(xj)p(x1) q(xj)N . This may be advantageous if it is easier to sample from q(x) than p(x), but it is a difficult method to use in a large number of dimensions, since the values of the weights w(xi) = p(x1) q(xj) for reasonable values of N may all be extremely small, or a few may be extremely large. In estimating the probability of an event A, however, these difficulties may not be as serious since the only values of w(x) which are important are those for which x -A. Since the methods proposed by Trotter & Tukey (1956) for the estimation of conditional expectations require the use of importance sampling, the same difficulties may be encountered in their use. (iii) Use a simulation technique; that is, if it is difficult to sample directly from p(x) or if p(x) is unknown, sample from some distribution q(y) and obtain the sample x values as some function of the corresponding y values. If we want samples from the conditional dis", "", "We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.", "" ] }
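As a concrete instance of the Monte Carlo machinery mentioned above, importance sampling estimates an expectation under a hard-to-sample target density p by drawing from an easier proposal q and reweighting each draw by p(x)/q(x). A minimal sketch (the function name and the uniform-distribution example are illustrative, not taken from the cited works):

```python
import random

def importance_estimate(f, p, q, sample_q, n=100_000):
    """Monte Carlo importance sampling: estimate E_p[f(X)] using n draws
    from a proposal with density q, reweighting each draw by p(x)/q(x)."""
    total = 0.0
    for _ in range(n):
        x = sample_q()
        total += f(x) * p(x) / q(x)
    return total / n

# Estimate E[X] = 0.5 for X ~ Uniform(0, 1), sampling from Uniform(0, 2):
# draws above 1 get weight 0, draws below 1 get weight p/q = 1/0.5 = 2.
random.seed(0)
est = importance_estimate(
    f=lambda x: x,
    p=lambda x: 1.0 if 0.0 <= x <= 1.0 else 0.0,  # target density
    q=lambda x: 0.5,                              # proposal density on [0, 2]
    sample_q=lambda: random.uniform(0.0, 2.0),
)
```

As the quoted abstract warns, this estimator degrades in high dimensions, where the weights p(x)/q(x) can become almost all negligible with a few enormous outliers.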
1606.04596
2422843715
While end-to-end neural machine translation (NMT) has made remarkable progress recently, NMT systems only rely on parallel corpora for parameter estimation. Since parallel corpora are usually limited in quantity, quality, and coverage, especially for low-resource languages, it is appealing to exploit monolingual corpora to improve NMT. We propose a semi-supervised approach for training NMT models on the concatenation of labeled (parallel corpora) and unlabeled (monolingual corpora) data. The central idea is to reconstruct the monolingual corpora using an autoencoder, in which the source-to-target and target-to-source translation models serve as the encoder and decoder, respectively. Our approach can not only exploit the monolingual corpora of the target language, but also of the source language. Experiments on the Chinese-English dataset show that our approach achieves significant improvements over state-of-the-art SMT and NMT systems.
Exploiting monolingual corpora for conventional SMT has attracted intensive attention in recent years. Several authors have introduced transductive learning to make full use of monolingual corpora @cite_16 @cite_0 . They use an existing translation model to translate unseen source text, which is then paired with its translations to form a pseudo-parallel corpus; this process iterates until convergence. One line of work proposes estimating phrase translation probabilities from monolingual corpora, while another directly extracts parallel phrases from monolingual corpora using retrieval techniques. Another important line of research is to treat translation on monolingual corpora as a decipherment problem @cite_11 @cite_5 .
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_16", "@cite_11" ], "mid": [ "2122270629", "2251560973", "", "2121745180" ], "abstract": [ "Domain adaptation has recently gained interest in statistical machine translation to cope with the performance drop observed when testing conditions deviate from training conditions. The basic idea is that in-domain training data can be exploited to adapt all components of an already developed system. Previous work showed small performance gains by adapting from limited in-domain bilingual data. Here, we aim instead at significant performance gains by exploiting large but cheap monolingual in-domain data, either in the source or in the target language. We propose to synthesize a bilingual corpus by translating the monolingual adaptation data into the counterpart language. Investigations were conducted on a state-of-the-art phrase-based system trained on the Spanish--English part of the UN corpus, and adapted on the corresponding Europarl data. Translation, re-ordering, and language models were estimated after translating in-domain texts with the baseline. By optimizing the interpolation of these models on a development set the BLEU score was improved from 22.60 to 28.10 on a test set.", "Inspired by previous work, where decipherment is used to improve machine translation, we propose a new idea to combine word alignment and decipherment into a single learning process. We use EM to estimate the model parameters, not only to maximize the probability of parallel corpus, but also the monolingual corpus. We apply our approach to improve Malagasy-English machine translation, where only a small amount of parallel data is available. In our experiments, we observe gains of 0.9 to 2.1 Bleu over a strong baseline.", "", "In this work, we tackle the task of machine translation (MT) without parallel training data. We frame the MT problem as a decipherment task, treating the foreign text as a cipher for English and present novel methods for training translation models from non-parallel text." ] }
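The transductive loop described above (translate unlabeled source text with the current model, pair it with its translations to form pseudo-parallel data, retrain, repeat) can be sketched as follows. The word-table "model" here is a deliberately trivial stand-in for a real SMT/NMT system, and all names are our own illustration.

```python
def train_word_model(pairs):
    """Toy 'translation model': a source->target word table built from
    position-wise alignment of sentence pairs (illustration only)."""
    table = {}
    for src, tgt in pairs:
        for s, t in zip(src.split(), tgt.split()):
            table.setdefault(s, t)
    return table

def translate(model, sentence):
    """Word-by-word lookup; unknown words pass through unchanged."""
    return " ".join(model.get(w, w) for w in sentence.split())

def transductive_training(parallel, monolingual, rounds=3):
    """Self-training loop: translate the monolingual source text with
    the current model, add the resulting pseudo-pairs to the training
    set, and retrain; real systems iterate until convergence."""
    pairs = list(parallel)
    for _ in range(rounds):
        model = train_word_model(pairs)
        pseudo = [(src, translate(model, src)) for src in monolingual]
        pairs = list(parallel) + pseudo
    return train_word_model(pairs)

seed = [("s1 s2", "t1 t2")]       # tiny parallel corpus (toy tokens)
mono = ["s1 s2 s3"]               # monolingual source text
model = transductive_training(seed, mono)
```

The semi-supervised autoencoder approach of the paper above can be seen as a differentiable relative of this loop, with the source-to-target and target-to-source models trained jointly rather than by alternating hard retraining.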
1606.04456
2336207029
Continued reliance on human operators for managing data centers is a major impediment to their ever reaching extreme dimensions. Large computer systems in general, and data centers in particular, will ultimately be managed using predictive computational and executable models obtained through data-science tools, and at that point, the intervention of humans will be limited to setting high-level goals and policies rather than performing low-level operations. Data-driven autonomics, where management and control are based on holistic predictive models that are built and updated using live data, opens one possible path towards limiting the role of operators in data centers. In this paper, we present a data-science study of a public Google dataset collected in a 12K-node cluster with the goal of building and evaluating predictive models for node failures. Our results support the practicality of a data-driven approach by showing the effectiveness of predictive models based on data found in typical data center logs. We use BigQuery, the big data SQL platform from the Google Cloud suite, to process massive amounts of data and generate a rich feature set characterizing node state over time. We describe how an ensemble classifier can be built out of many Random Forest classifiers each trained on these features, to predict if nodes will fail in a future 24-h window. Our evaluation reveals that if we limit false positive rates to 5%, we can achieve true positive rates between 27% and 88% with precision varying between 50% and 72%. This level of performance allows us to recover a large fraction of jobs' executions (by redirecting them to other nodes when a failure of the present node is predicted) that would otherwise have been wasted due to failures. We discuss the feasibility of including our predictive model as the central component of a data-driven autonomic manager and operating it on-line with live data streams (rather than off-line on data logs). All of the scripts used for BigQuery and classification analyses are publicly available on GitHub.
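The operating point quoted in the abstract above (the true positive rate achievable while capping the false positive rate) can be read off classifier scores by a simple threshold sweep. A minimal sketch (the function name and toy data are our own; the paper's actual evaluation uses Random Forest ensembles on cluster-log features):

```python
def tpr_at_fpr(scores, labels, max_fpr=0.05):
    """Sweep decision thresholds over failure-probability scores and
    return the best true-positive rate achievable while keeping the
    false-positive rate at or below max_fpr. Assumes both classes are
    present in `labels` (1 = node failed, 0 = node healthy)."""
    negatives = sum(1 for y in labels if y == 0)
    positives = len(labels) - negatives
    best_tpr = 0.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        if fp / negatives <= max_fpr:
            best_tpr = max(best_tpr, tp / positives)
    return best_tpr

# With only two healthy nodes, a 5% FPR budget allows zero false alarms,
# so the best admissible threshold catches 2 of the 3 failing nodes.
scores = [0.9, 0.8, 0.7, 0.6, 0.2]
labels = [1, 1, 0, 1, 0]
```

Picking the operating point this way is what makes the reported 27-88% true positive rates comparable: they are all measured at the same 5% false positive budget.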
The publication of the Google trace data has triggered a flurry of activity within the community, including several studies with goals related to ours. Some of these provide general characterization and statistics about the workload and node state for the cluster @cite_24 @cite_29 @cite_27 and identify high levels of heterogeneity and dynamism in the system, especially when compared to grid workloads @cite_0 . User profiles @cite_5 and task usage shapes @cite_6 have also been characterized for this cluster. Other studies have applied clustering techniques for workload characterization, either in terms of jobs and resources @cite_7 @cite_16 or placement constraints @cite_12 , with the aim of synthesizing new traces. Wasted resources due to the priority-based eviction mechanism were evaluated in @cite_1 , where significant resources were shown to be consumed by tasks that do not complete successfully, as we have also seen in our analysis of tasks interrupted by failures.
{ "cite_N": [ "@cite_7", "@cite_29", "@cite_1", "@cite_6", "@cite_24", "@cite_0", "@cite_27", "@cite_5", "@cite_16", "@cite_12" ], "mid": [ "2111556044", "", "2215315572", "2182419557", "2129542763", "2136510202", "2060331550", "2034467200", "2143492785", "2028617807" ], "abstract": [ "The advent of cloud computing promises highly available, efficient, and flexible computing services for applications such as web search, email, voice over IP, and web search alerts. Our experience at Google is that realizing the promises of cloud computing requires an extremely scalable backend consisting of many large compute clusters that are shared by application tasks with diverse service level requirements for throughput, latency, and jitter. These considerations impact (a) capacity planning to determine which machine resources must grow and by how much and (b) task scheduling to achieve high machine utilization and to meet service level objectives. Both capacity planning and task scheduling require a good understanding of task resource consumption (e.g., CPU and memory usage). This in turn demands simple and accurate approaches to workload classification-determining how to form groups of tasks (workloads) with similar resource demands. One approach to workload classification is to make each task its own workload. However, this approach scales poorly since tens of thousands of tasks execute daily on Google compute clusters. Another approach to workload classification is to view all tasks as belonging to a single workload. Unfortunately, applying such a coarse-grain workload classification to the diversity of tasks running on Google compute clusters results in large variances in predicted resource consumptions. This paper describes an approach to workload classification and its application to the Google Cloud Backend, arguably the largest cloud backend on the planet. 
Our methodology for workload classification consists of: (1) identifying the workload dimensions; (2) constructing task classes using an off-the-shelf algorithm such as k-means; (3) determining the break points for qualitative coordinates within the workload dimensions; and (4) merging adjacent task classes to reduce the number of workloads. We use the foregoing, especially the notion of qualitative coordinates, to glean several insights about the Google Cloud Backend: (a) the duration of task executions is bimodal in that tasks either have a short duration or a long duration; (b) most tasks have short durations; and (c) most resources are consumed by a few tasks with long duration that have large demands for CPU and memory.", "", "The ever increasing size and complexity of large-scale datacenters enhance the difficulty of developing efficient scheduling policies for big data systems, where priority scheduling is often employed to guarantee the allocation of system resources to high priority tasks, at the cost of task preemption and resulting resource waste. A large number of related studies focuses on understanding workloads and their performance impact on such systems; nevertheless, existing works pay little attention on evicted tasks, their characteristics, and the resulting impairment on the system performance. In this paper, we base our analysis on Google cluster traces, where tasks can experience three different types of unsuccessful events, namely eviction, kill and fail. We particularly focus on eviction events, i.e., preemption of task execution due to higher priority tasks, and rigorously quantify their performance drawbacks, in terms of wasted machine time and resources, with particular focus on priority. Motivated by the high dependency of eviction on underlying scheduling policies, we also study its statistical patterns and its dependency on other types of unsuccessful events. 
Moreover, by considering co-executed tasks and system load, we deepen the knowledge on priority scheduling, showing how priority and machine utilization affect the eviction process and related tasks.", "The increase in scale and complexity of large compute clusters motivates a need for representative workload benchmarks to evaluate the performance impact of system changes, so as to assist in designing better scheduling algorithms and in carrying out management activities. To achieve this goal, it is necessary to construct workload characterizations from which realistic performance benchmarks can be created. In this paper, we focus on characterizing run-time task resource usage for CPU, memory and disk. The goal is to find an accurate characterization that can faithfully reproduce the performance of historical workload traces in terms of key performance metrics, such as task wait time and machine resource utilization. Through experiments using workload traces from Google production clusters, we find that simply using the mean of task usage can generate synthetic workload traces that accurately reproduce resource utilizations and task waiting time. This seemingly surprising result can be justified by the fact that resource usage for CPU, memory and disk are relatively stable over time for the majority of the tasks. Our work not only presents a simple technique for constructing realistic workload benchmarks, but also provides insights into understanding workload performance in production compute clusters.", "To better understand the challenges in developing effective cloud-based resource schedulers, we analyze the first publicly available trace data from a sizable multi-purpose cluster. The most notable workload characteristic is heterogeneity: in resource types (e.g., cores:RAM per machine) and their usage (e.g., duration and resources needed). Such heterogeneity reduces the effectiveness of traditional slot- and core-based scheduling. 
Furthermore, some tasks are constrained as to the kind of machine types they can use, increasing the complexity of resource assignment and complicating task migration. The workload is also highly dynamic, varying over time and most workload features, and is driven by many short jobs that demand quick scheduling decisions. While few simplifying assumptions apply, we find that many longer-running jobs have relatively stable resource utilizations, which can help adaptive resource schedulers.", "A new era of Cloud Computing has emerged, but the characteristics of Cloud load in data centers is not perfectly clear. Yet this characterization is critical for the design of novel Cloud job and resource management systems. In this paper, we comprehensively characterize the job task load and host load in a real-world production data center at Google Inc. We use a detailed trace of over 25 million tasks across over 12,500 hosts. We study the differences between a Google data center and other Grid HPC systems, from the perspective of both work load (w.r.t. jobs and tasks) and host load (w.r.t. machines). In particular, we study the job length, job submission frequency, and the resource utilization of jobs in the different systems, and also investigate valuable statistics of machine's maximum load, queue state and relative usage levels, with different job priorities and resource attributes. We find that the Google data center exhibits finer resource allocation with respect to CPU and memory than that of Grid HPC systems. Google jobs are always submitted with much higher frequency and they are much shorter than Grid jobs. As such, Google host load exhibits higher variance and noise.", "Cloud computing offers high scalability, flexibility and cost-effectiveness to meet emerging computing requirements. Understanding the characteristics of real workloads on a large production cloud cluster benefits not only cloud service providers but also researchers and daily users. 
This paper studies a large-scale Google cluster usage trace dataset and characterizes how the machines in the cluster are managed and the workloads submitted during a 29-day period behave. We focus on the frequency and pattern of machine maintenance events, job- and task-level workload behavior, and how the overall cluster resources are utilized.", "In the era of cloud computing, users encounter the challenging task of effectively composing and running their applications on the cloud. In an attempt to understand user behavior in constructing applications and interacting with typical cloud infrastructures, we analyzed a large utilization dataset of Google cluster. In the present paper, we consider user behaviorin composing applications from the perspective of topology, maximum requested computational resources, and workload type. We model user dynamic behavior around the user's session view. Mass-Count disparity metrics are used to investigate the characteristics of underlying statistical models and to characterize users into distinct groups according to their composition and behavioral classes and patterns. The present study reveals interesting insight into the heterogeneous structure of the Google cloud workload.", "Designing cloud computing setups is a challenging task. It involves understanding the impact of a plethora of parameters ranging from cluster configuration, partitioning, networking characteristics, and the targeted applications' behavior. The design space, and the scale of the clusters, make it cumbersome and error-prone to test different cluster configurations using real setups. Thus, the community is increasingly relying on simulations and models of cloud setups to infer system behavior and the impact of design choices. The accuracy of the results from such approaches depends on the accuracy and realistic nature of the workload traces employed. Unfortunately, few cloud workload traces are available (in the public domain). 
In this paper, we present the key steps towards analyzing the traces that have been made public, e.g., from Google, and inferring lessons that can be used to design realistic cloud workloads as well as enable thorough quantitative studies of Hadoop design. Moreover, we leverage the lessons learned from the traces to undertake two case studies: (i) Evaluating Hadoop job schedulers, and (ii) Quantifying the impact of shared storage on Hadoop system performance.", "Evaluating the performance of large compute clusters requires benchmarks with representative workloads. At Google, performance benchmarks are used to obtain performance metrics such as task scheduling delays and machine resource utilizations to assess changes in application codes, machine configurations, and scheduling algorithms. Existing approaches to workload characterization for high performance computing and grids focus on task resource requirements for CPU, memory, disk, I O, network, etc. Such resource requirements address how much resource is consumed by a task. However, in addition to resource requirements, Google workloads commonly include task placement constraints that determine which machine resources are consumed by tasks. Task placement constraints arise because of task dependencies such as those related to hardware architecture and kernel version. This paper develops methodologies for incorporating task placement constraints and machine properties into performance benchmarks of large compute clusters. Our studies of Google compute clusters show that constraints increase average task scheduling delays by a factor of 2 to 6, which often results in tens of minutes of additional task wait time. To understand why, we extend the concept of resource utilization to include constraints by introducing a new metric, the Utilization Multiplier (UM). UM is the ratio of the resource utilization seen by tasks with a constraint to the average utilization of the resource. 
UM provides a simple model of the performance impact of constraints in that task scheduling delays increase with UM. Last, we describe how to synthesize representative task constraints and machine properties, and how to incorporate this synthesis into existing performance benchmarks. Using synthetic task constraints and machine properties generated by our methodology, we accurately reproduce performance metrics for benchmarks of Google compute clusters with a discrepancy of only 13% in task scheduling delay and 5% in resource utilization." ] }
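The workload-classification recipe summarized in the @cite_7 abstract above (pick workload dimensions, form task classes with an off-the-shelf k-means, then describe each class along those dimensions) can be sketched as follows. All feature values and the cluster count here are invented for illustration; they are not taken from the Google trace.

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative task feature matrix: one row per task, columns are the
# workload dimensions named in the abstract (duration [s], CPU, memory).
rng = np.random.default_rng(0)
short_tasks = rng.normal([60, 0.1, 0.2], [10, 0.02, 0.05], size=(200, 3))
long_tasks = rng.normal([3600, 0.8, 0.9], [300, 0.1, 0.1], size=(20, 3))
tasks = np.vstack([short_tasks, long_tasks])

# Step 2 of the methodology: construct task classes with k-means.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(tasks)

# Step 3 (coarsely): characterize each class along the workload
# dimensions, e.g. by its per-dimension mean.
for label in range(km.n_clusters):
    members = tasks[km.labels_ == label]
    print(label, members.shape[0], members.mean(axis=0).round(2))
```

On this synthetic data the two classes recover the bimodal short-duration/long-duration split that the abstract reports for the real cluster.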
1606.04456
2336207029
Continued reliance on human operators for managing data centers is a major impediment for them from ever reaching extreme dimensions. Large computer systems in general, and data centers in particular, will ultimately be managed using predictive computational and executable models obtained through data-science tools, and at that point, the intervention of humans will be limited to setting high-level goals and policies rather than performing low-level operations. Data-driven autonomics, where management and control are based on holistic predictive models that are built and updated using live data, opens one possible path towards limiting the role of operators in data centers. In this paper, we present a data-science study of a public Google dataset collected in a 12K-node cluster with the goal of building and evaluating predictive models for node failures. Our results support the practicality of a data-driven approach by showing the effectiveness of predictive models based on data found in typical data center logs. We use BigQuery, the big data SQL platform from the Google Cloud suite, to process massive amounts of data and generate a rich feature set characterizing node state over time. We describe how an ensemble classifier can be built out of many Random Forest classifiers each trained on these features, to predict if nodes will fail in a future 24-h window. Our evaluation reveals that if we limit false positive rates to 5%, we can achieve true positive rates between 27% and 88% with precision varying between 50% and 72%. This level of performance allows us to recover large fraction of jobs' executions (by redirecting them to other nodes when a failure of the present node is predicted) that would otherwise have been wasted due to failures. We discuss the feasibility of including our predictive model as the central component of a data-driven autonomic manager and operating it on-line with live data streams (rather than off-line on data logs). 
All of the scripts used for BigQuery and classification analyses are publicly available on GitHub.
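The operating point described in the abstract above (predict node failures with a Random Forest based classifier, scored at a decision threshold chosen to cap the false positive rate at 5%) can be sketched as follows. A single Random Forest stands in for the paper's ensemble of forests, and the node features and labels are synthetic; only the 5% cap is taken from the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
n = 2000
# Hypothetical per-node features (e.g. recent load, recent failure counts);
# label 1 means the node fails within the next 24-hour window.
X = rng.normal(size=(n, 5))
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=1.0, size=n) > 1.5).astype(int)

train, test = slice(0, 1500), slice(1500, n)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[train], y[train])

# Choose the most permissive score threshold whose false positive rate
# stays under the 5% cap, then report the true positive rate achieved there.
scores = clf.predict_proba(X[test])[:, 1]
fpr, tpr, thresholds = roc_curve(y[test], scores)
ok = fpr <= 0.05
threshold = thresholds[ok][-1]
achieved_tpr = tpr[ok][-1]
pred = (scores >= threshold).astype(int)
print(f"threshold={threshold:.2f}, TPR at FPR<=5%: {achieved_tpr:.2f}")
```

Sweeping the threshold along the ROC curve is what lets the operator trade true positive rate against the false positive budget, the trade-off the abstract quantifies.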
A different class of studies addresses the validation of various workload management algorithms. Examples include @cite_31 , where the trace is used to evaluate consolidation strategies; @cite_10 @cite_2 , where over-committing (overbooking) is validated; @cite_25 , which takes heterogeneity into account to perform provisioning; and @cite_18 , which investigates checkpointing algorithms.
{ "cite_N": [ "@cite_18", "@cite_2", "@cite_31", "@cite_10", "@cite_25" ], "mid": [ "1966771895", "2002566704", "2052179907", "2031679758", "2072362295" ], "abstract": [ "In this paper, we aim at optimizing fault-tolerance techniques based on a checkpointing restart mechanism, in the context of cloud computing. Our contribution is three-fold. (1) We derive a fresh formula to compute the optimal number of checkpoints for cloud jobs with varied distributions of failure events. Our analysis is not only generic with no assumption on failure probability distribution, but also attractively simple to apply in practice. (2) We design an adaptive algorithm to optimize the impact of checkpointing regarding various costs like checkpointing restart overhead. (3) We evaluate our optimized solution in a real cluster environment with hundreds of virtual machines and Berkeley Lab Checkpoint Restart tool. Task failure events are emulated via a production trace produced on a large-scale Google data center. Experiments confirm that our solution is fairly suitable for Google systems. Our optimized formula outperforms Young's formula by 3-10 percent, reducing wall-clock lengths by 50-100 seconds per job on average.", "One of the key enablers of a cloud provider competitiveness is ability to over-commit shared infrastructure at ratios that are higher than those of other competitors, without compromising non-functional requirements, such as performance. A widely recognized impediment to achieving this goal is so called \"Virtual Machines sprawl\", a phenomenon referring to the situation when customers order Virtual Machines (VM) on the cloud, use them extensively and then leave them inactive for prolonged periods of time. 
Since a typical cloud provisioning system treats new VM provision requests according to the nominal virtual hardware specification, an often occurring situation is that the nominal resources of a cloud pool become exhausted fast while the physical hosts utilization remains low. We present a novel cloud resources scheduler called Pulsar that extends OpenStack Nova Filter Scheduler. The key design principle of Pulsar is adaptivity. It recognises that effective safely attainable over-commit ratio varies with time due to workloads' variability and dynamically adapts the effective over-commit ratio to these changes. We evaluate Pulsar via extensive simulations and demonstrate its performance on the actual OpenStack based testbed running popular workloads.", "Cloud providers aim to provide computing services for a wide range of applications, such as web applications, emails, web searches, map reduce jobs. These applications are commonly scheduled to run on multi-purpose clusters that nowadays are becoming larger and more heterogeneous. A major challenge is to efficiently utilize the cluster's available resources, in particular to maximize the machines' utilization level while minimizing the applications' waiting time. We studied a publicly available trace from a large Google cluster (~12,000 machines) and observed that users generally request more resources than required for running their tasks, leading to low levels of utilization. In this paper, we propose a methodology for achieving an efficient utilization of the cluster's resources while providing the users with fast and reliable computing services. The methodology consists of three main modules: i) a prediction module that forecasts the maximum resource requirement of a task, ii) a scalable scheduling module that efficiently allocates tasks to machines, and iii) a monitoring module that tracks the levels of utilization of the machines and tasks. 
We present results that show that the impact of more accurate resource estimations for the scheduling of tasks can lead to an increase in the average utilization of the cluster, a reduction in the number of tasks being evicted, and a reduction in the tasks' waiting time.", "Cloud service providers (CSPs) often overbook their resources with user applications despite having to maintain service-level agreements with their customers. Overbooking is attractive to CSPs because it helps to reduce power consumption in the data center by packing more user jobs in less number of resources while improving their profits. Overbooking becomes feasible because user applications tend to overestimate their resource requirements utilizing only a fraction of the allocated resources. Arbitrary resource overbooking ratios, however, may be detrimental to soft real-time applications, such as airline reservations or Netflix video streaming, which are increasingly hosted in the cloud. The changing dynamics of the cloud preclude an offline determination of overbooking ratios. To address these concerns, this paper presents iOverbook, which uses a machine learning approach to make systematic and online determination of overbooking ratios such that the quality of service needs of soft real-time systems can be met while still benefiting from overbooking. Specifically, iOverbook utilizes historic data of tasks and host machines in the cloud to extract their resource usage patterns and predict future resource usage along with the expected mean performance of host machines. To evaluate our approach, we have used a large usage trace made available by Google of one of its production data centers. In the context of the traces, our experiments show that iOverbook can help CSPs improve their resource utilization by an average of 12.5% and save 32% power in the data center.", "Data centers consume tremendous amounts of energy in terms of power distribution and cooling. 
Dynamic capacity provisioning is a promising approach for reducing energy consumption by dynamically adjusting the number of active machines to match resource demands. However, despite extensive studies of the problem, existing solutions have not fully considered the heterogeneity of both workload and machine hardware found in production environments. In particular, production data centers often comprise heterogeneous machines with different capacities and energy consumption characteristics. Meanwhile, the production cloud workloads typically consist of diverse applications with different priorities, performance and resource requirements. Failure to consider the heterogeneity of both machines and workloads will lead to both sub-optimal energy-savings and long scheduling delays, due to incompatibility between workload requirements and the resources offered by the provisioned machines. To address this limitation, we present Harmony, a Heterogeneity-Aware dynamic capacity provisioning scheme for cloud data centers. Specifically, we first use the K-means clustering algorithm to divide workload into distinct task classes with similar characteristics in terms of resource and performance requirements. Then we present a technique that dynamically adjusting the number of machines to minimize total energy consumption and scheduling delay. Simulations using traces from a Google's compute cluster demonstrate Harmony can reduce energy by 28 percent compared to heterogeneity-oblivious solutions." ] }
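For context on the checkpointing study in @cite_18 above, which compares its adaptive scheme against Young's formula: Young's classical first-order result sets the checkpoint interval to sqrt(2 · C · MTBF), where C is the cost of writing one checkpoint and MTBF is the mean time between failures. A minimal sketch (the parameter values are invented for illustration):

```python
import math

def young_interval(checkpoint_cost_s: float, mtbf_s: float) -> float:
    """Young's first-order optimal checkpoint interval: sqrt(2 * C * MTBF)."""
    return math.sqrt(2 * checkpoint_cost_s * mtbf_s)

# Illustrative numbers: 30 s to write a checkpoint, one failure per day.
tau = young_interval(30.0, 24 * 3600.0)
print(f"checkpoint roughly every {tau / 60:.1f} minutes")
```

Schemes like the one in @cite_18 improve on this by adapting the interval to the empirical failure distribution rather than assuming the first-order model.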
1606.04456
2336207029
Continued reliance on human operators for managing data centers is a major impediment for them from ever reaching extreme dimensions. Large computer systems in general, and data centers in particular, will ultimately be managed using predictive computational and executable models obtained through data-science tools, and at that point, the intervention of humans will be limited to setting high-level goals and policies rather than performing low-level operations. Data-driven autonomics, where management and control are based on holistic predictive models that are built and updated using live data, opens one possible path towards limiting the role of operators in data centers. In this paper, we present a data-science study of a public Google dataset collected in a 12K-node cluster with the goal of building and evaluating predictive models for node failures. Our results support the practicality of a data-driven approach by showing the effectiveness of predictive models based on data found in typical data center logs. We use BigQuery, the big data SQL platform from the Google Cloud suite, to process massive amounts of data and generate a rich feature set characterizing node state over time. We describe how an ensemble classifier can be built out of many Random Forest classifiers each trained on these features, to predict if nodes will fail in a future 24-h window. Our evaluation reveals that if we limit false positive rates to 5%, we can achieve true positive rates between 27% and 88% with precision varying between 50% and 72%. This level of performance allows us to recover large fraction of jobs' executions (by redirecting them to other nodes when a failure of the present node is predicted) that would otherwise have been wasted due to failures. We discuss the feasibility of including our predictive model as the central component of a data-driven autonomic manager and operating it on-line with live data streams (rather than off-line on data logs). 
All of the scripts used for BigQuery and classification analyses are publicly available on GitHub.
System modeling and prediction studies using the Google trace data are far fewer than those aimed at characterization or validation. An early attempt at system modeling based on this trace validates an event-based simulator using workload parameters extracted from the data, with good performance in simulating job status and overall system load @cite_11 @cite_4 . Host load prediction using a Bayesian classifier was analyzed in @cite_8 . Using CPU and RAM history, the mean load in a future time window is predicted by dividing possible load levels into 50 discrete states. Job failure prediction and mitigation are attempted in @cite_30 . Failure prediction in general has been an active research area, with a comprehensive review @cite_9 summarizing several methods of failure prediction in single machines, clusters, application servers, file systems, hard drives, email servers and clients, etc., dividing them into failure tracking, symptom monitoring or error reporting. The method introduced here falls into the symptom monitoring category; however, elements of failure tracking and error reporting are also present, through features such as the number of recent failures and job failure events.
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_8", "@cite_9", "@cite_11" ], "mid": [ "1555569599", "2204311365", "2075168211", "2164463086", "2949427583" ], "abstract": [ "In large-scale datacenters, software and hardware failures are frequent, resulting in failures of job executions that may cause significant resource waste and performance deterioration. To proactively minimize the resource inefficiency due to job failures, it is important to identify them in advance using key job attributes. However, so far, prevailing research on datacenter workload characterization has overlooked job failures, including their patterns, root causes, and impact. In this paper, we aim to develop prediction models and mitigation policies for unsuccessful jobs, so as to reduce the resource waste in big datacenters. In particular, we base our analysis on Google cluster traces, consisting of a large number of big-data jobs with a high task fan-out. We first identify the time-varying patterns of failed jobs and the contributing system features. Based on our characterization study, we develop an on-line predictive model for job failures by applying various statistical learning techniques, namely Linear Discriminate Analysis (LDA), Quadratic Discriminate Analysis (QDA), and Logistic Regression (LR). Furthermore, we propose a delay-based mitigation policy which, after a certain grace period, proactively terminates the execution of jobs that are predicted to fail. The particular objective of postponing job terminations is to strike a good tradeoffs between resource waste and false prediction of successful jobs. Our evaluation results show that the proposed method is able to significantly reduce the resource waste by 41.9% on average, and keep false terminations of jobs low, i.e., only 1%.", "Current generation of Internet-based services are typically hosted on large data centers that take the form of warehouse-size structures housing tens of thousands of servers. 
Continued availability of a modern data center is the result of a complex orchestration among many internal and external actors including computing hardware, multiple layers of intricate software, networking and storage devices, electrical power and cooling plants. During the course of their operation, many of these components produce large amounts of data in the form of event and error logs that are essential not only for identifying and resolving problems but also for improving data center efficiency and management. Most of these activities would benefit significantly from data analytics techniques to exploit hidden statistical patterns and correlations that may be present in the data. The sheer volume of data to be analyzed makes uncovering these correlations and patterns a challenging task. This paper presents Big Data analyzer (BiDAl), a prototype Java tool for log-data analysis that incorporates several Big Data technologies in order to simplify the task of extracting information from data traces produced by large clusters and server farms. BiDAl provides the user with several analysis languages (SQL, R and Hadoop MapReduce) and storage backends (HDFS and SQLite) that can be freely mixed and matched so that a custom tool for a specific task can be easily constructed. BiDAl has a modular architecture so that it can be extended with other backends and analysis languages in the future. In this paper we present the design of BiDAl and describe our experience using it to analyze publicly-available traces from Google data clusters, with the goal of building a realistic model of a complex data center.", "Prediction of host load in Cloud systems is critical for achieving service-level agreements. However, accurate prediction of host load in Clouds is extremely challenging because it fluctuates drastically at small timescales. 
We design a prediction method based on Bayes model to predict the mean load over a long-term time interval, as well as the mean load in consecutive future time intervals. We identify novel predictive features of host load that capture the expectation, predictability, trends and patterns of host load. We also determine the most effective combinations of these features for prediction. We evaluate our method using a detailed one-month trace of a Google data center with thousands of machines. Experiments show that the Bayes method achieves high accuracy with a mean squared error of 0.0014. Moreover, the Bayes method improves the load prediction accuracy by 5.6%--50% compared to other state-of-the-art methods based on moving averages, auto-regression, and/or noise filters.", "With the ever-growing complexity and dynamicity of computer systems, proactive fault management is an effective approach to enhancing availability. Online failure prediction is the key to such techniques. In contrast to classical reliability methods, online failure prediction is based on runtime monitoring and a variety of models and methods that use the current state of a system and, frequently, the past experience as well. This survey describes these methods. To capture the wide spectrum of approaches concerning this area, a taxonomy has been developed, whose different approaches are explained and major concepts are described in detail.", "Modern data centers that provide Internet-scale services are stadium-size structures housing tens of thousands of heterogeneous devices (server clusters, networking equipment, power and cooling infrastructures) that must operate continuously and reliably. As part of their operation, these devices produce large amounts of data in the form of event and error logs that are essential not only for identifying problems but also for improving data center efficiency and management. 
These activities employ data analytics and often exploit hidden statistical patterns and correlations among different factors present in the data. Uncovering these patterns and correlations is challenging due to the sheer volume of data to be analyzed. This paper presents BiDAl, a prototype \"log-data analysis framework\" that incorporates various Big Data technologies to simplify the analysis of data traces from large clusters. BiDAl is written in Java with a modular and extensible architecture so that different storage backends (currently, HDFS and SQLite are supported), as well as different analysis languages (current implementation supports SQL, R and Hadoop MapReduce) can be easily selected as appropriate. We present the design of BiDAl and describe our experience using it to analyze several public traces of Google data clusters for building a simulation model capable of reproducing observed behavior." ] }
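The host-load predictor in the @cite_8 abstract above discretizes load into a fixed number of states and predicts the mean load of a future interval with a Bayesian model. A much-reduced sketch of that discretize-then-predict idea: the 50-state discretization follows the abstract, while the transition-frequency model and the synthetic load trace are simplifying assumptions (the paper's actual features and Bayes model are richer).

```python
import numpy as np

N_STATES = 50  # discrete load levels, per the abstract

def to_state(load: np.ndarray) -> np.ndarray:
    """Discretize load values in [0, 1] into N_STATES levels."""
    return np.minimum((load * N_STATES).astype(int), N_STATES - 1)

# Synthetic host-load trace: slow sinusoid plus noise, clipped to [0, 1].
rng = np.random.default_rng(2)
t = np.arange(5000)
load = np.clip(0.5 + 0.3 * np.sin(t / 200) + rng.normal(scale=0.05, size=t.size), 0, 1)
states = to_state(load)

# Bayesian (frequency-count) estimate of P(next state | current state),
# with add-one smoothing so unseen transitions keep non-zero probability.
counts = np.ones((N_STATES, N_STATES))
for cur, nxt in zip(states[:-1], states[1:]):
    counts[cur, nxt] += 1
posterior = counts / counts.sum(axis=1, keepdims=True)

# Predict the mean load of the next interval as the posterior-expected state.
state_mid = (np.arange(N_STATES) + 0.5) / N_STATES
pred_next = posterior[states[-1]] @ state_mid
print(f"current load {load[-1]:.2f}, predicted next-interval mean {pred_next:.2f}")
```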
1606.04404
2432402544
Person re-identification across disjoint camera views has been widely applied in video surveillance yet it is still a challenging problem. One of the major challenges lies in the lack of spatial and temporal cues, which makes it difficult to deal with large variations of lighting conditions, viewing angles, body poses, and occlusions. Recently, several deep-learning-based person re-identification approaches have been proposed and achieved remarkable performance. However, most of those approaches extract discriminative features from the whole frame at one glimpse without differentiating various parts of the persons to identify. It is essentially important to examine multiple highly discriminative local regions of the person images in details through multiple glimpses for dealing with the large appearance variance. In this paper, we propose a new soft attention-based model, i.e. , the end-to-end comparative attention network (CAN), specifically tailored for the task of person re-identification. The end-to-end CAN learns to selectively focus on parts of pairs of person images after taking a few glimpses of them and adaptively comparing their appearance. The CAN model is able to learn which parts of images are relevant for discerning persons and automatically integrates information from different parts to determine whether a pair of images belongs to the same person. In other words, our proposed CAN model simulates the human perception process to verify whether two images are from the same person. Extensive experiments on four benchmark person re-identification data sets, including CUHK01, CHUHK03, Market-1501, and VIPeR, clearly demonstrate that our proposed end-to-end CAN for person re-identification outperforms well established baselines significantly and offer the new state-of-the-art performance.
Typically, extracting features from input images and seeking a metric for comparing these features across images are the two main components of person re-identification. The basic idea behind searching for better feature representations is to find features that are at least partially invariant to lighting, pose, and viewpoint changes. Some existing methods primarily employ hand-crafted features such as color and texture histograms. Other studies have obtained more discriminative and robust feature representations; for example, Symmetry-Driven Accumulation of Local Features (SDALF) @cite_56 exploits both symmetric and asymmetric color and texture information. Local Maximal Occurrence (LOMO) @cite_67 analyzes the horizontal occurrence of local features and maximizes the occurrence to obtain a representation that is stable against viewpoint changes. In @cite_15 , the authors proposed reference descriptors (RDs), generated with a reference set, to improve the matching rate. To utilize complementary information from different feature descriptors, a multiple hypergraph fusion (multi-HG) method was proposed in @cite_69 to learn multiple feature descriptors. In @cite_48 , a ranking method fusing dense invariant features (DIF) was proposed to model the relationship between an image pair across different camera views.
{ "cite_N": [ "@cite_67", "@cite_69", "@cite_48", "@cite_56", "@cite_15" ], "mid": [ "1949591461", "2515904052", "2335817359", "1979260620", "2326772079" ], "abstract": [ "Person re-identification is an important technique towards automatic search of a person's presence in a surveillance video. Two fundamental problems are critical for person re-identification, feature representation and metric learning. An effective feature representation should be robust to illumination and viewpoint changes, and a discriminant metric should be learned to match various person images. In this paper, we propose an effective feature representation called Local Maximal Occurrence (LOMO), and a subspace and metric learning method called Cross-view Quadratic Discriminant Analysis (XQDA). The LOMO feature analyzes the horizontal occurrence of local features, and maximizes the occurrence to make a stable representation against viewpoint changes. Besides, to handle illumination variations, we apply the Retinex transform and a scale invariant texture operator. To learn a discriminant metric, we propose to learn a discriminant low dimensional subspace by cross-view quadratic discriminant analysis, and simultaneously, a QDA metric is learned on the derived subspace. We also present a practical computation method for XQDA, as well as its regularization. Experiments on four challenging person re-identification databases, VIPeR, QMUL GRID, CUHK Campus, and CUHK03, show that the proposed method improves the state-of-the-art rank-1 identification rates by 2.2 , 4.88 , 28.91 , and 31.55 on the four databases, respectively.", "Matching people across nonoverlapping cameras, also known as person re-identification, is an important and challenging research topic. Despite its great demand in many crucial applications such as surveillance, person re-identification is still far from being solved. Due to drastic view changes, even the same person may look quite dissimilar in different cameras. 
Illumination and pose variations further aggravate this discrepancy. To this end, various feature descriptors have been designed for improving the matching accuracy. Since different features encode information from different aspects, in this paper, we propose to effectively leverage multiple off-the-shelf features via multi-hypergraph fusion. A hypergraph captures not only pairwise but also high-order relationships among the subjects being matched. In addition, different from conventional approaches in which the matching is achieved by computing the pairwise distance or similarity between a probe and a gallery subject, the similarities between the probe and all gallery subjects are learned jointly via hypergraph optimization. Experiments on popular data sets demonstrate the effectiveness of the proposed method, and a superior performance is achieved as compared with the most recent state-of-the-arts.", "Recently, support vector ranking has been adopted to address the challenging person re-identification problem. However, the ranking model based on ordinary global features cannot well represent the significant variation of pose and viewpoint across camera views. To address this issue, a novel ranking method which fuses the dense invariant features is proposed in this paper to model the variation of images across camera views. An optimal space for ranking is learned by simultaneously maximizing the margin and minimizing the error on the fused features. The proposed method significantly outperforms the original support vector ranking algorithm due to the invariance of the dense invariant features, the fusion of the bidirectional features and the adaptive adjustment of parameters. Experimental results demonstrate that the proposed method is competitive with state-of-the-art methods on two challenging datasets, showing its potential for real-world person re-identification.", "In this paper, we present an appearance-based method for person re-identification. 
It consists in the extraction of features that model three complementary aspects of the human appearance: the overall chromatic content, the spatial arrangement of colors into stable regions, and the presence of recurrent local motifs with high entropy. All this information is derived from different body parts, and weighted opportunely by exploiting symmetry and asymmetry perceptual principles. In this way, robustness against very low resolution, occlusions and pose, viewpoint and illumination changes is achieved. The approach applies to situations where the number of candidates varies continuously, considering single images or bunch of frames for each individual. It has been tested on several public benchmark datasets (ViPER, iLIDS, ETHZ), gaining new state-of-the-art performances.", "Person identification across nonoverlapping cameras, also known as person reidentification, aims to match people at different times and locations. Reidentifying people is of great importance in crucial applications such as wide-area surveillance and visual tracking. Due to the appearance variations in pose, illumination, and occlusion in different camera views, person reidentification is inherently difficult. To address these challenges, a reference-based method is proposed for person reidentification across different cameras. Instead of directly matching people by their appearance, the matching is conducted in a reference space where the descriptor for a person is translated from the original color or texture descriptors to similarity measures between this person and the exemplars in the reference set. A subspace is first learned in which the correlations of the reference data from different cameras are maximized using regularized canonical correlation analysis (RCCA). 
For reidentification, the gallery data and the probe data are projected onto this RCCA subspace and the reference descriptors (RDs) of the gallery and probe are generated by computing the similarity between them and the reference data. The identity of a probe is determined by comparing the RD of the probe and the RDs of the gallery. A reranking step is added to further improve the results using a saliency-based matching scheme. Experiments on publicly available datasets show that the proposed method outperforms most of the state-of-the-art approaches." ] }
1606.04404
2432402544
Person re-identification across disjoint camera views has been widely applied in video surveillance, yet it is still a challenging problem. One of the major challenges lies in the lack of spatial and temporal cues, which makes it difficult to deal with large variations of lighting conditions, viewing angles, body poses, and occlusions. Recently, several deep-learning-based person re-identification approaches have been proposed and achieved remarkable performance. However, most of those approaches extract discriminative features from the whole frame at one glimpse without differentiating various parts of the persons to identify. It is essentially important to examine multiple highly discriminative local regions of the person images in detail through multiple glimpses for dealing with the large appearance variance. In this paper, we propose a new soft attention-based model, i.e., the end-to-end comparative attention network (CAN), specifically tailored for the task of person re-identification. The end-to-end CAN learns to selectively focus on parts of pairs of person images after taking a few glimpses of them and adaptively comparing their appearance. The CAN model is able to learn which parts of images are relevant for discerning persons and automatically integrates information from different parts to determine whether a pair of images belongs to the same person. In other words, our proposed CAN model simulates the human perception process to verify whether two images are from the same person. Extensive experiments on four benchmark person re-identification data sets, including CUHK01, CUHK03, Market-1501, and VIPeR, clearly demonstrate that our proposed end-to-end CAN for person re-identification significantly outperforms well-established baselines and offers new state-of-the-art performance.
Additionally, several saliency-related methods @cite_18 , @cite_63 have been proposed to enhance the representational and discriminative power of features for person re-identification. In @cite_18, the authors presented an adjacency-constrained patch matching method to build dense correspondences between image pairs in an unsupervised way. The SalMatch approach @cite_63 integrates salience matching and patch matching within a RankSVM framework. Mid-level filters (MidLevel) @cite_45 are learned from patch clusters with coherent appearance, obtained by pruning hierarchical clustering trees, to produce view-invariant and discriminative features. However, the above methods first extract low-level or mid-level features from pre-defined local regions and then generate saliency maps. Because feature extraction and saliency map generation are two separate processes, re-identification performance suffers. Our attention-based CAN model can also be regarded as a saliency method, but it learns attention maps automatically from raw person images in an end-to-end way.
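The attention maps discussed above amount to learned soft weightings over local regions. The following minimal sketch shows only the soft-attention pooling step: softmax the region relevance scores, then form the weighted sum of region features. All names are illustrative; in a CAN-style model the scores come from a trained comparison network rather than being supplied by hand.

```python
import math

def soft_attention_pool(region_feats, scores):
    """Soft attention over local regions.

    region_feats: list of equal-length feature vectors, one per region.
    scores: one relevance score per region (would be learned end-to-end).
    Returns the softmax weights and the attention-weighted feature.
    """
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(region_feats[0])
    pooled = [sum(w * f[d] for w, f in zip(weights, region_feats))
              for d in range(dim)]
    return weights, pooled
```

Because the weights are differentiable in the scores, the whole pipeline — feature extraction and saliency — can be trained jointly, which is precisely what separates end-to-end attention from the two-stage saliency methods above.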
{ "cite_N": [ "@cite_18", "@cite_45", "@cite_63" ], "mid": [ "2046835352", "2079972027", "2125889200" ], "abstract": [ "Human eyes can recognize person identities based on some small salient regions. However, such valuable salient information is often hidden when computing similarities of images with existing approaches. Moreover, many existing approaches learn discriminative features and handle drastic viewpoint change in a supervised way and require labeling new training data for a different pair of camera views. In this paper, we propose a novel perspective for person re-identification based on unsupervised salience learning. Distinctive features are extracted without requiring identity labels in the training procedure. First, we apply adjacency constrained patch matching to build dense correspondence between image pairs, which shows effectiveness in handling misalignment caused by large viewpoint and pose variations. Second, we learn human salience in an unsupervised manner. To improve the performance of person re-identification, human salience is incorporated in patch matching to find reliable and discriminative matched patches. The effectiveness of our approach is validated on the widely used VIPeR dataset and ETHZ dataset.", "In this paper, we propose a novel approach of learning mid-level filters from automatically discovered patch clusters for person re-identification. It is well motivated by our study on what are good filters for person re-identification. Our mid-level filters are discriminatively learned for identifying specific visual patterns and distinguishing persons, and have good cross-view invariance. First, local patches are qualitatively measured and classified with their discriminative power. Discriminative and representative patches are collected for filter learning. 
Second, patch clusters with coherent appearance are obtained by pruning hierarchical clustering trees, and a simple but effective cross-view training strategy is proposed to learn filters that are view-invariant and discriminative. Third, filter responses are integrated with patch matching scores in RankSVM training. The effectiveness of our approach is validated on the VIPeR dataset and the CUHK01 dataset. The learned mid-level features are complementary to existing handcrafted low-level features, and improve the best Rank-1 matching rate on the VIPeR dataset by 14 .", "Human salience is distinctive and reliable information in matching pedestrians across disjoint camera views. In this paper, we exploit the pair wise salience distribution relationship between pedestrian images, and solve the person re-identification problem by proposing a salience matching strategy. To handle the misalignment problem in pedestrian images, patch matching is adopted and patch salience is estimated. Matching patches with inconsistent salience brings penalty. Images of the same person are recognized by minimizing the salience matching cost. Furthermore, our salience matching is tightly integrated with patch matching in a unified structural Rank SVM learning framework. The effectiveness of our approach is validated on the VIPeR dataset and the CUHK Campus dataset. It outperforms the state-of-the-art methods on both datasets." ] }
1606.04404
2432402544
Person re-identification across disjoint camera views has been widely applied in video surveillance, yet it is still a challenging problem. One of the major challenges lies in the lack of spatial and temporal cues, which makes it difficult to deal with large variations of lighting conditions, viewing angles, body poses, and occlusions. Recently, several deep-learning-based person re-identification approaches have been proposed and achieved remarkable performance. However, most of those approaches extract discriminative features from the whole frame at one glimpse without differentiating various parts of the persons to identify. It is essentially important to examine multiple highly discriminative local regions of the person images in detail through multiple glimpses for dealing with the large appearance variance. In this paper, we propose a new soft attention-based model, i.e., the end-to-end comparative attention network (CAN), specifically tailored for the task of person re-identification. The end-to-end CAN learns to selectively focus on parts of pairs of person images after taking a few glimpses of them and adaptively comparing their appearance. The CAN model is able to learn which parts of images are relevant for discerning persons and automatically integrates information from different parts to determine whether a pair of images belongs to the same person. In other words, our proposed CAN model simulates the human perception process to verify whether two images are from the same person. Extensive experiments on four benchmark person re-identification data sets, including CUHK01, CUHK03, Market-1501, and VIPeR, clearly demonstrate that our proposed end-to-end CAN for person re-identification significantly outperforms well-established baselines and offers new state-of-the-art performance.
Besides the hand-crafted-feature-based methods mentioned above, several deep-learning-based person re-identification approaches have been proposed @cite_34 @cite_64 @cite_72 @cite_7 @cite_38 @cite_73 . The authors of @cite_41 proposed a Filter Pairing Neural Network (FPNN) that encodes and models photometric transforms, using patch matching layers to match the filter responses of local across-view patches. In @cite_19, a Siamese CNN whose two sub-networks are connected by a cosine layer jointly learns color features, texture features, and a metric in a unified framework. Moreover, @cite_2 improved performance by increasing the network depth and using very small convolution filters. In @cite_10 @cite_70, the authors proposed parts-based CNN models to learn discriminative representations. Unlike the part-based CNN methods @cite_19 @cite_34 @cite_38 @cite_73 , our method learns the locally discriminative regions rather than pre-defining or splitting local parts.
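The cosine layer that joins the two sub-networks of such a Siamese model reduces to the cosine similarity of the two embeddings. A minimal sketch of that comparison step (embedding extraction omitted; function name is illustrative):

```python
import math

def cosine_layer(feat_a, feat_b):
    """Cosine similarity between the two sub-network embeddings.

    Same-person pairs should score near 1, different-person pairs lower;
    the training loss pushes the two cases apart.
    """
    dot = sum(a * b for a, b in zip(feat_a, feat_b))
    norm_a = math.sqrt(sum(a * a for a in feat_a))
    norm_b = math.sqrt(sum(b * b for b in feat_b))
    return dot / (norm_a * norm_b + 1e-12)  # guard against zero vectors
```

Because the score depends only on the angle between embeddings, the learned features are free to vary in magnitude with illumination while the identity decision stays stable — one reason cosine layers are a common choice for verification-style Siamese networks.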
{ "cite_N": [ "@cite_38", "@cite_64", "@cite_7", "@cite_73", "@cite_41", "@cite_70", "@cite_19", "@cite_72", "@cite_2", "@cite_34", "@cite_10" ], "mid": [ "", "", "", "", "1982925187", "", "2135442311", "", "2259687230", "1928419358", "2467139031" ], "abstract": [ "", "", "", "", "Person re-identification is to match pedestrian images from disjoint camera views detected by pedestrian detectors. Challenges are presented in the form of complex variations of lightings, poses, viewpoints, blurring effects, image resolutions, camera settings, occlusions and background clutter across camera views. In addition, misalignment introduced by the pedestrian detector will affect most existing person re-identification methods that use manually cropped pedestrian images and assume perfect detection. In this paper, we propose a novel filter pairing neural network (FPNN) to jointly handle misalignment, photometric and geometric transforms, occlusions and background clutter. All the key components are jointly optimized to maximize the strength of each component when cooperating with others. In contrast to existing works that use handcrafted features, our method automatically learns features optimal for the re-identification task from data. The learned filter pairs encode photometric transforms. Its deep architecture makes it possible to model a mixture of complex photometric and geometric transforms. We build the largest benchmark re-id dataset with 13, 164 images of 1, 360 pedestrians. Unlike existing datasets, which only provide manually cropped pedestrian images, our dataset provides automatically detected bounding boxes for evaluation close to practical applications. Our neural network significantly outperforms state-of-the-art methods on this dataset.", "", "Various hand-crafted features and metric learning methods prevail in the field of person re-identification. 
Compared to these methods, this paper proposes a more general way that can learn a similarity metric from image pixels directly. By using a \"siamese\" deep neural network, the proposed method can jointly learn the color feature, texture feature and metric in a unified framework. The network has a symmetry structure with two sub-networks which are connected by a cosine layer. Each sub network includes two convolutional layers and a full connected layer. To deal with the big variations of person images, binomial deviance is used to evaluate the cost between similarities and labels, which is proved to be robust to outliers. Experiments on VIPeR illustrate the superior performance of our method and a cross database experiment also shows its good generalization.", "", "In this paper, we propose a deep end-to-end neu- ral network to simultaneously learn high-level features and a corresponding similarity metric for person re-identification. The network takes a pair of raw RGB images as input, and outputs a similarity value indicating whether the two input images depict the same person. A layer of computing neighborhood range differences across two input images is employed to capture local relationship between patches. This operation is to seek a robust feature from input images. By increasing the depth to 10 weight layers and using very small (3 @math 3) convolution filters, our architecture achieves a remarkable improvement on the prior-art configurations. Meanwhile, an adaptive Root- Mean-Square (RMSProp) gradient decent algorithm is integrated into our architecture, which is beneficial to deep nets. Our method consistently outperforms state-of-the-art on two large datasets (CUHK03 and Market-1501), and a medium-sized data set (CUHK01).", "In this work, we propose a method for simultaneously learning features and a corresponding similarity metric for person re-identification. 
We present a deep convolutional architecture with layers specially designed to address the problem of re-identification. Given a pair of images as input, our network outputs a similarity value indicating whether the two input images depict the same person. Novel elements of our architecture include a layer that computes cross-input neighborhood differences, which capture local relationships between the two input images based on mid-level features from each input image. A high-level summary of the outputs of this layer is computed by a layer of patch summary features, which are then spatially integrated in subsequent layers. Our method significantly outperforms the state of the art on both a large data set (CUHK03) and a medium-sized data set (CUHK01), and is resistant to over-fitting. We also demonstrate that by initially training on an unrelated large data set before fine-tuning on a small target data set, our network can achieve results comparable to the state of the art even on a small data set (VIPeR).", "Person re-identification across cameras remains a very challenging problem, especially when there are no overlapping fields of view between cameras. In this paper, we present a novel multi-channel parts-based convolutional neural network (CNN) model under the triplet framework for person re-identification. Specifically, the proposed CNN model consists of multiple channels to jointly learn both the global full-body and local body-parts features of the input persons. The CNN model is trained by an improved triplet loss function that serves to pull the instances of the same person closer, and at the same time push the instances belonging to different persons farther from each other in the learned feature space. Extensive comparative evaluations demonstrate that our proposed method significantly outperforms many state-of-the-art approaches, including both traditional and deep network-based ones, on the challenging i-LIDS, VIPeR, PRID2011 and CUHK01 datasets." ] }
1606.04288
2427461275
This paper presents BDDT-SCC, a task-parallel runtime system for non cache-coherent multicore processors, implemented for the Intel Single-Chip Cloud Computer. The BDDT-SCC runtime includes a dynamic dependence analysis and automatic synchronization, and executes OpenMP-Ss tasks on a non cache-coherent architecture. We design a runtime that uses fast on-chip inter-core communication with small messages. At the same time, we use non coherent shared memory to avoid large core-to-core data transfers that would incur a high volume of unnecessary copying. We evaluate BDDT-SCC on a set of representative benchmarks, in terms of task granularity, locality, and communication. We find that memory locality and allocation plays a very important role in performance, as the architecture of the SCC memory controllers can create strong contention effects. We suggest patterns that improve memory locality and thus the performance of applications, and measure their impact.
The SCC processor relaxes hardware cache-coherence to improve scalability and energy consumption @cite_10 @cite_15 @cite_0 @cite_20 . Early runtimes treat the SCC as a message-passing system @cite_7 @cite_14 , use distributed and cluster languages to program it @cite_13 , or implement software cache-coherence @cite_6 . However, these approaches fail to take advantage of the SCC's non-coherent shared memory, and of task-level granularity, which avoids generating coherence traffic for individual loads and stores.
{ "cite_N": [ "@cite_13", "@cite_14", "@cite_7", "@cite_6", "@cite_0", "@cite_15", "@cite_10", "@cite_20" ], "mid": [ "2105804449", "138699628", "2019629740", "1967004358", "", "", "1974420955", "" ], "abstract": [ "The Single-Chip Cloud Computer (SCC) is an experimental processor created by Intel Labs. SCC is essentially a 'cluster-on-a-chip', so X10 with its support for places and remote asynchronous invocations is a natural fit for programming this platform. We report here on our experience porting X10 to the SCC, and show performance and scaling results for representative X10 benchmark applications. We compare results for our extensions to the SCC native messaging primitives in support of the X10 run-time, versus X10 on top of a prototype MPI API for SCC. The native SCC run-time exhibits better performance and scaling than the MPI binding. Scaling depends on the relative cost of computation versus communication in the workload used, since SCC is relatively underpowered for computation but has hardware support for message passing.", "The Single-chip Cloud Computer (SCC) from Intel Labs is an experimental CPU that integrates 48 cores. As its name suggests, it is a distributed memory system on a chip. In typical configurations, the available memory is divided equally across the cores. Message passing is supported by means of an on-die Message Passing Buffer (MPB). The memory organization and hardware features of the SCC make it an interesting platform for evaluating parallel programming models. In this work, an MPI implementation is optimized and extended to support the invasive programming model; the invasive model's main idea is to allow for resource aware programming. The result is a library that provides resource awareness through extensions to MPI, while keeping its features and compatibility.", "Many-core chips are changing the way high-performance computing systems are built and programmed. 
As it is becoming increasingly difficult to maintain cache coherence across many cores, manufacturers are exploring designs that do not feature any cache coherence between cores. Communications on such chips are naturally implemented using message passing, which makes them resemble clusters, but with an important difference. Special hardware can be provided that supports very fast on-chip communications, reducing latency and increasing bandwidth. We present one such chip, the Single-Chip Cloud Computer (SCC). This is an experimental processor, created by Intel Labs. We describe two communication libraries available on SCC: RCCE and Rckmb. RCCE is a light-weight, minimal library for writing message passing parallel applications. Rckmb provides the data link layer for running network services such as TCP/IP. Both utilize SCC's non-cache-coherent shared memory for transferring data between cores without needing to go off-chip. In this paper we describe the design and implementation of RCCE and Rckmb. To compare their performance, we consider simple benchmarks run with RCCE, and MPI over TCP/IP.", "The Single-chip Cloud Computer (SCC) is an experimental processor created by Intel Labs. The SCC is based on a message passing architecture and does not provide any hardware cache coherence mechanism. Software or programmers should take care of coherence and consistency of a shared region between different cores. In this paper, we propose an efficient software shared virtual memory (SVM) for the SCC as an alternative to the cache coherence mechanism and report some preliminary results. Our software SVM is based on the commit-reconcile and fence (CRF) memory model and does not require a complicated SVM protocol between cores. We evaluate the effectiveness of our approach by comparing the software SVM with a cache-coherent NUMA machine using three synthetic micro-benchmark applications and five applications from SPLASH-2. 
Evaluation result indicates that our approach is promising.", "", "", "Current developments in microprocessor design favor increased core counts over frequency scaling to improve processor performance and energy efficiency. Coupling this architectural trend with a message-passing protocol helps realize a data-center-on-a-die. The prototype chip (Figs. 5.7.1 and 5.7.7) described in this paper integrates 48 Pentium™ class IA-32 cores [1] on a 6×4 2D-mesh network of tiled core clusters with high-speed I/Os on the periphery. The chip contains 1.3B transistors. Each core has a private 256KB L2 cache (12MB total on-die) and is optimized to support a message-passing-programming model whereby cores communicate through shared memory. A 16KB message-passing buffer (MPB) is present in every tile, giving a total of 384KB on-die shared memory, for increased performance. Power is kept at a minimum by transmitting dynamic, fine-grained voltage-change commands over the network to an on-die voltage-regulator controller (VRC). Further power savings are achieved through active frequency scaling at the tile granularity. Memory accesses are distributed over four on-die DDR3 controllers for an aggregate peak memory bandwidth of 21GB/s at 4× burst. Additionally, an 8-byte bidirectional system interface (SIF) provides 6.4GB/s of I/O bandwidth. The die area is 567mm2 and is implemented in 45nm high-κ metal-gate CMOS [2].", "" ] }
1606.04250
2434777396
The amount of digitally available but heterogeneous information about the world is remarkable, and new technologies such as self-driving cars, smart homes, or the internet of things may further increase it. In this paper we present preliminary ideas about certain aspects of the problem of how such heterogeneous information can be harnessed by autonomous agents. After discussing potentials and limitations of some existing approaches, we investigate how can help to obtain a better understanding of the problem. Specifically, we present a simple agent that integrates video data from a different agent, and implement and evaluate a version of it on the novel experimentation platform . The focus of a second investigation is on how information about the hardware of different agents, the agents' sensory data, and information can be utilized for knowledge transfer between agents and subsequently more data-efficient decision making. Finally, we discuss potential future steps w.r.t. theory and experimentation, and formulate open questions.
Another related area is computerized knowledge representation @cite_3 . Compared with general approaches to knowledge representation, our focus is on knowledge about the world.
{ "cite_N": [ "@cite_3" ], "mid": [ "1516346799" ], "abstract": [ "1. Logic. 2. Ontology. 3. Knowledge Representation. 4. Processes. 5. Purposes, Contexts, And Agents. 6. Knowledge Soup. 7. Knowledge Acquisition And Sharing. Appendixes: Appendix A: Summary Of Notations Appendix B: Sample Ontology. Appendix C: Extended Example. Answers To Selected Exercises. Bibliography. Name Index. Subject Index. Special Symbols." ] }
1606.03986
2426227574
Distributed Denial-of-Service (DDoS) attacks are usually launched through the @math , an "army" of compromised nodes hidden in the network. Inferential tools for DDoS mitigation should accordingly enable an early and reliable discrimination of the normal users from the compromised ones. Unfortunately, the recent emergence of attacks performed at the application layer has multiplied the number of possibilities that a botnet can exploit to conceal its malicious activities. New challenges arise, which cannot be addressed by simply borrowing the tools that have been successfully applied so far to earlier DDoS paradigms. In this work, we offer basically three contributions: @math we introduce an abstract model for the aforementioned class of attacks, where the botnet emulates normal traffic by continually learning admissible patterns from the environment; @math we devise an inference algorithm that is shown to provide a consistent (i.e., converging to the true solution as time progresses) estimate of the botnet possibly hidden in the network; and @math we verify the validity of the proposed inferential strategy over @math network traces.
The literature on DDoS attacks is rich, and we refer the reader to the survey in @cite_18 as a useful entry point. The earliest DoS paradigms (e.g., TCP SYN flooding) relied on specific protocol vulnerabilities and were characterized by the repetition of one (or a few) requests at a huge rate. In this situation, the single source of the attack can be identified simply by computing its unusually large request rate.
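The per-source rate test just described can be sketched in a few lines. This is an illustrative toy, not a deployed detector; the function name, the (timestamp, source) log format, and the threshold choice are assumptions.

```python
from collections import Counter

def flag_flooding_sources(requests, duration, rate_threshold):
    """requests: iterable of (timestamp, source) pairs observed over
    `duration` seconds. A source is flagged when its average request
    rate exceeds `rate_threshold` requests/s — the simple test that
    suffices against single-source flooding attacks.
    """
    counts = Counter(src for _, src in requests)
    return {src for src, c in counts.items() if c / duration > rate_threshold}
```

For example, a source issuing 100 requests in 10 s stands out against normal users issuing one request per second; it is exactly this per-source signature that the distributed variants discussed next dilute across many bots.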
{ "cite_N": [ "@cite_18" ], "mid": [ "2137345105" ], "abstract": [ "Threats of distributed denial of service (DDoS) attacks have been increasing day-by-day due to rapid development of computer networks and associated infrastructure, and millions of software applications, large and small, addressing all varieties of tasks. Botnets pose a major threat to network security as they are widely used for many Internet crimes such as DDoS attacks, identity theft, email spamming, and click fraud. Botnet based DDoS attacks are catastrophic to the victim network as they can exhaust both network bandwidth and resources of the victim machine. This survey presents a comprehensive overview of DDoS attacks, their causes, types with a taxonomy, and technical details of various attack launching tools. A detailed discussion of several botnet architectures, tools developed using botnet architectures, and pros and cons analysis are also included. Furthermore, a list of important issues and research challenges is also reported." ] }
1606.03986
2426227574
Distributed Denial-of-Service (DDoS) attacks are usually launched through the @math , an "army" of compromised nodes hidden in the network. Inferential tools for DDoS mitigation should accordingly enable an early and reliable discrimination of the normal users from the compromised ones. Unfortunately, the recent emergence of attacks performed at the application layer has multiplied the number of possibilities that a botnet can exploit to conceal its malicious activities. New challenges arise, which cannot be addressed by simply borrowing the tools that have been successfully applied so far to earlier DDoS paradigms. In this work, we offer basically three contributions: @math we introduce an abstract model for the aforementioned class of attacks, where the botnet emulates normal traffic by continually learning admissible patterns from the environment; @math we devise an inference algorithm that is shown to provide a consistent (i.e., converging to the true solution as time progresses) estimate of the botnet possibly hidden in the network; and @math we verify the validity of the proposed inferential strategy over @math network traces.
The distributed variants of such attacks exploit basically the same kinds of vulnerabilities and repetition schemes, except that the large request rate is now obtained by aggregating many small individual bot rates. Even so, the bots can still be identified at the single-user level. Indeed, normal traffic patterns typically exhibit a certain degree of innovation, whereas the repetition scheme implicitly betrays the bot character. Accordingly, several useful inferential strategies have been proposed for this kind of DDoS attack; see @cite_18 for a comparative summary.
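The innovation-vs-repetition cue mentioned above can be illustrated with a toy novelty score: the fraction of distinct requests in each user's stream (the function names and threshold are hypothetical illustrations; the detectors surveyed in the literature are far more elaborate):

```python
def novelty_score(requests):
    """Fraction of distinct requests in a user's stream.

    Normal users keep introducing new requests (score near 1);
    a bot replaying one or a few requests scores near 0.
    """
    return len(set(requests)) / len(requests)

def classify_users(streams, threshold=0.3):
    """Label each user 'bot' if its novelty score falls below threshold."""
    return {u: ("bot" if novelty_score(r) < threshold else "normal")
            for u, r in streams.items()}

streams = {
    "alice": ["/home", "/search?q=a", "/item/7", "/cart", "/checkout"],
    "bot42": ["/login"] * 50,
}
print(classify_users(streams))  # {'alice': 'normal', 'bot42': 'bot'}
```

This single-user test is exactly what the application-layer attacks discussed in the abstract try to defeat, by having the botnet learn and replay admissible traffic patterns.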
{ "cite_N": [ "@cite_18" ], "mid": [ "2137345105" ], "abstract": [ "Threats of distributed denial of service (DDoS) attacks have been increasing day-by-day due to rapid development of computer networks and associated infrastructure, and millions of software applications, large and small, addressing all varieties of tasks. Botnets pose a major threat to network security as they are widely used for many Internet crimes such as DDoS attacks, identity theft, email spamming, and click fraud. Botnet based DDoS attacks are catastrophic to the victim network as they can exhaust both network bandwidth and resources of the victim machine. This survey presents a comprehensive overview of DDoS attacks, their causes, types with a taxonomy, and technical details of various attack launching tools. A detailed discussion of several botnet architectures, tools developed using botnet architectures, and pros and cons analysis are also included. Furthermore, a list of important issues and research challenges is also reported." ] }
1606.03968
2607935738
We describe a system to detect objects in three-dimensional space using video and inertial sensors (accelerometer and gyrometer), ubiquitous in modern mobile platforms from phones to drones. Inertials afford the ability to impose class-specific scale priors for objects, and provide a global orientation reference. A minimal sufficient representation, the posterior of semantic (identity) and syntactic (pose) attributes of objects in space, can be decomposed into a geometric term, which can be maintained by a localization-and-mapping filter, and a likelihood function, which can be approximated by a discriminatively-trained convolutional neural network. The resulting system can process the video stream causally in real time, and provides a representation of objects in the scene that is persistent: Confidence in the presence of objects grows with evidence, and objects previously seen are kept in memory even when temporarily occluded, with their return into view automatically predicted to prime re-detection.
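The claim above that "confidence in the presence of objects grows with evidence" can be illustrated by a recursive Bayesian log-odds update over per-frame detector scores (an illustrative sketch under simple independence assumptions, not the paper's actual filter):

```python
import math

def logit(p):
    """Log-odds of a probability p in (0, 1)."""
    return math.log(p / (1.0 - p))

def update_presence(prior_logodds, detector_prob):
    """Fold one frame's detection probability into the running log-odds."""
    return prior_logodds + logit(detector_prob)

# Start agnostic (log-odds 0 means p = 0.5); three frames of weak evidence.
L = 0.0
for p in (0.7, 0.8, 0.75):
    L = update_presence(L, p)
posterior = 1.0 / (1.0 + math.exp(-L))
print(round(posterior, 2))  # confidence grows toward 1 with accumulating evidence
```

A frame with `detector_prob` below 0.5 would subtract log-odds, so confidence also decays gracefully when evidence turns negative, which is consistent with keeping temporarily occluded objects in memory rather than dropping them outright.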
This work, by its nature, relates to a vast body of literature on scene understanding in Computer Vision, Robotics @cite_30 @cite_49 and AI @cite_13 dating back decades @cite_45 . Most recently, with the advent of cheap consumer range sensors, there has been a wealth of activity in this area @cite_14 @cite_25 @cite_71 @cite_46 @cite_4 @cite_32 @cite_43 @cite_64 @cite_35 @cite_31 @cite_56 @cite_19 @cite_57 @cite_75 @cite_67 @cite_59 . The use of RGB-D cameras unfortunately restricts the domain of applicability mostly to indoor settings and close range, whereas we target mobility applications where the camera, which typically has an inertial sensor strapped on it, but not (yet) a range sensor, can be used both indoors and outdoors. We expect that, on indoor sequences, our method would underperform a structured-light or other RGB-D source, but this is the subject of future investigation.
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_64", "@cite_43", "@cite_71", "@cite_75", "@cite_67", "@cite_4", "@cite_49", "@cite_46", "@cite_32", "@cite_56", "@cite_57", "@cite_19", "@cite_25", "@cite_14", "@cite_45", "@cite_59", "@cite_31", "@cite_13" ], "mid": [ "2075274454", "2097696373", "2049776679", "2067912884", "2295647632", "", "2949768986", "2154083146", "2963288928", "1964420786", "2221101993", "1919033285", "2167687475", "", "2109081464", "2152571752", "", "2463032559", "2033979122", "2127911825" ], "abstract": [ "We propose a view-based approach for labeling objects in 3D scenes reconstructed from RGB-D (color+depth) videos. We utilize sliding window detectors trained from object views to assign class probabilities to pixels in every RGB-D frame. These probabilities are projected into the reconstructed 3D scene and integrated using a voxel representation. We perform efficient inference on a Markov Random Field over the voxels, combining cues from view-based detection and 3D shape, to label the scene. Our detection-based approach produces accurate scene labeling on the RGB-D Scenes Dataset and improves the robustness of object detection.", "We present the major advantages of a new 'object oriented' 3D SLAM paradigm, which takes full advantage in the loop of prior knowledge that many scenes consist of repeated, domain-specific objects and structures. As a hand-held depth camera browses a cluttered scene, real-time 3D object recognition and tracking provides 6DoF camera-object constraints which feed into an explicit graph of objects, continually refined by efficient pose-graph optimisation. This offers the descriptive and predictive power of SLAM systems which perform dense surface reconstruction, but with a huge representation compression. The object graph enables predictions for accurate ICP-based camera to model tracking at each live frame, and efficient active search for new objects in currently undescribed image regions. 
We demonstrate real-time incremental SLAM in large, cluttered environments, including loop closure, relocalisation and the detection of moved objects, and of course the generation of an object level scene description with the potential to enable interaction.", "We present a method for discovering object models from 3D meshes of indoor environments. Our algorithm first decomposes the scene into a set of candidate mesh segments and then ranks each segment according to its “objectness” - a quality that distinguishes objects from clutter. To do so, we propose five intrinsic shape measures: compactness, symmetry, smoothness, and local and global convexity. We additionally propose a recurrence measure, codifying the intuition that frequently occurring geometries are more likely to correspond to complete objects. We evaluate our method in both supervised and unsupervised regimes on a dataset of 58 indoor scenes collected using an Open Source implementation of Kinect Fusion [1]. We show that our approach can reliably and efficiently distinguish objects from clutter, with Average Precision score of .92. We make our dataset available to the public.", "We address the problems of contour detection, bottom-up grouping and semantic segmentation using RGB-D data. We focus on the challenging setting of cluttered indoor scenes, and evaluate our approach on the recently introduced NYU-Depth V2 (NYUD2) dataset [27]. We propose algorithms for object boundary detection and hierarchical segmentation that generalize the gPb-ucm approach of [2] by making effective use of depth information. We show that our system can label each contour with its type (depth, normal or albedo). We also propose a generic method for long-range amodal completion of surfaces and show its effectiveness in grouping. We then turn to the problem of semantic segmentation and propose a simple approach that classifies super pixels into the 40 dominant object categories in NYUD2. 
We use both generic and class-specific features to encode the appearance and geometry of objects. We also show how our approach can be used for scene classification, and how this contextual information in turn improves object recognition. In all of these tasks, we report significant improvements over the state-of-the-art.", "Semantic labeling of RGB-D scenes is very important in enabling robots to perform mobile manipulation tasks, but different tasks may require entirely different sets of labels. For example, when navigating to an object, we may need only a single label denoting its class, but to manipulate it, we might need to identify individual parts. In this work, we present an algorithm that produces hierarchical labelings of a scene, following is-part-of and is-type-of relationships. Our model is based on a Conditional Random Field that relates pixel-wise and pair-wise observations to labels. We encode hierarchical labeling constraints into the model while keeping inference tractable. Our model thus predicts different specificities in labeling based on its confidence—if it is not sure whether an object is Pepsi or Sprite, it will predict soda rather than making an arbitrary choice. In extensive experiments, both offline on standard datasets as well as in online robotic experiments, we show that our model outperforms other stateof-the-art methods in labeling performance as well as in success rate for robotic tasks.", "", "We focus on the task of amodal 3D object detection in RGB-D images, which aims to produce a 3D bounding box of an object in metric form at its full extent. We introduce Deep Sliding Shapes, a 3D ConvNet formulation that takes a 3D volumetric scene from a RGB-D image as input and outputs 3D object bounding boxes. In our approach, we propose the first 3D Region Proposal Network (RPN) to learn objectness from geometric shapes and the first joint Object Recognition Network (ORN) to extract geometric features in 3D and color features in 2D. 
In particular, we handle objects of various sizes by training an amodal RPN at two different scales and an ORN to regress 3D bounding boxes. Experiments show that our algorithm outperforms the state-of-the-art by 13.8 in mAP and is 200x faster than the original Sliding Shapes. All source code and pre-trained models will be available at GitHub.", "This paper presents a nonparametric approach to semantic parsing using small patches and simple gradient, color and location features. We learn the relevance of individual feature channels at test time using a locally adaptive distance metric. To further improve the accuracy of the nonparametric approach, we examine the importance of the retrieval set used to compute the nearest neighbours using a novel semantic descriptor to retrieve better candidates. The approach is validated by experiments on several datasets used for semantic parsing demonstrating the superiority of the method compared to the state of art approaches.", "", "The semantic mapping of the environment requires simultaneous segmentation and categorization of the acquired stream of sensory information. The existing methods typically consider the semantic mapping as the final goal and differ in the number and types of considered semantic categories. We envision semantic understanding of the environment as an on-going process and seek representations which can be refined and adapted depending on the task and robot's interaction with the environment. In this work we propose a novel and efficient method for semantic parsing, which can be adapted to the task at hand and enables localization of objects of interest in indoor environments. For basic mobility tasks we demonstrate how to obtain initial semantic segmentation of the scene into ground, structure, furniture and props categories which constitute the first level of hierarchy. 
Then, we propose a simple and efficient method for predicting locations of objects that based on their size afford a manipulation task. In our experiments we use the publicly available NYU V2 dataset and obtain better or comparable results than the state of the art at a fraction of the computational cost. We show the generalization of our approach on two more publicly available datasets.", "In this paper, we address the problem of semantic scene segmentation of RGB-D images of indoor scenes. We propose a novel image region labeling method which augments CRF formulation with hard mutual exclusion (mutex) constraints. This way our approach can make use of rich and accurate 3D geometric structure coming from Kinect in a principled manner. The final labeling result must satisfy all mutex constraints, which allows us to eliminate configurations that violate common sense physics laws like placing a floor above a night stand. Three classes of mutex constraints are proposed: global object co-occurrence constraint, relative height relationship constraint, and local support relationship constraint. We evaluate our approach on the NYU-Depth V2 dataset, which consists of 1449 cluttered indoor scenes, and also test generalization of our model trained on NYU-Depth V2 dataset directly on a recent SUN3D dataset without any new training. The experimental results show that we significantly outperform the state-of-the-art methods in scene labeling on both datasets.", "We focus on the problem of semantic segmentation based on RGB-D data, with emphasis on analyzing cluttered indoor scenes containing many visual categories and instances. Our approach is based on a parametric figureground intensity and depth-constrained proposal process that generates spatial layout hypotheses at multiple locations and scales in the image followed by a sequential inference algorithm that produces a complete scene estimate. 
Our contributions can be summarized as follows: (1) a generalization of parametric max flow figure-ground proposal methodology to take advantage of intensity and depth information, in order to systematically and efficiently generate the breakpoints of an underlying spatial model in polynomial time, (2) new region description methods based on second-order pooling over multiple features constructed using both intensity and depth channels, (3) a principled search-based structured prediction inference and learning process that resolves conflicts in overlapping spatial partitions and selects regions sequentially towards complete scene estimates, and (4) extensive evaluation of the impact of depth, as well as the effectiveness of a large number of descriptors, both pre-designed and automatically obtained using deep learning, in a difficult RGB-D semantic segmentation problem with 92 classes. We report state of the art results in the challenging NYU Depth Dataset V2 [44], extended for the RMRC 2013 and RMRC 2014 Indoor Segmentation Challenges, where currently the proposed model ranks first. Moreover, we show that by combining second-order and deep learning features, over 15 relative accuracy improvements can be additionally achieved. In a scene classification benchmark, our methodology further improves the state of the art by 24 .", "Our abilities in scene understanding, which allow us to perceive the 3D structure of our surroundings and intuitively recognise the objects we see, are things that we largely take for granted, but for robots, the task of understanding large scenes quickly remains extremely challenging. Recently, scene understanding approaches based on 3D reconstruction and semantic segmentation have become popular, but existing methods either do not scale, fail outdoors, provide only sparse reconstructions or are rather slow. 
In this paper, we build on a recent hash-based technique for large-scale fusion and an efficient mean-field inference algorithm for densely-connected CRFs to present what to our knowledge is the first system that can perform dense, large-scale, outdoor semantic reconstruction of a scene in (near) real time. We also present a ‘semantic fusion’ approach that allows us to handle dynamic objects more effectively than previous approaches. We demonstrate the effectiveness of our approach on the KITTI dataset, and provide qualitative and quantitative results showing high-quality dense reconstruction and labelling of a number of scenes.", "", "We address the problem of object detection and segmentation using global holistic properties of object shape. Global shape representations are highly susceptible to clutter inevitably present in realistic images, and thus can be applied robustly only using a precise segmentation of the object. To this end, we propose a figure ground segmentation method for extraction of image regions that resemble the global properties of a model boundary structure and are perceptually salient. Our shape representation, called the chordiogram, is based on geometric relationships of object boundary edges, while the perceptual saliency cues we use favor coherent regions distinct from the background. We formulate the segmentation problem as an integer quadratic program and use a semidefinite programming relaxation to solve it. The obtained solutions provide a segmentation of the object as well as a detection score used for object recognition. Our single-step approach achieves state-of-the-art performance on several object detection and segmentation benchmarks.", "In this paper, we tackle the problem of indoor scene understanding using RGBD data. Towards this goal, we propose a holistic approach that exploits 2D segmentation, 3D geometry, as well as contextual relations between scenes and objects. 
Specifically, we extend the CPMC [3] framework to 3D in order to generate candidate cuboids, and develop a conditional random field to integrate information from different sources to classify the cuboids. With this formulation, scene classification and 3D object recognition are coupled and can be jointly solved through probabilistic inference. We test the effectiveness of our approach on the challenging NYU v2 dataset. The experimental results demonstrate that through effective evidence integration and holistic reasoning, our approach achieves substantial improvement over the state-of-the-art.", "", "We develop new representations and algorithms for three-dimensional (3D) object detection and spatial layout prediction in cluttered indoor scenes. RGB-D images are traditionally described by local geometric features of the 3D point cloud. We propose a cloud of oriented gradient (COG) descriptor that links the 2D appearance and 3D pose of object categories, and thus accurately models how perspective projection affects perceived image boundaries. We also propose a \"Manhattan voxel\" representation which better captures the 3D room layout geometry of common indoor environments. Effective classification rules are learned via a structured prediction framework that accounts for the intersection-over-union overlap of hypothesized 3D cuboids with human annotations, as well as orientation estimation errors. Contextual relationships among categories and layout are captured via a cascade of classifiers, leading to holistic scene hypotheses with improved accuracy. Our model is learned solely from annotated RGB-D images, without the benefit of CAD models, but nevertheless its performance substantially exceeds the state-of-the-art on the SUN RGB-D database. Avoiding CAD models allows easier learning of detectors for many object categories.", "Dense semantic segmentation of 3D point clouds is a challenging task. 
Many approaches deal with 2D semantic segmentation and can obtain impressive results. With the availability of cheap RGB-D sensors the field of indoor semantic segmentation has seen a lot of progress. Still it remains unclear how to deal with 3D semantic segmentation in the best way. We propose a novel 2D-3D label transfer based on Bayesian updates and dense pairwise 3D Conditional Random Fields. This approach allows us to use 2D semantic segmentations to create a consistent 3D semantic reconstruction of indoor scenes. To this end, we also propose a fast 2D semantic segmentation approach based on Randomized Decision Forests. Furthermore, we show that it is not needed to obtain a semantic segmentation for every frame in a sequence in order to create accurate semantic 3D reconstructions. We evaluate our approach on both NYU Depth datasets and show that we can obtain a significant speed-up compared to other methods.", "Inexpensive RGB-D cameras that give an RGB image together with depth data have become widely available. In this paper, we use this data to build 3D point clouds of full indoor scenes such as an office and address the task of semantic labeling of these 3D point clouds. We propose a graphical model that captures various features and contextual relations, including the local visual appearance and shape cues, object co-occurence relationships and geometric relationships. With a large number of object classes and relations, the model's parsimony becomes important and we address that by using multiple types of edge potentials. The model admits efficient approximate inference, and we train it using a maximum-margin learning approach. In our experiments over a total of 52 3D scenes of homes and offices (composed from about 550 views, having 2495 segments labeled with 27 object classes), we get a performance of 84.06 in labeling 17 object classes for offices, and 73.38 in labeling 17 object classes for home scenes. 
Finally, we applied these algorithms successfully on a mobile robot for the task of finding objects in large cluttered rooms." ] }
1606.03968
2607935738
We describe a system to detect objects in three-dimensional space using video and inertial sensors (accelerometer and gyrometer), ubiquitous in modern mobile platforms from phones to drones. Inertials afford the ability to impose class-specific scale priors for objects, and provide a global orientation reference. A minimal sufficient representation, the posterior of semantic (identity) and syntactic (pose) attributes of objects in space, can be decomposed into a geometric term, which can be maintained by a localization-and-mapping filter, and a likelihood function, which can be approximated by a discriminatively-trained convolutional neural network. The resulting system can process the video stream causally in real time, and provides a representation of objects in the scene that is persistent: Confidence in the presence of objects grows with evidence, and objects previously seen are kept in memory even when temporarily occluded, with their return into view automatically predicted to prime re-detection.
There is also work that focuses on scene understanding from visual sensors, specifically video @cite_52 @cite_50 @cite_20 @cite_17 @cite_55 @cite_36 , although none integrates inertial data, despite a resurgent interest in sensor fusion @cite_9 . Additional related work includes @cite_44 @cite_26 @cite_37 @cite_54 .
{ "cite_N": [ "@cite_37", "@cite_26", "@cite_36", "@cite_55", "@cite_9", "@cite_54", "@cite_52", "@cite_44", "@cite_50", "@cite_20", "@cite_17" ], "mid": [ "1913356549", "2151992422", "2137881638", "1585414673", "1577575150", "1930723014", "801273237", "2150134683", "2184048809", "2951732414", "125693051" ], "abstract": [ "We propose an algorithm for semantic segmentation based on 3D point clouds derived from ego-motion. We motivate five simple cues designed to model specific patterns of motion and 3D world structure that vary with object category. We introduce features that project the 3D cues back to the 2D image plane while modeling spatial layout and context. A randomized decision forest combines many such features to achieve a coherent 2D segmentation and recognize the object categories present. Our main contribution is to show how semantic segmentation is possible based solely on motion-derived 3D world structure. Our method works well on sparse, noisy point clouds, and unlike existing approaches, does not need appearance-based descriptors. Experiments were performed on a challenging new video database containing sequences filmed from a moving car in daylight and at dusk. The results confirm that indeed, accurate segmentation and recognition are possible using only motion and 3D world structure. Further, we show that the motion-derived information complements an existing state-of-the-art appearance-based method, improving both qualitative and quantitative performance.", "Supplying realistically textured 3D city models at ground level promises to be useful for pre-visualizing upcoming traffic situations in car navigation systems. Because this pre-visualization can be rendered from the expected future viewpoints of the driver, the required maneuver will be more easily understandable. 3D city models can be reconstructed from the imagery recorded by surveying vehicles. 
The vastness of image material gathered by these vehicles, however, puts extreme demands on vision algorithms to ensure their practical usability. Algorithms need to be as fast as possible and should result in compact, memory efficient 3D city models for future ease of distribution and visualization. For the considered application, these are not contradictory demands. Simplified geometry assumptions can speed up vision algorithms while automatically guaranteeing compact geometry models. In this paper, we present a novel city modeling framework which builds upon this philosophy to create 3D content at high speed. Objects in the environment, such as cars and pedestrians, may however disturb the reconstruction, as they violate the simplified geometry assumptions, leading to visually unpleasant artifacts and degrading the visual realism of the resulting 3D city model. Unfortunately, such objects are prevalent in urban scenes. We therefore extend the reconstruction framework by integrating it with an object recognition module that automatically detects cars in the input video streams and localizes them in 3D. The two components of our system are tightly integrated and benefit from each other's continuous input. 3D reconstruction delivers geometric scene context, which greatly helps improve detection precision. The detected car locations, on the other hand, are used to instantiate virtual placeholder models which augment the visual realism of the reconstructed city model.", "In this paper we propose an approach to holistic scene understanding that reasons jointly about regions, location, class and spatial extent of objects, presence of a class in the image, as well as the scene type. Learning and inference in our model are efficient as we reason at the segment level, and introduce auxiliary variables that allow us to decompose the inherent high-order potentials into pairwise potentials between a few variables with small number of states (at most the number of classes). 
Inference is done via a convergent message-passing algorithm, which, unlike graph-cuts inference, has no submodularity restrictions and does not require potential specific moves. We believe this is very important, as it allows us to encode our ideas and prior knowledge about the problem without the need to change the inference engine every time we introduce a new potential. Our approach outperforms the state-of-the-art on the MSRC-21 benchmark, while being much faster. Importantly, our holistic model is able to improve performance in all tasks.", "Scene detection is a fundamental tool for allowing effective video browsing and re-using. In this paper we present a model that automatically divides videos into coherent scenes, which is based on a novel combination of local image descriptors and temporal clustering techniques. Experiments are performed to demonstrate the effectiveness of our approach, by comparing our algorithm against two recent proposals for automatic scene segmentation. We also propose improved performance measures that aim to reduce the gap between numerical evaluation and expected results.", "Semantic understanding of environments is an important problem in robotics in general and intelligent autonomous systems in particular. In this paper, we propose a semantic segmentation algorithm which effectively fuses information from images and 3D point clouds. The proposed method incorporates information from multiple scales in an intuitive and effective manner. A late-fusion architecture is proposed to maximally leverage the training data in each modality. Finally, a pairwise Conditional Random Field (CRF) is used as a post-processing step to enforce spatial consistency in the structured prediction. The proposed algorithm is evaluated on the publicly available KITTI dataset [1] [2], augmented with additional pixel and point-wise semantic labels for building, sky, road, vegetation, sidewalk, car, pedestrian, cyclist, sign pole, and fence regions. 
A per-pixel accuracy of 89.3 and average class accuracy of 65.4 is achieved, well above current state-of-the-art [3].", "Dense semantic 3D reconstruction is typically formulated as a discrete or continuous problem over label assignments in a voxel grid, combining semantic and depth likelihoods in a Markov Random Field framework. The depth and semantic information is incorporated as a unary potential, smoothed by a pairwise regularizer. However, modelling likelihoods as a unary potential does not model the problem correctly leading to various undesirable visibility artifacts. We propose to formulate an optimization problem that directly optimizes the reprojection error of the 3D model with respect to the image estimates, which corresponds to the optimization over rays, where the cost function depends on the semantic class and depth of the first occupied voxel along the ray. The 2-label formulation is made feasible by transforming it into a graph-representable form under QPBO relaxation, solvable using graph cut. The multi-label problem is solved by applying α-expansion using the same relaxation in each expansion move. Our method was indeed shown to be feasible in practice, running comparably fast to the competing methods, while not suffering from ray potential approximation artifacts.", "We present an approach for joint inference of 3D scene structure and semantic labeling for monocular video. Starting with monocular image stream, our framework produces a 3D volumetric semantic + occupancy map, which is much more useful than a series of 2D semantic label images or a sparse point cloud produced by traditional semantic segmentation and Structure from Motion(SfM) pipelines respectively. We derive a Conditional Random Field (CRF) model defined in the 3D space, that jointly infers the semantic category and occupancy for each voxel. 
Such a joint inference in the 3D CRF paves the way for more informed priors and constraints, which is otherwise not possible if solved separately in their traditional frameworks. We make use of class specific semantic cues that constrain the 3D structure in areas, where multiview constraints are weak. Our model comprises of higher order factors, which helps when the depth is unobservable.We also make use of class specific semantic cues to reduce either the degree of such higher order factors, or to approximately model them with unaries if possible. We demonstrate improved 3D structure and temporally consistent semantic segmentation for difficult, large scale, forward moving monocular image sequences.", "Both image segmentation and dense 3D modeling from images represent an intrinsically ill-posed problem. Strong regularizers are therefore required to constrain the solutions from being 'too noisy'. Unfortunately, these priors generally yield overly smooth reconstructions and or segmentations in certain regions whereas they fail in other areas to constrain the solution sufficiently. In this paper we argue that image segmentation and dense 3D reconstruction contribute valuable information to each other's task. As a consequence, we propose a rigorous mathematical framework to formulate and solve a joint segmentation and dense reconstruction problem. Image segmentations provide geometric cues about which surface orientations are more likely to appear at a certain location in space whereas a dense 3D reconstruction yields a suitable regularization for the segmentation problem by lifting the labeling from 2D images to 3D space. We show how appearance-based cues and 3D surface orientation priors can be learned from training data and subsequently used for class-specific regularization. 
Experimental results on several real data sets highlight the advantages of our joint formulation.", "In this paper we explore the use of visual commonsense knowledge and other kinds of knowledge (such as domain knowledge, background knowledge, linguistic knowledge) for scene understanding. In particular, we combine visual processing with techniques from natural language understanding (especially semantic parsing), common-sense reasoning and knowledge representation and reasoning to improve visual perception and to reason about finer aspects of activities.", "State-of-the-art semantic image segmentation methods are mostly based on training deep convolutional neural networks (CNNs). In this work, we propose to improve semantic segmentation with the use of contextual information. In particular, we explore 'patch-patch' context and 'patch-background' context in deep CNNs. We formulate deep structured models by combining CNNs and Conditional Random Fields (CRFs) for learning the patch-patch context between image regions. Specifically, we formulate CNN-based pairwise potential functions to capture semantic correlations between neighboring patches. Efficient piecewise training of the proposed deep structured model is then applied in order to avoid repeated expensive CRF inference during the course of back propagation. For capturing the patch-background context, we show that a network design with traditional multi-scale image inputs and sliding pyramid pooling is very effective for improving performance. We perform a comprehensive evaluation of the proposed method. We achieve new state-of-the-art performance on a number of challenging semantic segmentation datasets including @math , @math - @math , @math , @math - @math , @math - @math , @math - @math , and @math datasets.
Particularly, we report an intersection-over-union score of @math on the @math - @math dataset.", "We present an approach to interpret the major surfaces, objects, and support relations of an indoor scene from an RGBD image. Most existing work ignores physical interactions or is applied only to tidy rooms and hallways. Our goal is to parse typical, often messy, indoor scenes into floor, walls, supporting surfaces, and object regions, and to recover support relationships. One of our main interests is to better understand how 3D cues can best inform a structured 3D interpretation. We also contribute a novel integer programming formulation to infer physical support relations. We offer a new dataset of 1449 RGBD images, capturing 464 diverse indoor scenes, with detailed annotations. Our experiments demonstrate our ability to infer support relations in complex scenes and verify that our 3D scene cues and inferred support lead to better object segmentation." ] }
1606.03968
2607935738
We describe a system to detect objects in three-dimensional space using video and inertial sensors (accelerometer and gyrometer), ubiquitous in modern mobile platforms from phones to drones. Inertials afford the ability to impose class-specific scale priors for objects, and provide a global orientation reference. A minimal sufficient representation, the posterior of semantic (identity) and syntactic (pose) attributes of objects in space, can be decomposed into a geometric term, which can be maintained by a localization-and-mapping filter, and a likelihood function, which can be approximated by a discriminatively-trained convolutional neural network. The resulting system can process the video stream causally in real time, and provides a representation of objects in the scene that is persistent: Confidence in the presence of objects grows with evidence, and objects previously seen are kept in memory even when temporarily occluded, with their return into view automatically predicted to prime re-detection.
Semantic scene understanding from a single image is also an active area of research ( @cite_2 and references therein). We are instead interested in agents embedded in physical space, for which the restriction to a single image is limiting. There is also a vast literature on scene segmentation ( @cite_60 and references therein), mostly using range (RGB-D) sensors. One popular pipeline for dense semantic segmentation is adopted by @cite_31 @cite_66 @cite_57 @cite_12 @cite_10 : depth maps obtained either from RGB-D or stereo are fused; 2D semantic labeling is transferred to 3D and smoothed with a fully-connected CRF @cite_28 . Also related are methods on joint semantic segmentation and reconstruction @cite_21 @cite_76 @cite_68 .
{ "cite_N": [ "@cite_60", "@cite_28", "@cite_21", "@cite_57", "@cite_68", "@cite_2", "@cite_31", "@cite_76", "@cite_10", "@cite_66", "@cite_12" ], "mid": [ "", "2161236525", "2337062560", "2167687475", "2460551350", "2259631822", "2033979122", "2462462929", "2417235025", "2951884519", "" ], "abstract": [ "", "Most state-of-the-art techniques for multi-class image segmentation and labeling use conditional random fields defined over pixels or image regions. While region-level models often feature dense pairwise connectivity, pixel-level models are considerably larger and have only permitted sparse graph structures. In this paper, we consider fully connected CRF models defined on the complete set of pixels in an image. The resulting graphs have billions of edges, making traditional inference algorithms impractical. Our main contribution is a highly efficient approximate inference algorithm for fully connected CRF models in which the pairwise edge potentials are defined by a linear combination of Gaussian kernels. Our experiments demonstrate that dense connectivity at the pixel level substantially improves segmentation and labeling accuracy.", "We propose an approach for dense semantic 3D reconstruction which uses a data term that is defined as potentials over viewing rays, combined with continuous surface area penalization. Our formulation is a convex relaxation which we augment with a crucial non-convex constraint that ensures exact handling of visibility. To tackle the non-convex minimization problem, we propose a majorize-minimize type strategy which converges to a critical point. We demonstrate the benefits of using the non-convex constraint experimentally. For the geometry-only case, we set a new state of the art on two datasets of the commonly used Middlebury multi-view stereo benchmark. Moreover, our general-purpose formulation directly reconstructs thin objects, which are usually treated with specialized algorithms. 
A qualitative evaluation on the dense semantic 3D reconstruction task shows that we improve significantly over previous methods.", "Our abilities in scene understanding, which allow us to perceive the 3D structure of our surroundings and intuitively recognise the objects we see, are things that we largely take for granted, but for robots, the task of understanding large scenes quickly remains extremely challenging. Recently, scene understanding approaches based on 3D reconstruction and semantic segmentation have become popular, but existing methods either do not scale, fail outdoors, provide only sparse reconstructions or are rather slow. In this paper, we build on a recent hash-based technique for large-scale fusion and an efficient mean-field inference algorithm for densely-connected CRFs to present what to our knowledge is the first system that can perform dense, large-scale, outdoor semantic reconstruction of a scene in (near) real time. We also present a ‘semantic fusion’ approach that allows us to handle dynamic objects more effectively than previous approaches. We demonstrate the effectiveness of our approach on the KITTI dataset, and provide qualitative and quantitative results showing high-quality dense reconstruction and labelling of a number of scenes.", "We propose an adaptive multi-resolution formulation of semantic 3D reconstruction. Given a set of images of a scene, semantic 3D reconstruction aims to densely reconstruct both the 3D shape of the scene and a segmentation into semantic object classes. Jointly reasoning about shape and class allows one to take into account class-specific shape priors (e.g., building walls should be smooth and vertical, and vice versa smooth, vertical surfaces are likely to be building walls), leading to improved reconstruction results. So far, semantic 3D reconstruction methods have been limited to small scenes and low resolution, because of their large memory footprint and computational cost. 
To scale them up to large scenes, we propose a hierarchical scheme which refines the reconstruction only in regions that are likely to contain a surface, exploiting the fact that both high spatial resolution and high numerical precision are only required in those regions. Our scheme amounts to solving a sequence of convex optimizations while progressively removing constraints, in such a way that the energy, in each iteration, is the tightest possible approximation of the underlying energy at full resolution. In our experiments the method saves up to 98% memory and 95% computation time, without any loss of accuracy.", "Do we really need 3D labels in order to learn how to predict 3D? In this paper, we show that one can learn a mapping from appearance to 3D properties without ever seeing a single explicit 3D label. Rather than use explicit supervision, we use the regularity of indoor scenes to learn the mapping in a completely unsupervised manner. We demonstrate this on both a standard 3D scene understanding dataset as well as Internet images for which 3D is unavailable, precluding supervised learning. Despite never seeing a 3D label, our method produces competitive results.", "Dense semantic segmentation of 3D point clouds is a challenging task. Many approaches deal with 2D semantic segmentation and can obtain impressive results. With the availability of cheap RGB-D sensors the field of indoor semantic segmentation has seen a lot of progress. Still it remains unclear how to deal with 3D semantic segmentation in the best way. We propose a novel 2D-3D label transfer based on Bayesian updates and dense pairwise 3D Conditional Random Fields. This approach allows us to use 2D semantic segmentations to create a consistent 3D semantic reconstruction of indoor scenes. To this end, we also propose a fast 2D semantic segmentation approach based on Randomized Decision Forests.
Furthermore, we show that it is not needed to obtain a semantic segmentation for every frame in a sequence in order to create accurate semantic 3D reconstructions. We evaluate our approach on both NYU Depth datasets and show that we can obtain a significant speed-up compared to other methods.", "In this paper, we propose a non-local structured prior for volumetric multi-view 3D reconstruction. Towards this goal, we present a novel Markov random field model based on ray potentials in which assumptions about large 3D surface patches such as planarity or Manhattan world constraints can be efficiently encoded as probabilistic priors. We further derive an inference algorithm that reasons jointly about voxels, pixels and image segments, and estimates marginal distributions of appearance, occupancy, depth, normals and planarity. Key to tractable inference is a novel hybrid representation that spans both voxel and pixel space and that integrates non-local information from 2D image segmentations in a principled way. We compare our non-local prior to commonly employed local smoothness assumptions and a variety of state-of-the-art volumetric reconstruction baselines on challenging outdoor scenes with textureless and reflective surfaces. Our experiments indicate that regularizing over larger distances has the potential to resolve ambiguities where local regularizers fail.", "This paper presents an efficient system for simultaneous dense scene reconstruction and object labeling in real-world environments (captured with an RGB-D sensor). The proposed system starts with the generation of object proposals in the scene. It then tracks spatio-temporally consistent object proposals across multiple frames and produces a dense reconstruction of the scene. 
In parallel, the proposed system uses an efficient inference algorithm, where object class probabilities are computed at the object level and fused into a voxel-based prediction hypothesis modeled on the voxels of the reconstructed scene. Our extensive experiments using challenging RGB-D object and scene datasets, and live video streams from Microsoft Kinect show that the proposed system achieved competitive 3D scene reconstruction and object labeling results compared to the state-of-the-art methods.", "Ever more robust, accurate and detailed mapping using visual sensing has proven to be an enabling factor for mobile robots across a wide variety of applications. For the next level of robot intelligence and intuitive user interaction, maps need to extend beyond geometry and appearance - they need to contain semantics. We address this challenge by combining Convolutional Neural Networks (CNNs) and a state of the art dense Simultaneous Localisation and Mapping (SLAM) system, ElasticFusion, which provides long-term dense correspondence between frames of indoor RGB-D video even during loopy scanning trajectories. These correspondences allow the CNN's semantic predictions from multiple viewpoints to be probabilistically fused into a map. This not only produces a useful semantic 3D map, but we also show on the NYUv2 dataset that fusing multiple predictions leads to an improvement even in the 2D semantic labelling over baseline single frame predictions. We also show that for a smaller reconstruction dataset with larger variation in prediction viewpoint, the improvement over single frame segmentation increases. Our system is efficient enough to allow real-time interactive use at frame-rates of approximately 25Hz.", "" ] }
1606.03968
2607935738
We describe a system to detect objects in three-dimensional space using video and inertial sensors (accelerometer and gyrometer), ubiquitous in modern mobile platforms from phones to drones. Inertials afford the ability to impose class-specific scale priors for objects, and provide a global orientation reference. A minimal sufficient representation, the posterior of semantic (identity) and syntactic (pose) attributes of objects in space, can be decomposed into a geometric term, which can be maintained by a localization-and-mapping filter, and a likelihood function, which can be approximated by a discriminatively-trained convolutional neural network. The resulting system can process the video stream causally in real time, and provides a representation of objects in the scene that is persistent: Confidence in the presence of objects grows with evidence, and objects previously seen are kept in memory even when temporarily occluded, with their return into view automatically predicted to prime re-detection.
Recent work in data association @cite_23 aims to directly infer the association map, which is computationally prohibitive at the scale needed in our real-time system. We therefore resort to heuristics, described in Sect. . More specifically, our implementation leverages existing visual-inertial filters @cite_5 @cite_18 @cite_69 and single image-trained CNNs @cite_7 @cite_63 @cite_0 .
{ "cite_N": [ "@cite_18", "@cite_69", "@cite_7", "@cite_0", "@cite_23", "@cite_63", "@cite_5" ], "mid": [ "2077751272", "", "2102605133", "2950703487", "2526674647", "", "116077224" ], "abstract": [ "When fusing visual and inertial measurements for motion estimation, each measurement's sampling time must be precisely known. This requires knowledge of the time offset that inevitably exists between the two sensors' data streams. The first contribution of this work is an online approach for estimating this time offset, by treating it as an additional state variable to be estimated along with all other variables of interest (inertial measurement unit (IMU) pose and velocity, biases, camera-to-IMU transformation, feature positions). We show that this approach can be employed in pose-tracking with mapped features, in simultaneous localization and mapping, and in visual-inertial odometry. The second main contribution of this paper is an analysis of the identifiability of the time offset between the visual and inertial sensors. We show that the offset is locally identifiable, except in a small number of degenerate motion cases, which we characterize in detail. These degenerate cases are either (i) cases known to cause loss of observability even when no time offset exists, or (ii) cases that are unlikely to occur in practice. Our simulation and experimental results validate these theoretical findings, and demonstrate that the proposed approach yields high-precision, consistent estimates, in scenarios involving either known or unknown features, with both constant and time-varying offsets.", "", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context.
In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.", "In CNN-based object detection methods, region proposal becomes a bottleneck when objects exhibit significant scale variation, occlusion or truncation. In addition, these methods mainly focus on 2D object detection and cannot estimate detailed properties of objects. In this paper, we propose subcategory-aware CNNs for object detection. We introduce a novel region proposal network that uses subcategory information to guide the proposal generating process, and a new detection network for joint detection and subcategory classification. By using subcategories related to object pose, we achieve state-of-the-art performance on both detection and pose estimation on commonly used benchmarks.", "Data association is one of the fundamental problems in multi-sensor systems. Most current techniques rely on pairwise data associations which can be spurious even after the employment of outlier rejection schemes. Considering multiple pairwise associations at once significantly increases accuracy and leads to consistency.
In this work, we propose two fully decentralized methods for consistent global data association from pairwise data associations. The first method is a consensus algorithm on the set of doubly stochastic matrices. The second method is a decentralization of the spectral method proposed by . We demonstrate the effectiveness of both methods using theoretical analysis and experimental evaluation.", "", "In this paper, we study estimator inconsistency in Vision-aided Inertial Navigation Systems (VINS) from a standpoint of system observability. We postulate that a leading cause of inconsistency is the gain of spurious information along unobservable directions, resulting in smaller uncertainties, larger estimation errors, and possibly even divergence.We develop an Observability-Constrained VINS (OC-VINS), which explicitly enforces the unobservable directions of the system, hence preventing spurious information gain and reducing inconsistency. Our analysis, along with the proposed method for reducing inconsistency, are extensively validated with simulation trials and real-world experiments." ] }
1606.04223
2440825761
Most Information Retrieval models compute the relevance score of a document for a given query by summing term weights specific to a document or a query. Heuristic approaches, like TF-IDF, or probabilistic models, like BM25, are used to specify how a term weight is computed. In this paper, we propose to leverage learning-to-rank principles to learn how to compute a term weight for a given document based on the term occurrence pattern.
Rather than relying on a pre-established term weighting function whose hyperparameters are learned, one can instead learn the term weighting function itself. This has been explored by using genetic algorithms to evolve term weighting functions (represented as trees) @cite_24 , using terminals like term frequency and document frequency, and non-terminals (functions) like sum, product and logarithm. Closer to our work, @cite_25 proposed to use a multi-layer neural network to learn the term weight directly, given features like term frequency and document frequency. In this work, we go further: instead of using pre-defined features, we propose to estimate the term weight directly from the occurrences of a given term in the document and its context (the document collection).
{ "cite_N": [ "@cite_24", "@cite_25" ], "mid": [ "2149944818", "2165613971" ], "abstract": [ "This paper describes a method, using Genetic Programming, to automatically determine term weighting schemes for the vector space model. Based on a set of queries and their human determined relevant documents, weighting schemes are evolved which achieve a high average precision. In Information Retrieval (IR) systems, useful information for term weighting schemes is available from the query, individual documents and the collection as a whole. We evolve term weighting schemes in both local (within-document) and global (collection-wide) domains which interact with each other correctly to achieve a high average precision. These weighting schemes are tested on well-known test collections and are compared to the traditional tf-idf weighting scheme and to the BM25 weighting scheme using standard IR performance metrics. Furthermore, we show that the global weighting schemes evolved on small collections also increase average precision on larger TREC data. These global weighting schemes are shown to adhere to Luhn's resolving power as both high and low frequency terms are assigned low weights. However, the local weightings evolved on small collections do not perform as well on large collections. We conclude that in order to evolve improved local (within-document) weighting schemes it is necessary to evolve these on large collections.", "Despite the widespread use of BM25, there have been few studies examining its effectiveness on a document description over single and multiple field combinations. We determine the effectiveness of BM25 on various document fields. We find that BM25 models relevance on popularity fields such as anchor text and query click information no better than a linear function of the field attributes. We also find query click information to be the single most important field for retrieval. 
In response, we develop a machine learning approach to BM25-style retrieval that learns, using LambdaRank, from the input attributes of BM25. Our model significantly improves retrieval effectiveness over BM25 and BM25F. Our data-driven approach is fast, effective, avoids the problem of parameter tuning, and can directly optimize for several common information retrieval measures. We demonstrate the advantages of our model on a very large real-world Web data collection." ] }
1606.04223
2440825761
Most Information Retrieval models compute the relevance score of a document for a given query by summing term weights specific to a document or a query. Heuristic approaches, like TF-IDF, or probabilistic models, like BM25, are used to specify how a term weight is computed. In this paper, we propose to leverage learning-to-rank principles to learn how to compute a term weight for a given document based on the term occurrence pattern.
@cite_6 proposed to use a convolutional neural network. Instead of starting from word embeddings, they used letter tri-grams (so as to have a small vocabulary), i.e. each document is represented by the counts of the tri-grams occurring in it. The output is a fixed-size representation vector that is used to compute the relevance to a query. @cite_22 extended their work by first computing the representation of word tri-grams, before using a max-pooling operation (maximum over each dimension of the representation space) to represent the full document. Finally, @cite_0 used a recurrent neural network (RNN), the representation of a document or a query being the state of the RNN at the end of the processed sequence. Compared to our work, these approaches need a large quantity of training data, and we believe they are not suited for many IR tasks dealing with precise named entities. In the context of question answering, @cite_17 proposed to learn whether a sentence is an answer to a given query using a convolutional neural network, but had to introduce a set of query-document features, such as the word overlap count, to improve their results.
{ "cite_N": [ "@cite_0", "@cite_22", "@cite_6", "@cite_17" ], "mid": [ "2142920810", "2131876387", "2136189984", "1966443646" ], "abstract": [ "This paper develops a model that addresses sentence embedding, a hot topic in current natural language processing research, using recurrent neural networks (RNN) with Long Short-Term Memory (LSTM) cells. The proposed LSTM-RNN model sequentially takes each word in a sentence, extracts its information, and embeds it into a semantic vector. Due to its ability to capture long term memory, the LSTM-RNN accumulates increasingly richer information as it goes through the sentence, and when it reaches the last word, the hidden layer of the network provides a semantic representation of the whole sentence. In this paper, the LSTM-RNN is trained in a weakly supervised manner on user click-through data logged by a commercial web search engine. Visualization and analysis are performed to understand how the embedding process works. The model is found to automatically attenuate the unimportant words and detect the salient keywords in the sentence. Furthermore, these detected keywords are found to automatically activate different cells of the LSTM-RNN, where words belonging to a similar topic activate the same cell. As a semantic representation of the sentence, the embedding vector can be used in many different applications. These automatic keyword detection and topic allocation abilities enabled by the LSTM-RNN allow the network to perform document retrieval, a difficult language processing task, where the similarity between the query and documents can be measured by the distance between their corresponding sentence embedding vectors computed by the LSTM-RNN. On a web search task, the LSTM-RNN embedding is shown to significantly outperform several existing state of the art methods. We emphasize that the proposed model generates sentence embedding vectors that are specially useful for web document retrieval tasks. 
A comparison with a well-known general sentence embedding method, the Paragraph Vector, is performed. The results show that the proposed method in this paper significantly outperforms the Paragraph Vector method for the web document retrieval task.", "In this paper, we propose a new latent semantic model that incorporates a convolutional-pooling structure over word sequences to learn low-dimensional, semantic vector representations for search queries and Web documents. In order to capture the rich contextual structures in a query or a document, we start with each word within a temporal context window in a word sequence to directly capture contextual features at the word n-gram level. Next, the salient word n-gram features in the word sequence are discovered by the model and are then aggregated to form a sentence-level feature vector. Finally, a non-linear transformation is applied to extract high-level semantic information to generate a continuous vector representation for the full text string. The proposed convolutional latent semantic model (CLSM) is trained on clickthrough data and is evaluated on a Web document ranking task using a large-scale, real-world data set. Results show that the proposed model effectively captures salient semantic information in queries and documents for the task while significantly outperforming previous state-of-the-art semantic models.
To make our models applicable to large-scale Web search applications, we also use a technique called word hashing, which is shown to effectively scale up our semantic models to handle large vocabularies which are common in such tasks. The new models are evaluated on a Web document ranking task using a real-world data set. Results show that our best model significantly outperforms other latent semantic models, which were considered state-of-the-art in the performance prior to the work presented in this paper.", "Learning a similarity function between pairs of objects is at the core of learning to rank approaches. In information retrieval tasks we typically deal with query-document pairs, in question answering -- question-answer pairs. However, before learning can take place, such pairs needs to be mapped from the original space of symbolic words into some feature space encoding various aspects of their relatedness, e.g. lexical, syntactic and semantic. Feature engineering is often a laborious task and may require external knowledge sources that are not always available or difficult to obtain. Recently, deep learning approaches have gained a lot of attention from the research community and industry for their ability to automatically learn optimal feature representation for a given task, while claiming state-of-the-art performance in many tasks in computer vision, speech recognition and natural language processing. In this paper, we present a convolutional neural network architecture for reranking pairs of short texts, where we learn the optimal representation of text pairs and a similarity function to relate them in a supervised way from the available training data. Our network takes only words in the input, thus requiring minimal preprocessing. In particular, we consider the task of reranking short text pairs where elements of the pair are sentences. 
We test our deep learning system on two popular retrieval tasks from TREC: Question Answering and Microblog Retrieval. Our model demonstrates strong performance on the first task beating previous state-of-the-art systems by about 3 absolute points in both MAP and MRR and shows comparable results on tweet reranking, while enjoying the benefits of no manual feature engineering and no additional syntactic parsers." ] }
1606.04223
2440825761
Most Information Retrieval models compute the relevance score of a document for a given query by summing term weights specific to a document or a query. Heuristic approaches, like TF-IDF, or probabilistic models, like BM25, are used to specify how a term weight is computed. In this paper, we propose to leverage learning-to-rank principles to learn how to compute a term weight for a given document based on the term occurrence pattern.
In parallel, Zheng and Callan @cite_13 proposed a term-independent representation of query terms to define the weight of each term. The central idea of their work is to represent each term of the query as the difference between the term vector and the mean of the vectors representing the terms of the query, thus capturing the semantic difference between the term and the query. Our research direction is orthogonal, since we are interested in the document weight rather than the query weight, but the idea of finding a term-independent representation inspired our present work.
{ "cite_N": [ "@cite_13" ], "mid": [ "1973289172" ], "abstract": [ "Term weighting is a fundamental problem in IR research and numerous weighting models have been proposed. Proper term weighting can greatly improve retrieval accuracies, which essentially involves two types of query understanding: interpreting the query and judging the relative contribution of the terms to the query. These two steps are often dealt with separately, and complicated yet not so effective weighting strategies are proposed. In this paper, we propose to address query interpretation and term weighting in a unified framework built upon distributed representations of words from recent advances in neural network language modeling. Specifically, we represent term and query as vectors in the same latent space, construct features for terms using their word vectors and learn a model to map the features onto the defined target term weights. The proposed method is simple yet effective. Experiments using four collections and two retrieval models demonstrates significantly higher retrieval accuracies than baseline models." ] }
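The term-independent representation described above (each query term as its vector minus the query mean) amounts to a one-line computation; a hypothetical sketch:

```python
import numpy as np

def term_query_features(term_vecs):
    """For each query term vector, return the difference between the term
    vector and the mean of all term vectors in the query, yielding a
    term-independent feature that captures how the term deviates from
    the query as a whole."""
    V = np.asarray(term_vecs, dtype=float)  # shape (n_terms, dim)
    return V - V.mean(axis=0)
```

The resulting features sum to zero across the query by construction, which is what makes them comparable across queries of different lengths and topics.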
1606.04082
2428735145
We present a general framework for Bayesian estimation of incompletely observed multivariate diffusion processes. Observations are assumed to be discrete in time, noisy and incomplete. We assume the drift and diffusion coefficient depend on an unknown parameter. A data-augmentation algorithm for drawing from the posterior distribution is presented which is based on simulating diffusion bridges conditional on a noisy incomplete observation at an intermediate time. The dynamics of such filtered bridges are derived and it is shown how these can be simulated using a generalised version of the guided proposals introduced in Schauer, Van der Meulen and Van Zanten (2017, Bernoulli 23(4A)).
A diffusion bridge is an infinite-dimensional random variable. The approach taken in @cite_8 and @cite_22 is to approximate this stochastic process by a finite-dimensional vector and then carry out simulation. @cite_10 call this the projection-simulation strategy and advocate the simulation-projection strategy, where an appropriate Monte-Carlo scheme is designed that operates on the infinite-dimensional space of diffusion bridges. For practical purposes it needs to be discretised, but the discretisation error can be eliminated by letting the mesh-width tend to zero. This implies that the algorithm remains valid when taking this limit. We refer to @cite_10 for a discussion of additional advantages of the simulation-projection strategy, which we will employ in this paper.
{ "cite_N": [ "@cite_10", "@cite_22", "@cite_8" ], "mid": [ "1527131282", "", "2134589175" ], "abstract": [ "This article develops a class of Monte Carlo (MC) methods for simulating conditioned diffusion sample paths, with special emphasis on importance sampling schemes. We restrict attention to a particular type of conditioned diffusions, the so-called diffusion bridge processes. The diffusion bridge is the process obtained by conditioning a diffusion to start and finish at specific values at two consecutive times t0 < t1.", "", "Diffusion processes governed by stochastic differential equations (SDEs) are a well-established tool for modelling continuous time data from a wide range of areas. Consequently, techniques have been developed to estimate diffusion parameters from partial and discrete observations. Likelihood-based inference can be problematic as closed form transition densities are rarely available. One widely used solution involves the introduction of latent data points between every pair of observations to allow a Euler-Maruyama approximation of the true transition densities to become accurate. In recent literature, Markov chain Monte Carlo (MCMC) methods have been used to sample the posterior distribution of latent data and model parameters; however, naive schemes suffer from a mixing problem that worsens with the degree of augmentation. A global MCMC scheme that can be applied to a large class of diffusions and whose performance is not adversely affected by the number of latent values is therefore explored. The methodology is illustrated by estimating parameters governing an auto-regulatory gene network, using partial and discrete data that are subject to measurement error." ] }
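A minimal sketch of simulating a conditioned path with an Euler-Maruyama discretisation, assuming a scalar diffusion and the simple Brownian-bridge-style pulling term (x_T - x)/(T - t); the guided proposals of the paper are considerably more general than this toy guiding term:

```python
import numpy as np

def guided_bridge(b, sigma, x0, xT, T, n, rng):
    """Euler-Maruyama discretisation of a guided proposal for a scalar
    diffusion bridge from x0 at time 0 to xT at time T.
    The drift b(t, x) is augmented with the pulling term (xT - x)/(T - t),
    which forces the path towards the required endpoint."""
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        t = i * dt
        drift = b(t, x[i]) + (xT - x[i]) / (T - t)
        x[i + 1] = x[i] + drift * dt + sigma(t, x[i]) * np.sqrt(dt) * rng.standard_normal()
    x[-1] = xT  # the guiding term hits the endpoint in the limit; pin it exactly
    return x
```

In an MCMC scheme such proposals are then accepted or rejected via a likelihood ratio against the true (intractable) bridge law; that correction step is omitted here.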
1606.04082
2428735145
We present a general framework for Bayesian estimation of incompletely observed multivariate diffusion processes. Observations are assumed to be discrete in time, noisy and incomplete. We assume the drift and diffusion coefficient depend on an unknown parameter. A data-augmentation algorithm for drawing from the posterior distribution is presented which is based on simulating diffusion bridges conditional on a noisy incomplete observation at an intermediate time. The dynamics of such filtered bridges are derived and it is shown how these can be simulated using a generalised version of the guided proposals introduced in Schauer, Van der Meulen and Van Zanten (2017, Bernoulli 23(4A)).
Besides the potentially difficult simulation of diffusion bridges, there is another well-known problem with MCMC algorithms for the problem under consideration. If there are unknown parameters in the diffusion coefficient @math , any MCMC scheme that includes the latent diffusion bridges is reducible. The reason is that a continuous sample path fixes the diffusion coefficient by means of its quadratic variation process. This phenomenon was first discussed in @cite_6 , and a solution to it was proposed in both @cite_20 and @cite_8 within the projection-simulation setup. The resulting algorithm is referred to as the innovation scheme, as the innovations of the bridges are used as auxiliary data instead of the discretised bridges themselves. A slightly more general solution was recently put forward in @cite_3 using the simulation-projection setup.
{ "cite_N": [ "@cite_20", "@cite_3", "@cite_6", "@cite_8" ], "mid": [ "2148381455", "2436131410", "2088413985", "2134589175" ], "abstract": [ "This paper provides methods for carrying out likelihood based inference for diffusion driven models, for example discretely observed multivariate diffusions, continuous time stochastic volatility models and counting process models. The diffusions can potentially be non-stationary. Although our methods are sampling based, making use of Markov chain Monte Carlo methods to sample the posterior distribution of the relevant unknowns, our general strategies and details are different from previous work along these lines. The methods we develop are simple to implement and simulation efficient. Importantly, unlike previous methods, the performance of our technique is not worsened, in fact it improves, as the degree of latent augmentation is increased to reduce the bias of the Euler approximation. In addition, our method is not subject to a degeneracy that afflicts previous techniques when the degree of latent augmentation is increased. We also discuss issues of model choice, model checking and filtering. The techniques and ideas are applied to both simulated and real data.", "Estimation of parameters of a diffusion based on discrete time observations poses a difficult problem due to the lack of a closed form expression for the likelihood. From a Bayesian computational perspective it can be casted as a missing data problem where the diffusion bridges in between discrete-time observations are missing. The computational problem can then be dealt with using a Markov-chain Monte-Carlo method known as data-augmentation. If unknown parameters appear in the diffusion coefficient, direct implementation of data-augmentation results in a Markov chain that is reducible. Furthermore, data-augmentation requires efficient sampling of diffusion bridges, which can be difficult, especially in the multidimensional case. 
We present a general framework to deal with these problems that does not rely on discretisation. The construction generalises previous approaches and sheds light on the assumptions necessary to make these approaches work. We define a random-walk type Metropolis-Hastings sampler for updating diffusion bridges. Our methods are illustrated using guided proposals for sampling diffusion bridges. These are Markov processes obtained by adding a guiding term to the drift of the diffusion. We give general guidelines on the construction of these proposals and introduce a time change and scaling of the guided proposal that reduces discretisation error. Numerical examples demonstrate the performance of our methods.", "In this paper, we introduce a new Markov chain Monte Carlo approach to Bayesian analysis of discretely observed diffusion processes. We treat the paths between any two data points as missing data. As such, we show that, because of full dependence between the missing paths and the volatility of the diffusion, the rate of convergence of basic algorithms can be arbitrarily slow if the amount of the augmentation is large. We offer a transformation of the diffusion which breaks down dependency between the transformed missing paths and the volatility of the diffusion. We then propose two efficient Markov chain Monte Carlo algorithms to sample from the posterior-distribution of the transformed missing observations and the parameters of the diffusion. We apply our results to examples involving simulated data and also to Eurodollar short-rate data.", "Diffusion processes governed by stochastic differential equations (SDEs) are a well-established tool for modelling continuous time data from a wide range of areas. Consequently, techniques have been developed to estimate diffusion parameters from partial and discrete observations. Likelihood-based inference can be problematic as closed form transition densities are rarely available. 
One widely used solution involves the introduction of latent data points between every pair of observations to allow a Euler-Maruyama approximation of the true transition densities to become accurate. In recent literature, Markov chain Monte Carlo (MCMC) methods have been used to sample the posterior distribution of latent data and model parameters; however, naive schemes suffer from a mixing problem that worsens with the degree of augmentation. A global MCMC scheme that can be applied to a large class of diffusions and whose performance is not adversely affected by the number of latent values is therefore explored. The methodology is illustrated by estimating parameters governing an auto-regulatory gene network, using partial and discrete data that are subject to measurement error." ] }
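The key idea of the innovation scheme discussed above is to treat the driving noise (the innovations), rather than the bridge path itself, as the latent variable: the path is then a deterministic function of the parameter and the innovations, so updating a parameter in the diffusion coefficient is no longer blocked by the path's quadratic variation. A hypothetical sketch of this non-centred map:

```python
import numpy as np

def path_from_innovations(theta, b, sigma, x0, z, dt):
    """Map iid N(0,1) innovations z to a discretised diffusion path under
    parameter theta via Euler-Maruyama. Keeping z (not the path) as the
    latent variable lets a sampler update theta in the diffusion
    coefficient without being pinned by the path's quadratic variation."""
    x = np.empty(len(z) + 1)
    x[0] = x0
    for i, zi in enumerate(z):
        t = i * dt
        x[i + 1] = x[i] + b(theta, t, x[i]) * dt + sigma(theta, t, x[i]) * np.sqrt(dt) * zi
    return x
```

An MCMC sweep then alternates between updating z given theta and updating theta given z, recomputing the implied path each time.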
1606.04289
2440757793
Automated Text Scoring (ATS) provides a cost-effective and consistent alternative to human marking. However, in order to achieve good performance, the predictive features of the system need to be manually engineered by human experts. We introduce a model that forms word representations by learning the extent to which specific words contribute to the text's score. Using Long-Short Term Memory networks to represent the meaning of texts, we demonstrate that a fully automated framework is able to achieve excellent results over similar approaches. In an attempt to make our results more interpretable, and inspired by recent advances in visualizing neural networks, we introduce a novel method for identifying the regions of the text that the model has found more discriminative.
In 2012, Kaggle, http://www.kaggle.com/c/asap-aes sponsored by the Hewlett Foundation, hosted the Automated Student Assessment Prize (ASAP) contest, aiming to demonstrate the capabilities of automated text scoring systems @cite_15 . The dataset released consists of around twenty thousand texts produced by middle-school English-speaking students, which we use as part of our experiments to develop our models.
{ "cite_N": [ "@cite_15" ], "mid": [ "2001744998" ], "abstract": [ "This study compared short-form constructed responses evaluated by both human raters and machine scoring algorithms. The context was a public competition on which both public competitors and commercial vendors vied to develop machine scoring algorithms that would match or exceed the performance of operational human raters in a summative high-stakes testing environment. Data (N = 25,683) were drawn from three different states, employed 10 different prompts, and were drawn from two different secondary grade levels. Samples ranging in size from 2,130 to 2,999 were randomly selected from the data sets provided by the states and then randomly divided into three sets: a training set, a test set, and a validation set. Machine performance on all of the agreement measures failed to match that of the human raters. The current study concluded with recommendations on steps that might improve machine-scoring algorithms before they can be used in any operational way." ] }
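As an illustration of the kind of model described above, here is a bare-bones LSTM scorer: run an LSTM over word embeddings, mean-pool the hidden states, and map to a scalar score. All names and the single-layer structure are our simplification, and the weights here are random rather than learned end-to-end as in the paper:

```python
import numpy as np

def lstm_score(embeddings, params):
    """Minimal sketch of an LSTM-based text scorer.
    `embeddings`: array of word vectors, one row per token.
    `params`: dict of weight matrices Wf, Wi, Wo, Wc of shape
    (h_dim, x_dim + h_dim), output weights w_out (h_dim,) and bias b_out."""
    Wf, Wi, Wo, Wc = params["Wf"], params["Wi"], params["Wo"], params["Wc"]
    w_out, b_out = params["w_out"], params["b_out"]
    h_dim = w_out.shape[0]
    sigm = lambda a: 1.0 / (1.0 + np.exp(-a))
    h = np.zeros(h_dim)
    c = np.zeros(h_dim)
    hs = []
    for x in embeddings:
        z = np.concatenate([x, h])
        f = sigm(Wf @ z)               # forget gate
        i = sigm(Wi @ z)               # input gate
        o = sigm(Wo @ z)               # output gate
        c = f * c + i * np.tanh(Wc @ z)
        h = o * np.tanh(c)
        hs.append(h)
    doc = np.mean(hs, axis=0)          # mean-pooled document representation
    return float(w_out @ doc + b_out)
```

Training would backpropagate a regression loss (e.g. squared error against the human mark) through this computation; the paper's visualisation method then inspects which input regions most move the score.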
1606.03713
2428063894
The main challenges in large-scale people tracking are the recognition of people density in a specific area and tracking the people flow path. To address these challenges, we present SenseFlow, a lightweight people tracking system. SenseFlow utilises off-the-shelf devices which sniff probe requests periodically polled by user's smartphones in a passive manner. We demonstrate the feasibility of SenseFlow by building a proof-of-concept prototype and undertaking extensive evaluations in real-world settings. We deploy the system in one laboratory to study office hours of researchers, a crowded public area in city to evaluate the scalability and performance "in the wild", and four classrooms in the university to monitor the number of students. We also evaluate SenseFlow with varying walking speeds and different models of smartphones to investigate the people flow tracking performance.
The GPS-based localisation system is widely used for outdoor position determination and this technology is currently implemented in many mobile devices @cite_2 . Unfortunately, the main challenge in indoor environments is the unavailability of GPS signals, since the technology requires Line-of-Sight connections to satellites. In addition, such a system requires the user to install an application on the mobile device in order to enable GPS localisation, and thus does not track people in a passive way.
{ "cite_N": [ "@cite_2" ], "mid": [ "2006036128" ], "abstract": [ "In recent years the need for indoor localisation has increased. Earlier systems have been deployed in order to demonstrate that indoor localisation can be done. Many researchers are referring to location estimation as a crucial component in numerous applications. There is no standard in indoor localisation thus the selection of an existing system needs to be done based on the environment being tracked, the accuracy and the precision required. Modern localisation systems use various techniques such as Received Signal Strength Indicator (RSSI), Time of Arrival (TOA), Time Difference of Arrival (TDOA) and Angle of Arrival (AOA). This paper is a survey of various active and passive localisation techniques developed over the years. The majority of the localisation techniques are part of the active systems class due to the necessity of tags electronic devices carried by the person being tracked or mounted on objects in order to estimate their position. The second class called passive localisation represents the estimation of a person's position without the need for a physical device i.e. tags or sensors. The assessment of the localisation systems is based on the wireless technology used, positioning algorithm, accuracy and precision, complexity, scalability and costs. In this paper we are comparing various systems presenting their advantages and disadvantages." ] }
1606.03713
2428063894
The main challenges in large-scale people tracking are the recognition of people density in a specific area and tracking the people flow path. To address these challenges, we present SenseFlow, a lightweight people tracking system. SenseFlow utilises off-the-shelf devices which sniff probe requests periodically polled by user's smartphones in a passive manner. We demonstrate the feasibility of SenseFlow by building a proof-of-concept prototype and undertaking extensive evaluations in real-world settings. We deploy the system in one laboratory to study office hours of researchers, a crowded public area in city to evaluate the scalability and performance "in the wild", and four classrooms in the university to monitor the number of students. We also evaluate SenseFlow with varying walking speeds and different models of smartphones to investigate the people flow tracking performance.
Camera-based systems have been proposed to address people tracking using thermal infrared, stereo and time-of-flight cameras @cite_15 @cite_7 . The vast majority of human-detection approaches currently deployed in camera-based systems rely on background subtraction, pattern matching and face recognition, which process the conventional images from the camera. However, these systems are affected by lighting variations and shadows. Moreover, camera-based systems have limited coverage due to a fixed location and viewing angle @cite_5 .
{ "cite_N": [ "@cite_5", "@cite_15", "@cite_7" ], "mid": [ "2044049786", "1983363688", "" ], "abstract": [ "The coverage optimization problem has been examined thoroughly for omni-directional sensor networks in the past decades. However, the coverage problem in directional sensor networks (DSN) has newly taken attraction, especially with the increasing number of wireless multimedia sensor network (WMSN) applications. Directional sensor nodes equipped with ultrasound, infrared, and video sensors differ from traditional omni-directional sensor nodes with their unique characteristics, such as angle of view, working direction, and line of sight (LoS) properties. Therefore, DSN applications require specific solutions and techniques for coverage enhancement. In this survey article, we mainly aim at categorizing available coverage optimization solutions and survey their problem definitions, assumptions, contributions, complexities and performance results. We categorize available studies about coverage enhancement into four categories. Target-based coverage enhancement, area-based coverage enhancement, coverage enhancement with guaranteed connectivity, and network lifetime prolonging. We define sensing models, design issues and challenges for directional sensor networks and describe their (dis)similarities to omni-directional sensor networks. We also give some information on the physical capabilities of directional sensors available on the market. Moreover, we specify the (dis)advantages of motility and mobility in terms of the coverage and network lifetime of DSNs.", "Analysis of human behaviour through visual information has been a highly active research topic in the computer vision community. This was previously achieved via images from a conventional camera, however recently depth sensors have made a new type of data available. This survey starts by explaining the advantages of depth imagery, then describes the new sensors that are available to obtain it. 
In particular, the Microsoft Kinect has made high-resolution real-time depth cheaply available. The main published research on the use of depth imagery for analysing human activity is reviewed. Much of the existing work focuses on body part detection and pose estimation. A growing research area addresses the recognition of human actions. The publicly available datasets that include depth imagery are listed, as are the software libraries that can acquire it from a sensor. This survey concludes by summarising the current state of work on this topic, and pointing out promising future research directions. For both researchers and practitioners who are familiar with this topic and those who are new to this field, the review will aid in the selection, and development, of algorithms using depth data.", "" ] }
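The background-subtraction step mentioned above can be sketched in a few lines; the threshold and learning-rate values here are illustrative, not from any cited system:

```python
import numpy as np

def foreground_mask(frame, background, thresh=25):
    """Per-pixel background subtraction: mark pixels whose absolute
    difference from the background model exceeds a threshold."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return diff > thresh

def update_background(background, frame, alpha=0.05):
    """Running-average background model: slowly absorbs scene changes.
    Fast lighting changes and shadows still leak into the mask, which is
    exactly the weakness of camera-based systems noted above."""
    return (1 - alpha) * background + alpha * frame
```

Connected regions of the mask are then grouped into person candidates and passed to a classifier or tracker.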
1606.03713
2428063894
The main challenges in large-scale people tracking are the recognition of people density in a specific area and tracking the people flow path. To address these challenges, we present SenseFlow, a lightweight people tracking system. SenseFlow utilises off-the-shelf devices which sniff probe requests periodically polled by user's smartphones in a passive manner. We demonstrate the feasibility of SenseFlow by building a proof-of-concept prototype and undertaking extensive evaluations in real-world settings. We deploy the system in one laboratory to study office hours of researchers, a crowded public area in city to evaluate the scalability and performance "in the wild", and four classrooms in the university to monitor the number of students. We also evaluate SenseFlow with varying walking speeds and different models of smartphones to investigate the people flow tracking performance.
Apart from cameras, other devices used for people tracking are range finders such as radar and sonar. @cite_10 proposes people tracking using multiple layers of 2D laser range scans. @cite_19 presents a valuable analysis of pedestrian detection in an urban scenario using exclusively lidar-based features. Unfortunately, the susceptibility of wave and laser signals to interference leads to a large number of false negatives @cite_4 .
{ "cite_N": [ "@cite_19", "@cite_10", "@cite_4" ], "mid": [ "2102691275", "2125114795", "" ], "abstract": [ "Reliable detection and classification of vulnerable road users constitute a critical issue on safety protection systems for intelligent vehicles driving in urban zones. In this subject, most of the perception systems have LIDAR and or Radar as primary detection modules and vision-based systems for object classification. This work, on the other hand, presents a valuable analysis of pedestrian detection in urban scenario using exclusively LIDAR-based features. The aim is to explore how much information can be extracted from LIDAR sensors for pedestrian detection. Moreover, this study will be useful to compose multi-sensor based pedestrian detection systems using not only LIDAR but also vision sensors. Experimental results using our data set and a detailed classification performance analysis are presented, with comparisons among various classification techniques.", "People detection is a key capacity for robotics systems that have to interact with humans. This paper addresses the problem of detecting people using multiple layers of 2D laser range scans. Each layer contains a classifier able to detect a particular body part such as a head, an upper body or a leg. These classifiers are learned using a supervised approach based on AdaBoost. The final person detector is composed of a probabilistic combination of the outputs from the different classifiers. Experimental results with real data demonstrate the effectiveness of our approach to detect persons in indoor environments and its ability to deal with occlusions.", "" ] }
1606.03713
2428063894
The main challenges in large-scale people tracking are the recognition of people density in a specific area and tracking the people flow path. To address these challenges, we present SenseFlow, a lightweight people tracking system. SenseFlow utilises off-the-shelf devices which sniff probe requests periodically polled by user's smartphones in a passive manner. We demonstrate the feasibility of SenseFlow by building a proof-of-concept prototype and undertaking extensive evaluations in real-world settings. We deploy the system in one laboratory to study office hours of researchers, a crowded public area in city to evaluate the scalability and performance "in the wild", and four classrooms in the university to monitor the number of students. We also evaluate SenseFlow with varying walking speeds and different models of smartphones to investigate the people flow tracking performance.
In @cite_25 , the social links in a venue of large political and religious gatherings are studied from the probe requests. A database is built that associates each device, identified by its MAC address, with the list of SSIDs derived from its probe requests. Moreover, an automated methodology is proposed to learn the social links of mobile devices, given that two users sharing one or more SSIDs indicates a potential social relationship between the two.
{ "cite_N": [ "@cite_25" ], "mid": [ "2061480123" ], "abstract": [ "The ever increasing ubiquitousness of WiFi access points, coupled with the diffusion of smartphones, suggest that Internet every time and everywhere will soon (if not already has) become a reality. Even in presence of 3G connectivity, our devices are built to switch automatically to WiFi networks so to improve user experience. Most of the times, this is achieved by recurrently broadcasting automatic connectivity requests (known as Probe Requests) to known access points (APs), like, e.g., \"Home WiFi\", \"Campus WiFi\", and so on. In a large gathering of people, the number of these probes can be very high. This scenario rises a natural question: \"Can significant information on the social structure of a large crowd and on its socioeconomic status be inferred by looking at smartphone probes?\". In this work we give a positive answer to this question. We organized a 3-months long campaign, through which we collected around 11M probes sent by more than 160K different devices. During the campaign we targeted national and international events that attracted large crowds as well as other gatherings of people. Then, we present a simple and automatic methodology to build the underlying social graph of the smartphone users, starting from their probes. We do so for each of our target events, and find that they all feature social-network properties. In addition, we show that, by looking at the probes in an event, we can learn important sociological aspects of its participants---language, vendor adoption, and so on." ] }
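The SSID-based link inference described above can be sketched as follows; the data layout and threshold are our assumptions, not the paper's implementation:

```python
from collections import defaultdict

def build_ssid_db(probes):
    """probes: iterable of (mac, ssid) pairs sniffed from probe requests.
    Returns a map from MAC address to the set of SSIDs it has probed for."""
    db = defaultdict(set)
    for mac, ssid in probes:
        db[mac].add(ssid)
    return db

def social_links(db, min_shared=1):
    """Link two devices if they share at least `min_shared` SSIDs,
    i.e. both have previously connected to the same networks."""
    macs = sorted(db)
    links = []
    for i, a in enumerate(macs):
        for b in macs[i + 1:]:
            shared = db[a] & db[b]
            if len(shared) >= min_shared:
                links.append((a, b, shared))
    return links
```

Raising `min_shared` (or down-weighting very common SSIDs such as carrier hotspots) reduces spurious links between strangers.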
1606.03864
2428505524
Many important NLP problems can be posed as dual-sequence or sequence-to-sequence modeling tasks. Recent advances in building end-to-end neural architectures have been highly successful in solving such tasks. In this work we propose a new architecture for dual-sequence modeling that is based on associative memory. We derive AM-RNNs, a recurrent associative memory (AM) which augments generic recurrent neural networks (RNN). This architecture is extended to the Dual AM-RNN which operates on two AMs at once. Our models achieve very competitive results on textual entailment. A qualitative analysis demonstrates that long range dependencies between source and target-sequence can be bridged effectively using Dual AM-RNNs. However, an initial experiment on auto-encoding reveals that these benefits are not exploited by the system when learning to solve sequence-to-sequence tasks which indicates that additional supervision or regularization is needed.
Augmenting RNNs with memory is not novel. Neural Turing Machines augment RNNs with external memory that can be written to and read from. The memory contains a predefined number of slots and is addressable via content or position shifts. Neural Turing Machines inspired subsequent work on using different kinds of external memory, like queues or stacks @cite_12 . Operations on these memories are calculated via a recurrent controller which is decoupled from the memory, whereas AM-RNNs apply the RNN function directly to the content of the associative memory.
{ "cite_N": [ "@cite_12" ], "mid": [ "1602017060" ], "abstract": [ "Recently, strong results have been demonstrated by Deep Recurrent Neural Networks on natural language transduction problems. In this paper we explore the representational power of these models using synthetic grammars designed to exhibit phenomena similar to those found in real transduction problems such as machine translation. These experiments lead us to propose new memory-based recurrent networks that implement continuously differentiable analogues of traditional data structures such as Stacks, Queues, and DeQues. We show that these architectures exhibit superior generalisation performance to Deep RNNs and are often able to learn the underlying generating algorithms in our transduction experiments." ] }
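The content-based addressing mentioned above can be sketched as a soft read: cosine similarity between a key and every memory slot, sharpened and normalised into attention weights. This simplification omits the position-shift part of the full Neural Turing Machine addressing mechanism:

```python
import numpy as np

def content_address(memory, key, beta=5.0):
    """Soft content-based read from an external memory.
    memory: (slots, width) array; key: (width,) vector; beta: sharpening.
    Returns the read vector (weighted sum of slots) and the weights."""
    M = np.asarray(memory, dtype=float)
    k = np.asarray(key, dtype=float)
    sims = M @ k / (np.linalg.norm(M, axis=1) * np.linalg.norm(k) + 1e-8)
    w = np.exp(beta * sims)
    w /= w.sum()
    return w @ M, w
```

Because every step is differentiable, the controller that emits the key can be trained end-to-end with the rest of the network.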
1606.03864
2428505524
Many important NLP problems can be posed as dual-sequence or sequence-to-sequence modeling tasks. Recent advances in building end-to-end neural architectures have been highly successful in solving such tasks. In this work we propose a new architecture for dual-sequence modeling that is based on associative memory. We derive AM-RNNs, a recurrent associative memory (AM) which augments generic recurrent neural networks (RNN). This architecture is extended to the Dual AM-RNN which operates on two AMs at once. Our models achieve very competitive results on textual entailment. A qualitative analysis demonstrates that long range dependencies between source and target-sequence can be bridged effectively using Dual AM-RNNs. However, an initial experiment on auto-encoding reveals that these benefits are not exploited by the system when learning to solve sequence-to-sequence tasks which indicates that additional supervision or regularization is needed.
AM-RNNs also have an interesting connection to LSTM-Networks @cite_15 , which recently demonstrated impressive results on various text modeling tasks. LSTM-Networks (LSTMN) select a previous hidden state via attention on a memory tape of past states (intra-attention), as opposed to using the hidden state of the previous time step. The same idea is implicitly present in our architecture by retrieving a previous state via a computed key from the associative memory (Equation ). The main difference lies in the memory architecture used. We use a fixed-size memory array, in contrast to a dynamically growing memory tape which requires growing computational and memory resources. The drawback of our approach, however, is the potential loss of explicit memories due to retrieval noise or overwriting.
{ "cite_N": [ "@cite_15" ], "mid": [ "2952191002" ], "abstract": [ "Machine reading, the automatic understanding of text, remains a challenging task of great value for NLP applications. We propose a machine reader which processes text incrementally from left to right, while linking the current word to previous words stored in memory and implicitly discovering lexical dependencies facilitating understanding. The reader is equipped with a Long Short-Term Memory architecture, which differs from previous work in that it has a memory tape (instead of a memory cell) for adaptively storing past information without severe information compression. We also integrate our reader with a new attention mechanism in encoder-decoder architecture. Experiments on language modeling, sentiment analysis, and natural language inference show that our model matches or outperforms the state of the art." ] }
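One classical way to realise a fixed-size associative memory of the kind discussed above is holographic binding by circular convolution: key-value pairs are superposed into a single array, and retrieval with a key recovers its value up to noise from the other stored pairs, the overwriting/noise trade-off noted above. This is a generic sketch, not necessarily the paper's exact construction:

```python
import numpy as np

def bind(key, value):
    """Store a key-value association by circular convolution, computed
    with FFTs. Multiple bound pairs can be superposed by addition."""
    return np.real(np.fft.ifft(np.fft.fft(key) * np.fft.fft(value)))

def retrieve(memory, key):
    """Approximate retrieval by circular correlation with the key.
    Other stored pairs contribute retrieval noise."""
    return np.real(np.fft.ifft(np.fft.fft(memory) * np.conj(np.fft.fft(key))))
```

The memory footprint stays constant regardless of how many pairs are stored, which is the appeal over a growing memory tape, at the cost of noisier recall as the memory fills.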
1606.03561
2430631925
Social media has now become the de facto information source on real world events. The challenge, however, due to the high volume and velocity nature of social media streams, is in how to follow all posts pertaining to a given event over time, a task referred to as story detection. Moreover, there are often several different stories pertaining to a given event, which we refer to as sub-stories and the corresponding task of their automatic detection as sub-story detection. This paper proposes hierarchical Dirichlet processes (HDP), a probabilistic topic model, as an effective method for automatic sub-story detection. HDP can learn sub-topics associated with sub-stories which enables it to handle subtle variations in sub-stories. It is compared with state- of-the-art story detection approaches based on locality sensitive hashing and spectral clustering. We demonstrate the superior performance of HDP for sub-story detection on real world Twitter data sets using various evaluation measures. The ability of HDP to learn sub-topics helps it to recall the sub- stories with high precision. Another contribution of this paper is in demonstrating that the conversational structures within the Twitter stream can be used to improve sub-story detection performance significantly.
A number of techniques have been employed for detecting and tracking stories in social media streams @cite_31 . Story detection is typically done by extending traditional clustering algorithms to a streaming data setting @cite_18 . A comprehensive survey of the literature on story detection techniques in Twitter data is given in @cite_39 .
{ "cite_N": [ "@cite_31", "@cite_18", "@cite_39" ], "mid": [ "", "2031655803", "1890727290" ], "abstract": [ "", "The large amount of text data which are continuously produced over time in a variety of large scale applications such as social networks results in massive streams of data. Typically massive text streams are created by very large scale interactions of individuals, or by structured creations of particular kinds of content by dedicated organizations. An example in the latter category would be the massive text streams created by news-wire services. Such text streams provide unprecedented challenges to data mining algorithms from an efficiency perspective. In this paper, we review text stream mining algorithms for a wide variety of problems in data mining such as clustering, classification and topic modeling. A recent challenge arises in the context of social streams, which are generated by large social networks such as Twitter. We also discuss a number of future challenges in this area of research.", "Twitter is among the fastest-growing microblogging and online social networking services. Messages posted on Twitter (tweets) have been reporting everything from daily life stories to the latest local and global news and events. Monitoring and analyzing this rich and continuous user-generated content can yield unprecedentedly valuable information, enabling users and organizations to acquire actionable knowledge. This article provides a survey of techniques for event detection from Twitter streams. These techniques aim at finding real-world occurrences that unfold over space and time. In contrast to conventional media, event detection from Twitter streams poses new challenges. Twitter streams contain large amounts of meaningless messages and polluted content, which negatively affect the detection performance. In addition, traditional text mining techniques are not suitable, because of the short length of tweets, the large number of spelling and grammatical errors, and the frequent use of informal and mixed language. Event detection techniques presented in literature address these issues by adapting techniques from various fields to the uniqueness of Twitter. This article classifies these techniques according to the event type, detection task, and detection method and discusses commonly used features. Finally, it highlights the need for public benchmarks to evaluate the performance of different detection approaches and various features." ] }
1606.03561
2430631925
Social media has now become the de facto information source on real world events. The challenge, however, due to the high volume and velocity nature of social media streams, is in how to follow all posts pertaining to a given event over time, a task referred to as story detection. Moreover, there are often several different stories pertaining to a given event, which we refer to as sub-stories and the corresponding task of their automatic detection as sub-story detection. This paper proposes hierarchical Dirichlet processes (HDP), a probabilistic topic model, as an effective method for automatic sub-story detection. HDP can learn sub-topics associated with sub-stories which enables it to handle subtle variations in sub-stories. It is compared with state- of-the-art story detection approaches based on locality sensitive hashing and spectral clustering. We demonstrate the superior performance of HDP for sub-story detection on real world Twitter data sets using various evaluation measures. The ability of HDP to learn sub-topics helps it to recall the sub- stories with high precision. Another contribution of this paper is in demonstrating that the conversational structures within the Twitter stream can be used to improve sub-story detection performance significantly.
Story detection in Twitter for a particular topic such as 'earthquake' is studied in @cite_3 . Becker [2011] uses an online clustering algorithm to detect stories and distinguishes real vs. non-real stories using a classification method. Twevent @cite_5 is a story detection approach which clusters bursty segments in Twitter data. A fast and efficient approach based on locality sensitive hashing (LSH) was first used in @cite_29 to detect the emergence of new stories (first story detection) in Twitter. Locality sensitive hashing reduces the computational complexity associated with nearest neighbor search and detects clusters of documents in constant space and time. Later, the same authors extended this approach to counter lexical variations in documents by using paraphrases @cite_35 . An alternative approach that detects new events by storing the contents of already seen documents in a single hash table is proposed in @cite_30 . Further, LSH-based techniques have also been developed to handle topic streams emerging in Twitter @cite_25 . Here, topics are also hashed into a bucket in addition to the tweets.
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_29", "@cite_3", "@cite_5", "@cite_25" ], "mid": [ "2250752175", "", "", "2124499489", "2168400688", "2250433666" ], "abstract": [ "First Story Detection is hard because the most accurate systems become progressively slower with each document processed. We present a novel approach to FSD, which operates in constant time space and scales to very high volume streams. We show that when computing novelty over a large dataset of tweets, our method performs 192 times faster than a state-of-the-art baseline without sacrificing accuracy. Our method is capable of performing FSD on the full Twitter stream on a single core of modest hardware.", "", "", "Twitter, a popular microblogging service, has received much attention recently. An important characteristic of Twitter is its real-time nature. For example, when an earthquake occurs, people make many Twitter posts (tweets) related to the earthquake, which enables detection of earthquake occurrence promptly, simply by observing the tweets. As described in this paper, we investigate the real-time interaction of events such as earthquakes in Twitter and propose an algorithm to monitor tweets and to detect a target event. To detect a target event, we devise a classifier of tweets based on features such as the keywords in a tweet, the number of words, and their context. Subsequently, we produce a probabilistic spatiotemporal model for the target event that can find the center and the trajectory of the event location. We consider each Twitter user as a sensor and apply Kalman filtering and particle filtering, which are widely used for location estimation in ubiquitous pervasive computing. The particle filter works better than other comparable methods for estimating the centers of earthquakes and the trajectories of typhoons. As an application, we construct an earthquake reporting system in Japan. Because of the numerous earthquakes and the large number of Twitter users throughout the country, we can detect an earthquake with high probability (96% of earthquakes of Japan Meteorological Agency (JMA) seismic intensity scale 3 or more are detected) merely by monitoring tweets. Our system detects earthquakes promptly and sends e-mails to registered users. Notification is delivered much faster than the announcements that are broadcast by the JMA.", "Event detection from tweets is an important task to understand the current events topics attracting a large number of common users. However, the unique characteristics of tweets (e.g. short and noisy content, diverse and fast changing topics, and large data volume) make event detection a challenging task. Most existing techniques proposed for well written documents (e.g. news articles) cannot be directly adopted. In this paper, we propose a segment-based event detection system for tweets, called Twevent. Twevent first detects bursty tweet segments as event segments and then clusters the event segments into events considering both their frequency distribution and content similarity. More specifically, each tweet is split into non-overlapping segments (i.e. phrases possibly refer to named entities or semantically meaningful information units). The bursty segments are identified within a fixed time window based on their frequency patterns, and each bursty segment is described by the set of tweets containing the segment published within that time window. The similarity between a pair of bursty segments is computed using their associated tweets. After clustering bursty segments into candidate events, Wikipedia is exploited to identify the realistic events and to derive the most newsworthy segments to describe the identified events. We evaluate Twevent and compare it with the state-of-the-art method using 4.3 million tweets published by Singapore-based users in June 2010. In our experiments, Twevent outperforms the state-of-the-art method by a large margin in terms of both precision and recall. More importantly, the events detected by Twevent can be easily interpreted with little background knowledge because of the newsworthy segments. We also show that Twevent is efficient and scalable, leading to a desirable solution for event detection from tweets.", "Tracking topics on social media streams is non-trivial as the number of topics mentioned grows without bound. This complexity is compounded when we want to track such topics against other fast moving streams. We go beyond traditional small scale topic tracking and consider a stream of topics against another document stream. We introduce two tracking approaches which are fully applicable to true streaming environments. When tracking 4.4 million topics against 52 million documents in constant time and space, we demonstrate that counter to expectations, simple single-pass clustering can outperform locality sensitive hashing for nearest neighbour search on streams." ] }
1606.03561
2430631925
Social media has now become the de facto information source on real world events. The challenge, however, due to the high volume and velocity nature of social media streams, is in how to follow all posts pertaining to a given event over time, a task referred to as story detection. Moreover, there are often several different stories pertaining to a given event, which we refer to as sub-stories and the corresponding task of their automatic detection as sub-story detection. This paper proposes hierarchical Dirichlet processes (HDP), a probabilistic topic model, as an effective method for automatic sub-story detection. HDP can learn sub-topics associated with sub-stories which enables it to handle subtle variations in sub-stories. It is compared with state- of-the-art story detection approaches based on locality sensitive hashing and spectral clustering. We demonstrate the superior performance of HDP for sub-story detection on real world Twitter data sets using various evaluation measures. The ability of HDP to learn sub-topics helps it to recall the sub- stories with high precision. Another contribution of this paper is in demonstrating that the conversational structures within the Twitter stream can be used to improve sub-story detection performance significantly.
Topic models have also been used to detect stories in Twitter; for instance, latent Dirichlet allocation (LDA) @cite_10 has been applied to detect trending topics. A non-parametric topic model based on the Dirichlet process is used in @cite_1 to detect newsworthy stories in Twitter, where topics are shared among tweets from consecutive time periods. TopicSketch @cite_12 uses a novel sketch-based topic model to detect bursty topics from millions of tweets.
{ "cite_N": [ "@cite_1", "@cite_10", "@cite_12" ], "mid": [ "", "2137553870", "2011949237" ], "abstract": [ "", "Microblogs such as Twitter reflect the general public's reactions to major events. Bursty topics from microblogs reveal what events have attracted the most online attention. Although bursty event detection from text streams has been studied before, previous work may not be suitable for microblogs because compared with other text streams such as news articles and scientific publications, microblog posts are particularly diverse and noisy. To find topics that have bursty patterns on microblogs, we propose a topic model that simultaneously captures two observations: (1) posts published around the same time are more likely to have the same topic, and (2) posts published by the same user are more likely to have the same topic. The former helps find event-driven posts while the latter helps identify and filter out \"personal\" posts. Our experiments on a large Twitter dataset show that there are more meaningful and unique bursty topics in the top-ranked results returned by our model than an LDA baseline and two degenerate variations of our model. We also show some case studies that demonstrate the importance of considering both the temporal information and users' personal interests for bursty topic detection from microblogs.", "Twitter has become one of the largest platforms for users around the world to share anything happening around them with friends and beyond. A bursty topic in Twitter is one that triggers a surge of relevant tweets within a short time, which often reflects important events of mass interest. How to leverage Twitter for early detection of bursty topics has therefore become an important research problem with immense practical value. Despite the wealth of research work on topic modeling and analysis in Twitter, it remains a huge challenge to detect bursty topics in real-time. As existing methods can hardly scale to handle the task with the tweet stream in real-time, we propose in this paper TopicSketch, a novel sketch-based topic model together with a set of techniques to achieve real-time detection. We evaluate our solution on a tweet stream with over 30 million tweets. Our experiment results show both efficiency and effectiveness of our approach. Especially it is also demonstrated that TopicSketch can potentially handle hundreds of millions of tweets per day, which is close to the total number of daily tweets in Twitter, and present bursty events at a finer granularity." ] }
1606.03634
2444096492
A backbone of a boolean formula @math is a collection @math of its variables for which there is a unique partial assignment @math such that @math is satisfiable [MZK+99,WGS03]. This paper studies the nontransparency of backbones. We show that, under the widely believed assumption that integer factoring is hard, there exist sets of boolean formulas that have obvious, nontrivial backbones yet finding the values, @math , of those backbones is intractable. We also show that, under the same assumption, there exist sets of boolean formulas that obviously have large backbones yet producing such a backbone @math is intractable. Further, we show that if integer factoring is not merely worst-case hard but is frequently hard, as is widely believed, then the frequency of hardness in our two results is not too much less than that frequency.
Even in the theoretical computer science world, where Borodin and Demers's work is set, that work has been used only very rarely. In particular, it has been used to obtain characterizations regarding unambiguous computation @cite_6 , and Rothe and his collaborators have used it in various contexts to study the complexity of certificates @cite_11 @cite_10 ; see also and Valiant .
{ "cite_N": [ "@cite_10", "@cite_6", "@cite_11" ], "mid": [ "2410312262", "2100570449", "" ], "abstract": [ "This habilitation thesis studies the structure and properties of complexity classes such as P and NP, in particular with regard to: certificate complexity, one-way functions, heuristics against NP-completeness, and counting complexity. Regarding the last point, it specifically investigates: (a) the complexity of counting properties of circuits, (b) separations of counting classes with immunity, and (c) the complexity of counting the solutions of 'tally NP problems'.", "This paper develops techniques for studying complexity classes that are not covered by known recursive enumerations of machines. Often, counting classes, probabilistic classes, and intersection classes lack such enumerations. Concentrating on the counting class UP, we show that there are relativizations for which UP^A has no complete languages and other relativizations for which P^B ≠ UP^B ≠ NP^B and UP^B has complete languages. Among other results we show that P ≠ UP if and only if there exists a set S in P of Boolean formulas with at most one satisfying assignment such that S ∩ SAT is not in P. P ≠ UP ∩ coUP if and only if there exists a set S in P of uniquely satisfiable Boolean formulas such that no polynomial-time machine can compute the solutions for the formulas in S. If UP has complete languages then there exists a set R in P of Boolean formulas with at most one satisfying assignment so that SAT ∩ R is complete for UP. Finally, we indicate the wide applicability of our techniques to counting and probabilistic classes by using them to examine the probabilistic class BPP. There is a relativized world where BPP^A has no complete languages. If BPP has complete languages then it has a complete language of the form B ∩ MAJORITY, where B ∈ P and MAJORITY = {f | f is true for at least half of all assignments} is the canonical PP-complete set.", "" ] }
1606.03634
2444096492
A backbone of a boolean formula @math is a collection @math of its variables for which there is a unique partial assignment @math such that @math is satisfiable [MZK+99,WGS03]. This paper studies the nontransparency of backbones. We show that, under the widely believed assumption that integer factoring is hard, there exist sets of boolean formulas that have obvious, nontrivial backbones yet finding the values, @math , of those backbones is intractable. We also show that, under the same assumption, there exist sets of boolean formulas that obviously have large backbones yet producing such a backbone @math is intractable. Further, we show that if integer factoring is not merely worst-case hard but is frequently hard, as is widely believed, then the frequency of hardness in our two results is not too much less than that frequency.
There has been just one paper that previously has sought to bring the focus of this line to a topic of interest in AI. Although it appeared in a theoretical computer science venue, the work of Hemaspaandra, Hemaspaandra, and Menton shows that some problems from computational social choice theory, a subarea of multiagent systems, have the property that if @math then their search versions are not polynomial-time Turing reducible to their decision problems---a rare behavior among the most familiar seemingly hard sets in computer science, since so-called self-reducibility @cite_8 is known to preclude that possibility for most standard NP-complete problems. The key issue that that 2013 paper left open is whether the type of techniques it used, descended from Borodin and Demers, might be relevant anywhere else in AI, or whether its results were a one-shot oddity. The present paper in effect argues that the former is the case. Backbones are a topic important in AI and relevant to @math solvers, and this paper shows that the inspiration of the line of work initiated by Borodin and Demers can be used to establish the opacity of backbones.
{ "cite_N": [ "@cite_8" ], "mid": [ "1924765640" ], "abstract": [ "Most theoretical definitions about the complexity of manipulating elections focus on the decision problem of recognizing which instances can be successfully manipulated, rather than the search problem of finding the successful manipulative actions. Since the latter is a far more natural goal for manipulators, that definitional focus may be misguided if these two complexities can differ. Our main result is that they probably do differ: If integer factoring is hard, then for election manipulation, election bribery, and some types of election control, there are election systems for which recognizing which instances can be successfully manipulated is in polynomial time but producing the successful manipulations cannot be done in polynomial time." ] }
1606.03784
2438911840
We describe MITRE's submission to the SemEval-2016 Task 6, Detecting Stance in Tweets. This effort achieved the top score in Task A on supervised stance detection, producing an average F1 score of 67.8 when assessing whether a tweet author was in favor or against a topic. We employed a recurrent neural network initialized with features learned via distant supervision on two large unlabeled datasets. We trained embeddings of words and phrases with the word2vec skip-gram method, then used those features to learn sentence representations via a hashtag prediction auxiliary task. These sentence vectors were then fine-tuned for stance detection on several hundred labeled examples. The result was a high performing system that used transfer learning to maximize the value of the available training data.
Deep neural networks trained for image classification can be improved when initialized with features learned from distant tasks, for example @cite_6 . In natural language processing domains, sentence representations learned on unlabeled data have been shown to be useful across a variety of classification and semantic similarity tasks @cite_2 @cite_9 . A hashtag prediction task has also been used to learn sentence representations that improve a downstream content-based recommendation system.
{ "cite_N": [ "@cite_9", "@cite_6", "@cite_2" ], "mid": [ "2271328876", "2949667497", "" ], "abstract": [ "Unsupervised methods for learning distributed representations of words are ubiquitous in today's NLP research, but far less is known about the best ways to learn distributed phrase or sentence representations from unlabelled data. This paper is a systematic comparison of models that learn such representations. We find that the optimal approach depends critically on the intended application. Deeper, more complex models are preferable for representations to be used in supervised systems, but shallow log-linear models work best for building representation spaces that can be decoded with simple spatial distance metrics. We also propose two new unsupervised representation-learning objectives designed to optimise the trade-off between training time, domain portability and performance.", "Many deep neural networks trained on natural images exhibit a curious phenomenon in common: on the first layer they learn features similar to Gabor filters and color blobs. Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks. Features must eventually transition from general to specific by the last layer of the network, but this transition has not been studied extensively. In this paper we experimentally quantify the generality versus specificity of neurons in each layer of a deep convolutional neural network and report a few surprising results. Transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task, which was expected, and (2) optimization difficulties related to splitting networks between co-adapted neurons, which was not expected. In an example network trained on ImageNet, we demonstrate that either of these two issues may dominate, depending on whether features are transferred from the bottom, middle, or top of the network. We also document that the transferability of features decreases as the distance between the base task and target task increases, but that transferring features even from distant tasks can be better than using random features. A final surprising result is that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset.", "" ] }
1606.03784
2438911840
We describe MITRE's submission to the SemEval-2016 Task 6, Detecting Stance in Tweets. This effort achieved the top score in Task A on supervised stance detection, producing an average F1 score of 67.8 when assessing whether a tweet author was in favor or against a topic. We employed a recurrent neural network initialized with features learned via distant supervision on two large unlabeled datasets. We trained embeddings of words and phrases with the word2vec skip-gram method, then used those features to learn sentence representations via a hashtag prediction auxiliary task. These sentence vectors were then fine-tuned for stance detection on several hundred labeled examples. The result was a high performing system that used transfer learning to maximize the value of the available training data.
There is a significant body of previous work in stance detection @cite_10 , often with a focus on analysis of congressional debates or online forums @cite_5 @cite_11 @cite_7 @cite_12 , in which discourse and dialogue features offer clues for identifying oppositional speakers. Rajadesingan and Liu study stance detection in Twitter conversations and use a retweet-based label propagation approach. The objective of this work differs in that we attempt to detect an author's stance purely from analysis of the text of a single message.
{ "cite_N": [ "@cite_7", "@cite_5", "@cite_10", "@cite_12", "@cite_11" ], "mid": [ "1563284214", "1967807490", "2340372642", "", "2150850338" ], "abstract": [ "We propose a method for the task of identifying the general positions of users in online debates, i.e., support or oppose the main topic of an online debate, by exploiting local information in their remarks within the debate. An online debate is a forum where each user post an opinion on a particular topic while other users state their positions by posting their remarks within the debate. The supporting or opposing remarks are made by directly replying to the opinion, or indirectly to other remarks (to express local agreement or disagreement), which makes the task of identifying users' general positions difficult. A prior study has shown that a link-based method, which completely ignores the content of the remarks, can achieve higher accuracy for the identification task than methods based solely on the contents of the remarks. In this paper, we show that utilizing the textual content of the remarks into the link-based method can yield higher accuracy in the identification task.", "We investigate whether one can determine from the transcripts of U.S. Congressional floor debates whether the speeches represent support of or opposition to proposed legislation. To address this problem, we exploit the fact that these speeches occur as part of a discussion; this allows us to use sources of information regarding relationships between discourse segments, such as whether a given utterance indicates agreement with the opinion expressed by another. We find that the incorporation of such information yields substantial improvements over classifying speeches in isolation.", "Sentiment analysis is the task of automatically determining from text the attitude, emotion, or some other affectual state of the author. This chapter summarizes the diverse landscape of tasks and applications associated with sentiment analysis. We outline key challenges stemming from the complexity and subtlety of language use, the prevalence of creative and non-standard language, and the lack of paralinguistic information, such as tone and stress markers. We describe automatic systems and datasets commonly used in sentiment analysis. We summarize several manual and automatic approaches to creating valence- and emotion-association lexicons. We also discuss preliminary approaches for sentiment composition (how smaller units of text combine to express sentiment) and approaches for detecting sentiment in figurative and metaphoric language—these are the areas where we expect to see significant work in the near future.", "", "This paper presents an unsupervised opinion analysis method for debate-side classification, i.e., recognizing which stance a person is taking in an online debate. In order to handle the complexities of this genre, we mine the web to learn associations that are indicative of opinion stances in debates. We combine this knowledge with discourse information, and formulate the debate side classification task as an Integer Linear Programming problem. Our results show that our method is substantially better than challenging baseline methods." ] }
1606.03669
2399028358
Sky cloud images captured by ground-based cameras (a.k.a. whole sky imagers) are increasingly used nowadays because of their applications in a number of fields, including climate modeling, weather prediction, renewable energy generation, and satellite communications. Due to the wide variety of cloud types and lighting conditions in such images, accurate and robust segmentation of clouds is challenging. In this paper, we present a supervised segmentation framework for ground-based sky cloud images based on a systematic analysis of different color spaces and components, using partial least squares (PLS) regression. Unlike other state-of-the-art methods, our proposed approach is entirely learning-based and does not require any manually-defined parameters. In addition, we release the Singapore Whole Sky IMaging SEGmentation Database (SWIMSEG), a large database of annotated sky cloud images, to the research community.
As color is the most discriminating feature in sky cloud images, most works in the literature use color for cloud segmentation. @cite_4 showed that the ratio of the red and blue channels of the RGB color space is a good candidate for segmentation and tuned corresponding thresholds to create binary masks. @cite_18 exploited the difference of the red and blue channels for successful detection and subsequent labeling of pixels. @cite_24 also used the difference of the red and blue channels in their superpixel-based cloud segmentation framework. @cite_14 used the Saturation (S) channel for calculating cloud coverage. Mantelli-Neto @cite_22 investigated the locus of cloud pixels in the RGB color model. @cite_21 proposed cloud detection using an adaptive thresholding technique on the normalized blue/red channel ratio. @cite_25 proposed a cloud detection framework using superpixel classification of image features.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_4", "@cite_22", "@cite_21", "@cite_24", "@cite_25" ], "mid": [ "", "2098554604", "2062583931", "", "2031640526", "2077095720", "1533693043" ], "abstract": [ "", "Abstract This work describes the development of a simple method of field estimating the sky cloud coverage percentage for several applications at the Brazilian Antarctic Station, Ferraz (62°05′S, 58°23.5′W). The database of this method was acquired by a digital color camera in the visible range of the spectrum. A new algorithm was developed to classify each pixel according to a criteria decision process. The information on the pixel contamination by clouds was obtained from the saturation component of the intensity, hue, and saturation space (IHS). For simplicity, the images were acquired with a limited field of view of 36° pointing to the camera’s zenith to prevent direct sunlight from reaching the internal charge-coupled device (CCD) on the camera. For a priori–classified clear-sky images, the accuracy of the method was superior to 94 . For overcast-sky conditions, the corresponding accuracy was larger than 99 . A comparison test was performed with two human observers and our method. The results for the...", "A discussion is presented of daytime sky imaging and techniques that may be applied to the analysis of full-color sky images to infer cloud macrophysical properties. Descriptions of two different types of skyimaging systems developed by the authors are presented, one of which has been developed into a commercially available instrument. Retrievals of fractional sky cover from automated processing methods are compared to human retrievals, both from direct observations and visual analyses of sky images. Although some uncertainty exists in fractional sky cover retrievals from sky images, this uncertainty is no greater than that attached to human observations for the commercially available sky-imager retrievals. 
Thus, the application of automatic digital image processing techniques on sky images is a useful method to complement, or even replace, traditional human observations of sky cover and, potentially, cloud type. Additionally, the possibilities for inferring other cloud parameters such as cloud brokenness and solar obstruction further enhance the usefulness of sky imagers.", "", "AbstractCloud detection is the precondition for deriving other information (e.g., cloud cover) in ground-based sky imager applications. This paper puts forward an effective cloud detection approach, the Hybrid Thresholding Algorithm (HYTA) that fully exploits the benefits of the combination of fixed and adaptive thresholding methods. First, HYTA transforms an input color cloud image into a normalized blue red channel ratio image that can keep a distinct contrast, even with noise and outliers. Then, HYTA identifies the ratio image as either unimodal or bimodal according to its standard deviation, and the unimodal and bimodal images are handled by fixed and minimum cross entropy (MCE) thresholding algorithms, respectively. The experimental results demonstrate that HYTA shows an accuracy of 88.53 , which is far higher than those of either fixed or MCE thresholding alone. Moreover, HYTA is also verified to outperform other state-of-the-art cloud detection approaches.", "Cloud detection plays an essential role in meteorological research and has received considerable attention in recent years. However, this issue is particularly challenging due to the diverse characteristics of clouds. In this letter, a novel algorithm based on superpixel segmentation (SPS) is proposed for cloud detection. In our proposed strategy, a series of superpixels could be obtained adaptively by SPS algorithm according to the characteristics of clouds. We first calculate a local threshold for each superpixel and then determine a threshold matrix for the whole image. 
Finally, cloud can be detected by comparing with the obtained threshold matrix. Experimental results show that our proposed algorithm achieves better performance than the current cloud detection algorithms.", "Automatic cloud extraction from satellite imagery is an important task for many applications in remote sensing. Humans can easily identify various clouds from satellite images based on the visual features of cloud. In this study, a method of automatic cloud detection is proposed based on object classification of image features. An image is first segmented into superpixels so that the descriptor of each superpixel can be computed to form a feature vector for classification. The support vector machine algorithm is then applied to discriminate cloud and noncloud regions. Thereafter, the GrabCut algorithm is used to extract more accurate cloud regions. The key of the method is to deal with the highly varying patterns of clouds. The bag-of-words (BOW) model is used to construct the compact feature vectors from densely extracted local features, such as dense scale-invariant feature transform (SIFT). The algorithm is tested using 101 RapidEye and 86 Landsat images with many cloud patterns. These images achieve 89.2 of precision, 87.8 of recall for RapidEye, 85.8 of precision, and 83.9 of recall for Landsat. The experiments show that the method is insensitive to the number of codewords in the codebook construction of the BOW." ] }
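The red/blue color-channel cues surveyed in this record can be sketched in a few lines. This is a minimal illustration only; the threshold values below are hypothetical placeholders, not the tuned values of any cited work:

```python
import numpy as np

def cloud_mask_rb_ratio(rgb, threshold=0.9):
    """Binary cloud mask from the red/blue channel ratio.

    Cloud pixels scatter red and blue roughly equally (ratio near 1),
    while clear sky is predominantly blue (ratio well below 1).
    `threshold` is an illustrative placeholder value.
    """
    rgb = rgb.astype(np.float64)
    r, b = rgb[..., 0], rgb[..., 2]
    ratio = r / (b + 1e-6)          # avoid division by zero
    return ratio > threshold        # True = cloud

def cloud_mask_norm_br(rgb, threshold=0.05):
    """Normalized (b - r) / (b + r); values near zero indicate cloud."""
    rgb = rgb.astype(np.float64)
    r, b = rgb[..., 0], rgb[..., 2]
    norm = (b - r) / (b + r + 1e-6)
    return norm < threshold
```

In practice the cited works tune such thresholds per camera and sky condition, or replace the fixed threshold with an adaptive one computed from the ratio image itself (as in the HYTA-style approach quoted above).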
1606.03719
1983466667
Semantic mapping is the incremental process of “mapping” relevant information of the world (i.e., spatial information, temporal events, agents and actions) to a formal description supported by a reasoning engine. Current research focuses on learning the semantic of environments based on their spatial location, geometry and appearance. Many methods to tackle this problem have been proposed, but the lack of a uniform representation, as well as standard benchmarking suites, prevents their direct comparison. In this paper, we propose a standardization in the representation of semantic maps, by defining an easily extensible formalism to be used on top of metric maps of the environments. Based on this, we describe the procedure to build a dataset (based on real sensor data) for benchmarking semantic mapping techniques, also hypothesizing some possible evaluation metrics. Nevertheless, by providing a tool for the construction of a semantic map ground truth, we aim at the contribution of the scientific community in acquiring data for populating the dataset.
The other problem, which emerges as a consequence of the variety of representations, is the absence of a standard, suitable validation and evaluation procedure. In addition to the previous examples, Zender et al. @cite_16 generate a representation ranging from sensor-based maps to a conceptual abstraction, encoded in an OWL-DL ontology of an indoor office environment. However, except for individual modules, their experimental evaluation is mainly qualitative. Pronobis and Jensfelt @cite_7 , instead, represent a conceptual map as a probabilistic chain graph model and evaluate their method by comparing the robot's belief of being in a certain location against the ground truth. Gunther et al. @cite_10 perform a form of semantics-aided object classification based on an OWL-DL knowledge base; their evaluation is based on the rate of correctly classified objects. Finally, Handa et al. @cite_8 propose a synthetic dataset, which could eventually be extended with semantic knowledge and used as a ground truth for comparing semantic mapping methods. However, even when noise is introduced, fictitious data never reflect a real-world acquisition.
{ "cite_N": [ "@cite_16", "@cite_10", "@cite_7", "@cite_8" ], "mid": [ "2042452952", "2035238312", "2000321478", "2058535340" ], "abstract": [ "We present an approach for creating conceptual representations of human-made indoor environments using mobile robots. The concepts refer to spatial and functional properties of typical indoor environments. Following different findings in spatial cognition, our model is composed of layers representing maps at different levels of abstraction. The complete system is integrated in a mobile robot endowed with laser and vision sensors for place and object recognition. The system also incorporates a linguistic framework that actively supports the map acquisition process, and which is used for situated dialogue. Finally, we discuss the capabilities of the integrated system.", "We present an approach to create a semantic map of an indoor environment, based on a series of 3D point clouds captured by a mobile robot using a Kinect camera. The proposed system reconstructs the surfaces in the point clouds, detects different types of furniture and estimates their poses. The result is a consistent mesh representation of the environment enriched by CAD models corresponding to the detected pieces of furniture. We evaluate our approach on two datasets totaling over 800 frames directly on each individual frame.", "This paper presents a probabilistic framework combining heterogeneous, uncertain, information such as object observations, shape, size, appearance of rooms and human input for semantic mapping. It abstracts multi-modal sensory information and integrates it with conceptual common-sense knowledge in a fully probabilistic fashion. It relies on the concept of spatial properties which make the semantic map more descriptive, and the system more scalable and better adapted for human interaction. A probabilistic graphical model, a chaingraph, is used to represent the conceptual information and perform spatial reasoning. 
Experimental results from online system tests in a large unstructured office environment highlight the system's ability to infer semantic room categories, predict existence of objects and values of other spatial properties as well as reason about unexplored space.", "We introduce the Imperial College London and National University of Ireland Maynooth (ICL-NUIM) dataset for the evaluation of visual odometry, 3D reconstruction and SLAM algorithms that typically use RGB-D data. We present a collection of handheld RGB-D camera sequences within synthetically generated environments. RGB-D sequences with perfect ground truth poses are provided as well as a ground truth surface model that enables a method of quantitatively evaluating the final map or surface reconstruction accuracy. Care has been taken to simulate typically observed real-world artefacts in the synthetic imagery by modelling sensor noise in both RGB and depth data. While this dataset is useful for the evaluation of visual odometry and SLAM trajectory estimation, our main focus is on providing a method to benchmark the surface reconstruction accuracy which to date has been missing in the RGB-D community despite the plethora of ground truth RGB-D datasets available." ] }
1606.03628
2432484786
We propose to learn multiple local Mahalanobis distance metrics to perform k-nearest neighbor (kNN) classification of temporal sequences. Temporal sequences are first aligned by dynamic time warping (DTW); given the alignment path, similarity between two sequences is measured by the DTW distance, which is computed as the accumulated distance between matched temporal point pairs along the alignment path. Traditionally, Euclidean metric is used for distance computation between matched pairs, which ignores the data regularities and might not be optimal for applications at hand. Here we propose to learn multiple Mahalanobis metrics, such that DTW distance becomes the sum of Mahalanobis distances. We adapt the large margin nearest neighbor (LMNN) framework to our case, and formulate multiple metric learning as a linear programming problem. Extensive sequence classification results show that our proposed multiple metrics learning approach is effective, insensitive to the preceding alignment qualities, and reaches the state-of-the-art performances on UCR time series datasets.
Time series shapelets are introduced in @cite_16 : a shapelet is a time series subsequence (pattern) that is discriminative of class membership. The authors propose to enumerate all possible candidate subsequences, evaluate their quality using information gain, and build a decision tree classifier out of the top-ranked shapelets. Mining shapelets in their case amounts to searching for the more important subsequences while disregarding the less important ones. In the vision community, there are several related works @cite_13 @cite_6 @cite_20 , all of which are devoted to discovering mid-level visual patches from images. A mid-level visual patch is conceptually similar to a shapelet in time series: it is an image patch that is both representative and discriminative for scene categories. The works @cite_13 @cite_6 pose the discriminative patch search as a discriminative clustering process, in which they selectively choose important patches while discarding other common patches. We differ from the above works in that we never have to greedily select important subsequences; instead, we take all subsequences into account and automatically learn their importance through metric learning.
{ "cite_N": [ "@cite_20", "@cite_16", "@cite_13", "@cite_6" ], "mid": [ "2115628259", "2029438113", "1590510366", "2055132753" ], "abstract": [ "Recent work on mid-level visual representations aims to capture information at the level of complexity higher than typical \"visual words\", but lower than full-blown semantic objects. Several approaches [5,6,12,23] have been proposed to discover mid-level visual elements, that are both 1) representative, i.e., frequently occurring within a visual dataset, and 2) visually discriminative. However, the current approaches are rather ad hoc and difficult to analyze and evaluate. In this work, we pose visual element discovery as discriminative mode seeking, drawing connections to the the well-known and well-studied mean-shift algorithm [2, 1, 4, 8]. Given a weakly-labeled image collection, our method discovers visually-coherent patch clusters that are maximally discriminative with respect to the labels. One advantage of our formulation is that it requires only a single pass through the data. We also propose the Purity-Coverage plot as a principled way of experimentally analyzing and evaluating different visual discovery approaches, and compare our method against prior work on the Paris Street View dataset of [5]. We also evaluate our method on the task of scene classification, demonstrating state-of-the-art performance on the MIT Scene-67 dataset.", "Classification of time series has been attracting great interest over the past decade. Recent empirical evidence has strongly suggested that the simple nearest neighbor algorithm is very difficult to beat for most time series problems. While this may be considered good news, given the simplicity of implementing the nearest neighbor algorithm, there are some negative consequences of this. 
First, the nearest neighbor algorithm requires storing and searching the entire dataset, resulting in a time and space complexity that limits its applicability, especially on resource-limited sensors. Second, beyond mere classification accuracy, we often wish to gain some insight into the data. In this work we introduce a new time series primitive, time series shapelets, which addresses these limitations. Informally, shapelets are time series subsequences which are in some sense maximally representative of a class. As we shall show with extensive empirical evaluations in diverse domains, algorithms based on the time series shapelet primitives can be interpretable, more accurate and significantly faster than state-of-the-art classifiers.", "The goal of this paper is to discover a set of discriminative patches which can serve as a fully unsupervised mid-level visual representation. The desired patches need to satisfy two requirements: 1) to be representative, they need to occur frequently enough in the visual world; 2) to be discriminative, they need to be different enough from the rest of the visual world. The patches could correspond to parts, objects, \"visual phrases\", etc. but are not restricted to be any one of them. We pose this as an unsupervised discriminative clustering problem on a huge dataset of image patches. We use an iterative procedure which alternates between clustering and training discriminative classifiers, while applying careful cross-validation at each step to prevent overfitting. The paper experimentally demonstrates the effectiveness of discriminative patches as an unsupervised mid-level visual representation, suggesting that it could be used in place of visual words for many tasks. 
Furthermore, discriminative patches can also be used in a supervised regime, such as scene classification, where they demonstrate state-of-the-art performance on the MIT Indoor-67 dataset.", "Given a large repository of geotagged imagery, we seek to automatically find visual elements, e. g. windows, balconies, and street signs, that are most distinctive for a certain geo-spatial area, for example the city of Paris. This is a tremendously difficult task as the visual features distinguishing architectural elements of different places can be very subtle. In addition, we face a hard search problem: given all possible patches in all images, which of them are both frequently occurring and geographically informative? To address these issues, we propose to use a discriminative clustering approach able to take into account the weak geographic supervision. We show that geographically representative image elements can be discovered automatically from Google Street View imagery in a discriminative manner. We demonstrate that these elements are visually interpretable and perceptually geo-informative. The discovered visual elements can also support a variety of computational geography tasks, such as mapping architectural correspondences and influences within and across cities, finding representative elements at different geo-spatial scales, and geographically-informed image retrieval." ] }
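The shapelet scoring described in this record — measure a candidate subsequence by its minimum sliding-window distance to each series, then evaluate the induced split by information gain — can be sketched as follows. This is an illustrative toy version under simplified assumptions (no normalization, a given split point), not the cited implementation:

```python
import numpy as np

def shapelet_distance(series, shapelet):
    """Minimum Euclidean distance between `shapelet` and any
    equal-length subsequence of `series` (the usual shapelet distance)."""
    m = len(shapelet)
    return min(np.linalg.norm(series[i:i + m] - shapelet)
               for i in range(len(series) - m + 1))

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def info_gain(distances, labels, split):
    """Information gain of thresholding shapelet distances at `split`."""
    labels = np.asarray(labels)
    left, right = labels[distances <= split], labels[distances > split]
    if len(left) == 0 or len(right) == 0:
        return 0.0
    w = len(left) / len(labels)
    return entropy(labels) - (w * entropy(left) + (1 - w) * entropy(right))
```

The full method of @cite_16 additionally searches over all candidate subsequences and split points and prunes the search; only the scoring primitive is shown here.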
1606.03628
2432484786
We propose to learn multiple local Mahalanobis distance metrics to perform k-nearest neighbor (kNN) classification of temporal sequences. Temporal sequences are first aligned by dynamic time warping (DTW); given the alignment path, similarity between two sequences is measured by the DTW distance, which is computed as the accumulated distance between matched temporal point pairs along the alignment path. Traditionally, Euclidean metric is used for distance computation between matched pairs, which ignores the data regularities and might not be optimal for applications at hand. Here we propose to learn multiple Mahalanobis metrics, such that DTW distance becomes the sum of Mahalanobis distances. We adapt the large margin nearest neighbor (LMNN) framework to our case, and formulate multiple metric learning as a linear programming problem. Extensive sequence classification results show that our proposed multiple metrics learning approach is effective, insensitive to the preceding alignment qualities, and reaches the state-of-the-art performances on UCR time series datasets.
Our work is most similar to and largely inspired by LMNN @cite_17 . In @cite_17 , Weinberger and Saul extend LMNN to learn multiple local distance metrics, which is exploited in our work as well. However, our setting is still sufficiently different: first, the labeled examples in our case are temporal sequences; second, the DTW distance between two examples is jointly defined by multiple metrics, while in @cite_17 the distance between two examples is determined by a single metric. The authors of @cite_4 propose to learn a Mahalanobis distance metric to perform DTW sequence alignment. First, they need ground-truth alignments, which are not required in our case, and second, they focus on alignment instead of kNN classification.
{ "cite_N": [ "@cite_4", "@cite_17" ], "mid": [ "2952103052", "2106053110" ], "abstract": [ "In this paper, we propose to learn a Mahalanobis distance to perform alignment of multivariate time series. The learning examples for this task are time series for which the true alignment is known. We cast the alignment problem as a structured prediction task, and propose realistic losses between alignments for which the optimization is tractable. We provide experiments on real data in the audio to audio context, where we show that the learning of a similarity measure leads to improvements in the performance of the alignment task. We also propose to use this metric learning framework to perform feature selection and, from basic audio features, build a combination of these with better performance for the alignment.", "The accuracy of k-nearest neighbor (kNN) classification depends significantly on the metric used to compute distances between different examples. In this paper, we show how to learn a Mahalanobis distance metric for kNN classification from labeled examples. The Mahalanobis metric can equivalently be viewed as a global linear transformation of the input space that precedes kNN classification using Euclidean distances. In our approach, the metric is trained with the goal that the k-nearest neighbors always belong to the same class while examples from different classes are separated by a large margin. As in support vector machines (SVMs), the margin criterion leads to a convex optimization based on the hinge loss. Unlike learning in SVMs, however, our approach requires no modification or extension for problems in multiway (as opposed to binary) classification. In our framework, the Mahalanobis distance metric is obtained as the solution to a semidefinite program. On several data sets of varying size and difficulty, we find that metrics trained in this way lead to significant improvements in kNN classification. 
Sometimes these results can be further improved by clustering the training examples and learning an individual metric within each cluster. We show how to learn and combine these local metrics in a globally integrated manner." ] }
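The core construction of this record — DTW whose local cost between matched points is a Mahalanobis distance — can be sketched as below. This is a simplified single-metric illustration: the paper learns multiple local metrics whose costs are summed along the alignment path, whereas here a single matrix M is assumed (with M = I it reduces to squared-Euclidean DTW):

```python
import numpy as np

def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance (x - y)^T M (x - y) for PSD M."""
    d = x - y
    return float(d @ M @ d)

def dtw_mahalanobis(X, Y, M):
    """DTW distance between multivariate sequences X (n, d) and Y (m, d),
    accumulating squared Mahalanobis local costs under metric matrix M."""
    n, m = len(X), len(Y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = mahalanobis_sq(X[i - 1], Y[j - 1], M)
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Because the accumulated cost is linear in M, replacing the single M with position-dependent metrics (as the paper does) keeps the objective linear in the metric parameters, which is what makes the linear-programming formulation possible.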
1606.03662
2422717986
Choosing a good location when opening a new store is crucial for the future success of a business. Traditional methods include offline manual survey, which is very time consuming, and analytic models based on census data, which are un- able to adapt to the dynamic market. The rapid increase of the availability of big data from various types of mobile devices, such as online query data and offline positioning data, provides us with the possibility to develop automatic and accurate data-driven prediction models for business store placement. In this paper, we propose a Demand Distribution Driven Store Placement (D3SP) framework for business store placement by mining search query data from Baidu Maps. D3SP first detects the spatial-temporal distributions of customer demands on different business services via query data from Baidu Maps, the largest online map search engine in China, and detects the gaps between demand and sup- ply. Then we determine candidate locations via clustering such gaps. In the final stage, we solve the location optimization problem by predicting and ranking the number of customers. We not only deploy supervised regression models to predict the number of customers, but also learn to rank models to directly rank the locations. We evaluate our framework on various types of businesses in real-world cases, and the experiments results demonstrate the effectiveness of our methods. D3SP as the core function for store placement has already been implemented as a core component of our business analytics platform and could be potentially used by chain store merchants on Baidu Nuomi.
Location-based services have been widely used in the analysis of trade and location placement @cite_6 @cite_7 . The authors of @cite_6 find the optimal retail store location from a list of candidates by using supervised learning with features mined from Foursquare check-in data. Researchers in @cite_23 focus on locating ambulance stations using real traffic information, so as to minimize the average travel time to reach emergency requests. The authors of @cite_24 illustrate how user-generated mobile location data, such as Foursquare check-ins, can be used in trade area analysis. @cite_25 exploits regression modeling, a pairwise ranking objective, and sparsity regularization to solve the real estate ranking problem with online user reviews and offline moving behaviors. The authors of @cite_14 also propose a method for estate appraisal that leverages the mutual enforcement of ranking and clustering power.
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_6", "@cite_24", "@cite_23", "@cite_25" ], "mid": [ "2139846410", "", "2024577186", "15879234", "2295787074", "" ], "abstract": [ "It is traditionally a challenge for home buyers to understand, compare and contrast the investment values of real estates. While a number of estate appraisal methods have been developed to value real property, the performances of these methods have been limited by the traditional data sources for estate appraisal. However, with the development of new ways of collecting estate-related mobile data, there is a potential to leverage geographic dependencies of estates for enhancing estate appraisal. Indeed, the geographic dependencies of the value of an estate can be from the characteristics of its own neighborhood (individual), the values of its nearby estates (peer), and the prosperity of the affiliated latent business area (zone). To this end, in this paper, we propose a geographic method, named ClusRanking, for estate appraisal by leveraging the mutual enforcement of ranking and clustering power. ClusRanking is able to exploit geographic individual, peer, and zone dependencies in a probabilistic ranking model. Specifically, we first extract the geographic utility of estates from geography data, estimate the neighborhood popularity of estates by mining taxicab trajectory data, and model the influence of latent business areas via ClusRanking. Also, we use a linear model to fuse these three influential factors and predict estate investment values. Moreover, we simultaneously consider individual, peer and zone dependencies, and derive an estate-specific ranking likelihood as the objective function. 
Finally, we conduct a comprehensive evaluation with real-world estate related data, and the experimental results demonstrate the effectiveness of our method.", "", "The problem of identifying the optimal location for a new retail store has been the focus of past research, especially in the field of land economy, due to its importance in the success of a business. Traditional approaches to the problem have factored in demographics, revenue and aggregated human flow statistics from nearby or remote areas. However, the acquisition of relevant data is usually expensive. With the growth of location-based social networks, fine grained data describing user mobility and popularity of places has recently become attainable. In this paper we study the predictive power of various machine learning features on the popularity of retail stores in the city through the use of a dataset collected from Foursquare in New York. The features we mine are based on two general signals: geographic, where features are formulated according to the types and density of nearby places, and user mobility, which includes transitions between venues or the incoming flow of mobile users from distant areas. Our evaluation suggests that the best performing features are common across the three different commercial chains considered in the analysis, although variations may exist too, as explained by heterogeneities in the way retail facilities attract users. We also show that performance improves significantly when combining multiple features in supervised learning algorithms, suggesting that the retail success of a business may depend on multiple factors.", "In this paper, we illustrate how User Generated Mobile Location Data (UGMLD) like Foursquare check-ins can be used in Trade Area Analysis (TAA) by introducing a new framework and corresponding analytic methods. 
Three key processes were created: identifying the activity center of a mobile user, profiling users based on their location history, and modeling users' preference probability. Extensions to traditional TAA are introduced, including customer-centric distance decay analysis and check-in sequence analysis. Adopting the rich content and context of UGMLD, these methods introduce new dimensions to modeling and delineating trade areas. Analyzing customers' visits to a business in the context of their daily life sheds new light on the nature and performance of the venue. This work has important business implications in the field of mobile computing.", "Emergency medical service provides a variety of services for those in need of emergency care. One of the major challenges encountered by emergency service providers is selecting the appropriate locations for ambulance stations. Prior works measure spatial proximity under Euclidean space or static road network. In this paper, we focus on locating the ambulance stations by using the real traffic information so as to minimize the average travel-time to reach the emergency requests. To this end, we estimate the travel-time of road segments using real GPS trajectories and propose an efficient PAM-based refinement for the location problem. We conduct extensive experimental evaluations using real emergency requests collected from Tianjin, and the result shows that the proposed solution can reduce the travel-time to reach the emergency requests by 29.9 when compared to the original locations of ambulance stations.", "" ] }
1606.02792
2408795146
Optical strain is an extension of optical flow that is capable of quantifying subtle changes on faces and representing the minute facial motion intensities at the pixel level. This is computationally essential for the relatively new field of spontaneous micro-expression, where subtle expressions can be technically challenging to pinpoint. In this paper, we present a novel method for detecting and recognizing micro-expressions by utilizing facial optical strain magnitudes to construct optical strain features and optical strain weighted features. The two sets of features are then concatenated to form the resultant feature histogram. Experiments were performed on the CASME II and SMIC databases. We demonstrate on both databases, the usefulness of optical strain information and more importantly, that our best approaches are able to outperform the original baseline results for both detection and recognition tasks. A comparison of the proposed method with other existing spatio-temporal feature extraction approaches is also presented. HighlightsThe method proposed is a combination of two optical strain derived features.Optical strain magnitudes were employed to describe fine subtle facial movements.Evaluation was performed in both the detection and recognition tasks.Promising performances were obtained in two micro-expression databases.
In the paper by @cite_4 , optical strain patterns were used to spot facial micro-expressions automatically. Although they achieved 100% accuracy, the dataset used has a small sample size, containing a total of only 7 micro-expressions. Besides, the micro-expressions detected were not spontaneous but posed ones, which are less natural and realistic.
{ "cite_N": [ "@cite_4" ], "mid": [ "2113977317" ], "abstract": [ "This paper presents a novel method for automatic spotting (temporal segmentation) of facial expressions in long videos comprising of continuous and changing expressions. The method utilizes the strain impacted on the facial skin due to the non-rigid motion caused during expressions. The strain magnitude is calculated using the central difference method over the robust and dense optical flow field of each subjects face. Testing has been done on 2 datasets (which includes 100 macro-expressions) and promising results have been obtained. The method is robust to several common drawbacks found in automatic facial expression segmentation including moderate in-plane and out-of-plane motion. Additionally, the method has also been modified to work with videos containing micro-expressions. Micro-expressions are detected utilizing their smaller spatial and temporal extent. A subject's face is divided in to sub-regions (mouth, cheeks, forehead, and eyes) and facial strain is calculated for each of these regions. Strain patterns in individual regions are used to identify subtle changes which facilitate the detection of micro-expressions." ] }
1606.02877
2415195063
Understanding user instructions in natural language is an active research topic in AI and robotics. Typically, natural user instructions are high-level and can be reduced into low-level tasks expressed in common verbs (e.g., take', get', put'). For robots understanding such instructions, one of the key challenges is to process high-level user instructions and achieve the specified tasks with robots' primitive actions. To address this, we propose novel algorithms by utilizing semantic roles of common verbs defined in semantic dictionaries and integrating multiple open knowledge to generate task plans. Specifically, we present a new method for matching and recovering semantics of user instructions and a novel task planner that exploits functional knowledge of robot's action model. To verify and evaluate our approach, we implemented a prototype system using knowledge from several open resources. Experiments on our system confirmed the correctness and efficiency of our algorithms. Notably, our system has been deployed in the KeJia robot, which participated the annual RoboCup@Home competitions in the past three years and achieved encouragingly high scores in the benchmark tests.
To date, many approaches to instruction understanding and task planning for service robots have been proposed in the literature. For instance, several integrated systems @cite_21 @cite_13 @cite_11 for natural language understanding have been introduced to enable robots to complete tasks given instructions in natural language. However, they all assume that instructions are definitely specified for the domains and do not consider semantic disambiguation of verbs and their roles. Other works manually create environment-driven instructions for grounding user instructions in natural language to robots' actions @cite_6 @cite_2 . However, these methods cannot scale to a large number of tasks, because each task needs to be manually specified in an environment, and they are not suitable for different types of robots (e.g., robots with different arm configurations).
{ "cite_N": [ "@cite_21", "@cite_6", "@cite_2", "@cite_13", "@cite_11" ], "mid": [ "2099217059", "2296135247", "2029806546", "2060304006", "2151958719" ], "abstract": [ "Robots that can be given instructions in spoken language need to be able to parse a natural language utterance quickly, determine its meaning, generate a goal representation from it, check whether the new goal conflicts with existing goals, and if acceptable, produce an action sequence to achieve the new goal (ideally being sensitive to the existing goals). In this paper, we describe an integrated robotic architecture that can achieve the above steps by translating natural language instructions incrementally and simultaneously into formal logical goal description and action languages, which can be used both to reason about the achievability of a goal as well as to generate new action scripts to pursue the goal. We demonstrate the implementation of our approach on a robot taking spoken natural language instructions in an office environment.", "", "We describe a semantic mapping algorithm that learns human-centric environment models by interpreting natural language utterances. Underlying the approach is a coupled metric, topological, and semantic representation of the environment that enables the method to fuse information from natural language descriptions with low-level metric and appearance data. We extend earlier work with a novel formulation that incorporates spatial layout into a topological representation of the environment. We also describe a factor graph formulation of the semantic properties that encodes human-centric concepts such as type and colloquial name for each mapped region. The algorithm infers these properties by combining the user’s natural language descriptions with imageand laser-based scene classification. We also propose a mechanism to more effectively ground natural language descriptions of distant regions using semantic cues from other modalities. 
We describe how the algorithm employs this learned semantic information to propose valid topological hypotheses, leading to more accurate topological and metric maps. We demonstrate that integrating language with other sensor data increases the accuracy of the achieved spatial-semantic representation of the environment.", "Natural human-robot interaction requires different and more robust models of language understanding (NLU) than non-embodied NLU systems. In particular, architectures are required that (1) process language incrementally in order to be able to provide early backchannel feedback to human speakers; (2) use pragmatic contexts throughout the understanding process to infer missing information; and (3) handle the underspecified, fragmentary, or otherwise ungrammatical utterances that are common in spontaneous speech. In this paper, we describe our attempts at developing an integrated natural language understanding architecture for HRI, and demonstrate its novel capabilities using challenging data collected in human-human interaction experiments.", "This paper provides a framework to automatically generate a hybrid controller that guarantees that the robot can achieve its task when a robot model, a class of admissible environments, and a high-level task or behavior for the robot are provided. The desired task specifications, which are expressed in a fragment of linear temporal logic (LTL), can capture complex robot behaviors such as search and rescue, coverage, and collision avoidance. In addition, our framework explicitly captures sensor specifications that depend on the environment with which the robot is interacting, which results in a novel paradigm for sensor-based temporal-logic-motion planning. As one robot is part of the environment of another robot, our sensor-based framework very naturally captures multirobot specifications in a decentralized manner. 
Our computational approach is based on first creating discrete controllers satisfying specific LTL formulas. If feasible, the discrete controller is then used to guide the sensor-based composition of continuous controllers, which results in a hybrid controller satisfying the high-level specification but only if the environment is admissible." ] }
1606.02877
2415195063
Understanding user instructions in natural language is an active research topic in AI and robotics. Typically, natural user instructions are high-level and can be reduced to low-level tasks expressed in common verbs (e.g., 'take', 'get', 'put'). For robots to understand such instructions, one of the key challenges is to process high-level user instructions and achieve the specified tasks with robots' primitive actions. To address this, we propose novel algorithms by utilizing semantic roles of common verbs defined in semantic dictionaries and integrating multiple open knowledge to generate task plans. Specifically, we present a new method for matching and recovering semantics of user instructions and a novel task planner that exploits functional knowledge of the robot's action model. To verify and evaluate our approach, we implemented a prototype system using knowledge from several open resources. Experiments on our system confirmed the correctness and efficiency of our algorithms. Notably, our system has been deployed in the KeJia robot, which participated in the annual RoboCup@Home competitions in the past three years and achieved encouragingly high scores in the benchmark tests.
To improve generality and scalability, researchers have tried to exploit online knowledge and to learn large-scale knowledge representations in order to build a general-purpose system for instruction understanding. For example, Lemaignan et al. @cite_28 @cite_19 tried to understand and reason about knowledge around an action model using online knowledge for robots. It is worth pointing out that we previously proposed an integrated system @cite_24 for our KeJia robot, consisting of multi-mode NLP, integrated decision-making, and open knowledge searching.
{ "cite_N": [ "@cite_28", "@cite_19", "@cite_24" ], "mid": [ "", "2090118499", "2396580185" ], "abstract": [ "", "This paper presents how extraction, representation and use of symbolic knowledge from real-world perception and human-robot verbal and non-verbal interaction can actually enable a grounded and shared model of the world that is suitable for later high-level tasks such as dialogue understanding. We show how the anchoring process itself relies on the situated nature of human-robot interactions. We present an integrated approach, including a specialized symbolic knowledge representation system based on Description Logics, and case studies on several robotic platforms that demonstrate these cognitive capabilities.", "Users may ask a service robot to accomplish various tasks so that the designer of the robot cannot program each of the tasks beforehand. As more and more open-source knowledge resources become available, it is worthwhile trying to make use of open-source knowledge resources for service robots. The challenge lies in the autonomous identification, acquisition and utilization of missing knowledge about a user task at hand. In this paper, the core problem is formalized and the complexity results of the main reasoning issues are provided. A mechanism for task planning with open-knowledge rules which are provided by non-experts in semi-structured natural language and thus generally underspecified are introduced. Techniques for translating the semi-structured knowledge from a large open-source knowledge base are also presented. Experiments showed a remarkable improvement of the system performance on a test set consisting of hundreds of user desires from the open-source knowledge base." ] }
1606.02877
2415195063
Understanding user instructions in natural language is an active research topic in AI and robotics. Typically, natural user instructions are high-level and can be reduced to low-level tasks expressed in common verbs (e.g., 'take', 'get', 'put'). For robots to understand such instructions, one of the key challenges is to process high-level user instructions and achieve the specified tasks with robots' primitive actions. To address this, we propose novel algorithms by utilizing semantic roles of common verbs defined in semantic dictionaries and integrating multiple open knowledge to generate task plans. Specifically, we present a new method for matching and recovering semantics of user instructions and a novel task planner that exploits functional knowledge of the robot's action model. To verify and evaluate our approach, we implemented a prototype system using knowledge from several open resources. Experiments on our system confirmed the correctness and efficiency of our algorithms. Notably, our system has been deployed in the KeJia robot, which participated in the annual RoboCup@Home competitions in the past three years and achieved encouragingly high scores in the benchmark tests.
The approaches most related to ours are those using OMICS for robots to complete household tasks. The first attempt to utilize OMICS to accomplish a household task was @cite_26 , which proposed a generative model based on Markov chain techniques. Later on, @cite_17 @cite_9 @cite_27 presented a system called KNOWROB for processing knowledge in order to achieve more flexible and general behavior. Most recently, we proposed a formal description of the knowledge gaps between user instructions and the local knowledge of a robotic system for instruction understanding @cite_25 @cite_24 @cite_16 @cite_10 . However, in these efforts using OMICS for robot task planning with user instructions, common verbs are normally not defined in the knowledge base, which limits their performance in utilizing existing open knowledge. Thus, our work is proposed to address this weakness of state-of-the-art methods.
{ "cite_N": [ "@cite_26", "@cite_9", "@cite_24", "@cite_27", "@cite_16", "@cite_10", "@cite_25", "@cite_17" ], "mid": [ "2285097965", "2114653216", "2396580185", "2017197542", "2082543769", "2186444419", "2156415207", "" ], "abstract": [ "A system and a method are disclosed that provide plans for autonomous machines such as humanoid robots to perform indoor task. Human subjects contribute plans to a knowledge database. Information in the knowledge database is pre-processed to identify task steps and characterize them as action-object pairs, from which a plan database is created. A discriminative technique uses hierarchical agglomerative clustering to select an existing plan from the plan database. A generative technique formulates new plans from the plan database using first-order Markov chains, and may take into account information about the operational environment. Experimentation and evaluation by human subjects confirm the efficacy of both techniques.", "Unlike people, household robots cannot rely on commonsense knowledge when accomplishing everyday tasks. We believe that this is one of the reasons why they perform poorly in comparison to humans. By integrating extensive collections of commonsense knowledge into mobile robot's knowledge bases, the work proposed in this paper enables robots to flexibly infer control decisions under changing environmental conditions. We present a system that converts commonsense knowledge from the large Open Mind Indoor Common Sense database from natural language into a Description Logic representation that allows for automated reasoning and for relating it to other sources of knowledge.", "Users may ask a service robot to accomplish various tasks so that the designer of the robot cannot program each of the tasks beforehand. As more and more open-source knowledge resources become available, it is worthwhile trying to make use of open-source knowledge resources for service robots. 
The challenge lies in the autonomous identification, acquisition and utilization of missing knowledge about a user task at hand. In this paper, the core problem is formalized and the complexity results of the main reasoning issues are provided. A mechanism for task planning with open-knowledge rules which are provided by non-experts in semi-structured natural language and thus generally underspecified are introduced. Techniques for translating the semi-structured knowledge from a large open-source knowledge base are also presented. Experiments showed a remarkable improvement of the system performance on a test set consisting of hundreds of user desires from the open-source knowledge base.", "Autonomous service robots will have to understand vaguely described tasks, such as “set the table” or “clean up”. Performing such tasks as intended requires robots to fully, precisely, and appropriately parameterize their low-level control programs. We propose knowledge processing as a computational resource for enabling robots to bridge the gap between vague task descriptions and the detailed information needed to actually perform those tasks in the intended way. In this article, we introduce the KnowRob knowledge processing system that is specifically designed to provide autonomous robots with the knowledge needed for performing everyday manipulation tasks. The system allows the realization of “virtual knowledge bases”: collections of knowledge pieces that are not explicitly represented but computed on demand from the robot's internal data structures, its perception system, or external sources of information. This article gives an overview of the different kinds of knowledge, the different inference mechanisms, and interfaces for acquiring knowledge from external sources, such as the robot's perception system, observations of human activities, Web sites on the Internet, as well as Web-based knowledge bases for information exchange between robots. 
We evaluate the system's scalability and present different integrated experiments that show its versatility and comprehensiveness.", "Correctly interpreting human instructions is the first step to human-robot interaction. Previous approaches to semantically parsing the instructions relied on large numbers of training examples with annotation to widely cover all words in a domain. Annotating large enough instructions with semantic forms needs exhaustive engineering efforts. Hence, we propose propagating the semantic lexicon to learn a semantic parser from limited annotations, whereas the parser still has the ability of interpreting instructions on a large scale. We assume that the semantically-close words have the same semantic form based on the fact that human usually uses different words to refer to a same object or task. Our approach softly maps the unobserved words phrases to the semantic forms learned from the annotated copurs through a metric for knowledge-based lexical similarity. Experiments on the collected instructions showed that the semantic parser learned with lexicon propagation outperformed the baseline. Our approach provides an opportunity for the robots to understand the human instructions on a large scale.", "As more and more open knowledge resources become available, it is interesting to explore opportunities of enhancing autonomous agents' capacities by utilizing the knowledge in these resources, instead of hand-coding knowledge for agents. A major challenge towards this goal lies in the translation of the open knowledge organized in multiple modes, unstructured or semi-structured, into the internal representations of agents. In this paper we present a set of multi-mode NLP techniques to formalize the open knowledge for autonomous agents. 
Two case studies are reported in which our robot, equipped with the multi-mode NLP techniques, succeeded in acquiring knowledge from the microwave oven manual and from the open knowledge database, OMICS, and solving problems that could not be solved before the robot acquired the knowledge. Experiments for evaluating the performance of our approach show that our approach is promising.", "This paper presents an effort to enable robots to utilize open-source knowledge resources autonomously for human-robot interaction. The main challenges include how to extract knowledge in semi-structured and unstructured natural languages, how to make use of multiple types of knowledge in decision making, and how to identify the knowledge that is missing. A set of techniques for multi-mode natural language processing, integrated decision making, and open knowledge searching is proposed. The OK-KeJia robot prototype is implemented and evaluated, with special attention to two tests on 11,615 user tasks and 467 user desires. The experiments show that the overall performance improves remarkably due to the use of appropriate open knowledge.", "" ] }
1606.02270
2416043263
We present the EpiReader, a novel model for machine comprehension of text. Machine comprehension of unstructured, real-world text is a major research goal for natural language processing. Current tests of machine comprehension pose questions whose answers can be inferred from some supporting text, and evaluate a model's response to the questions. The EpiReader is an end-to-end neural model comprising two components: the first component proposes a small set of candidate answers after comparing a question to its supporting text, and the second component formulates hypotheses using the proposed candidates and the question, then reranks the hypotheses based on their estimated concordance with the supporting text. We present experiments demonstrating that the EpiReader sets a new state-of-the-art on the CNN and Children's Book Test machine comprehension benchmarks, outperforming previous neural models by a significant margin.
Most of the details of the very recent AS Reader are provided in the description of our Extractor module, so we do not summarize it further here. This model @cite_3 set the previous state-of-the-art on the CBT dataset.
{ "cite_N": [ "@cite_3" ], "mid": [ "2288995089" ], "abstract": [ "Several large cloze-style context-question-answer datasets have been introduced recently: the CNN and Daily Mail news data and the Children's Book Test. Thanks to the size of these datasets, the associated text comprehension task is well suited for deep-learning techniques that currently seem to outperform all alternative approaches. We present a new, simple model that uses attention to directly pick the answer from the context as opposed to computing the answer using a blended representation of words in the document as is usual in similar models. This makes the model particularly suitable for question-answering problems where the answer is a single word from the document. Ensemble of our models sets new state of the art on all evaluated datasets." ] }
1606.02270
2416043263
We present the EpiReader, a novel model for machine comprehension of text. Machine comprehension of unstructured, real-world text is a major research goal for natural language processing. Current tests of machine comprehension pose questions whose answers can be inferred from some supporting text, and evaluate a model's response to the questions. The EpiReader is an end-to-end neural model comprising two components: the first component proposes a small set of candidate answers after comparing a question to its supporting text, and the second component formulates hypotheses using the proposed candidates and the question, then reranks the hypotheses based on their estimated concordance with the supporting text. We present experiments demonstrating that the EpiReader sets a new state-of-the-art on the CNN and Children's Book Test machine comprehension benchmarks, outperforming previous neural models by a significant margin.
During the write-up of this paper, another very recent model came to our attention. propose using a bilinear term instead of a @math layer to compute the attention between question and passage words, and also use the attended word encodings for direct, pointer-style prediction as in . This model set the previous state-of-the-art on the CNN dataset. However, it used embedding vectors pretrained on a large external corpus @cite_4 .
{ "cite_N": [ "@cite_4" ], "mid": [ "2250539671" ], "abstract": [ "Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75 on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition." ] }
1606.02270
2416043263
We present the EpiReader, a novel model for machine comprehension of text. Machine comprehension of unstructured, real-world text is a major research goal for natural language processing. Current tests of machine comprehension pose questions whose answers can be inferred from some supporting text, and evaluate a model's response to the questions. The EpiReader is an end-to-end neural model comprising two components: the first component proposes a small set of candidate answers after comparing a question to its supporting text, and the second component formulates hypotheses using the proposed candidates and the question, then reranks the hypotheses based on their estimated concordance with the supporting text. We present experiments demonstrating that the EpiReader sets a new state-of-the-art on the CNN and Children's Book Test machine comprehension benchmarks, outperforming previous neural models by a significant margin.
The EpiReader borrows ideas from other models as well. The Reasoner's convolutional architecture is based on and . Our use of word-level matching was inspired by the Parallel-Hierarchical model of and the natural language inference model of . Finally, the idea of formulating and testing hypotheses for question-answering was used to great effect in IBM's DeepQA system for Jeopardy! @cite_5 , although that was a more traditional information retrieval pipeline rather than an end-to-end neural model.
{ "cite_N": [ "@cite_5" ], "mid": [ "2171278097" ], "abstract": [ "IBM Research undertook a challenge to build a computer system that could compete at the human champion level in real time on the American TV Quiz show, Jeopardy! The extent of the challenge includes fielding a real-time automatic contestant on the show, not merely a laboratory exercise. The Jeopardy! Challenge helped us address requirements that led to the design of the DeepQA architecture and the implementation of Watson. After 3 years of intense research and development by a core team of about 20 researches, Watson is performing at human expert-levels in terms of precision, confidence and speed at the Jeopardy! Quiz show. Our results strongly suggest that DeepQA is an effective and extensible architecture that may be used as a foundation for combining, deploying, evaluating and advancing a wide range of algorithmic techniques to rapidly advance the field of QA." ] }
1606.02577
2411816729
We give a precise algebraic characterization of the power of Sherali--Adams relaxations for solvability of valued constraint satisfaction problems (CSPs) to optimality. The condition is that of bounded width, which has already been shown to capture the power of local consistency methods for decision CSPs and the power of semidefinite programming for robust approximation of CSPs. Our characterization has several algorithmic and complexity consequences. On the algorithmic side, we show that several novel and well-known valued constraint languages are tractable via the third level of the Sherali--Adams relaxation. For the known languages, this is a significantly simpler algorithm than those previously obtained. On the complexity side, we obtain a dichotomy theorem for valued constraint languages that can express an injective unary function. This implies a simple proof of the dichotomy theorem for conservative valued constraint languages established by Kolmogorov and Živný [J. ACM, 60 (2013), 10], and also a ...
There are valued constraint languages that have valued relational width @math but not @math . For example, languages improved by a tournament pair fractional polymorphism @cite_40 , discussed in detail in Example in , have valued relational width @math by the results in this paper, but do not have valued relational width in @math , as shown in [Example 5] of ktz15:sicomp using Theorem .
{ "cite_N": [ "@cite_40" ], "mid": [ "2090795232" ], "abstract": [ "The submodular function minimization problem (SFM) is a fundamental problem in combinatorial optimization and several fully combinatorial polynomial-time algorithms have recently been discovered to solve this problem. The most general versions of these algorithms are able to minimize any submodular function whose domain is a set of tuples over any totally-ordered finite set and whose range includes both finite and infinite values. In this paper we demonstrate that this general form of SFM is just one example of a much larger class of tractable discrete optimization problems defined by valued constraints. These tractable problems are characterized by the fact that their valued constraints have an algebraic property which we call a tournament pair multimorphism. This larger tractable class also includes the problem of satisfying a set of Horn clauses (Horn-SAT), as well as various extensions of this problem to larger finite domains." ] }
1606.02577
2411816729
We give a precise algebraic characterization of the power of Sherali--Adams relaxations for solvability of valued constraint satisfaction problems (CSPs) to optimality. The condition is that of bounded width, which has already been shown to capture the power of local consistency methods for decision CSPs and the power of semidefinite programming for robust approximation of CSPs. Our characterization has several algorithmic and complexity consequences. On the algorithmic side, we show that several novel and well-known valued constraint languages are tractable via the third level of the Sherali--Adams relaxation. For the known languages, this is a significantly simpler algorithm than those previously obtained. On the complexity side, we obtain a dichotomy theorem for valued constraint languages that can express an injective unary function. This implies a simple proof of the dichotomy theorem for conservative valued constraint languages established by Kolmogorov and Živný [J. ACM, 60 (2013), 10], and also a ...
It could be that either SA @math and SA @math , or SA @math and SA @math have the same power. The former happens in case of relational width. Dalmau proved that if a crisp language has relational width @math then it has relational width @math @cite_52 . Together with Theorem and the analogue of Proposition for relational width established in @cite_55 , this gives a trichotomy for relational width.
{ "cite_N": [ "@cite_55", "@cite_52" ], "mid": [ "2150339067", "2123895606" ], "abstract": [ "This paper starts with the project of finding a large subclass of NP which exhibits a dichotomy. The approach is to find this subclass via syntactic prescriptions. While the paper does not achieve this goal, it does isolate a class (of problems specified by) \"monotone monadic SNP without inequality\" which may exhibit this dichotomy. We justify the placing of all these restrictions by showing, essentially using Ladner's theorem, that classes obtained by using only two of the above three restrictions do not show this dichotomy. We then explore the structure of this class. We show that all problems in this class reduce to the seemingly simpler class CSP. We divide CSP into subclasses and try to unify the collection of all known polytime algorithms for CSP problems and extract properties that make CSP problems NP-hard. This is where the second part of the title, \"a study through Datalog and group theory,\" comes in. We present conjectures about this class which would end in showing the dichotomy.", "In this note, we show that every constraint satisfaction problem that has relational width 2 has also relational width 1. This is achieved by means of an obstruction-like characterization of relational width which we believe to be of independent interest." ] }
1606.02193
2409923045
Monitoring Wireless Sensor Networks (WSNs) are composed of sensor nodes that report temperature, relative humidity, and other environmental parameters. The time between two successive measurements is a critical parameter to set during the WSN configuration because it can impact the WSN's lifetime, the wireless medium contention and the quality of the reported data. As trends in monitored parameters can significantly vary between scenarios and within time, identifying a sampling interval suitable for several cases is also challenging. In this work, we propose a dynamic sampling rate adaptation scheme based on reinforcement learning, able to tune sensors' sampling interval on-the-fly, according to environmental conditions and application requirements. The primary goal is to set the sampling interval to the best value possible so as to avoid oversampling and save energy, while not missing environmental changes that can be relevant for the application. In simulations, our mechanism could reduce the total number of transmissions by up to 73% compared to a fixed strategy and, simultaneously, keep the average quality of information provided by the WSN. The inherent flexibility of the reinforcement learning algorithm facilitates its use in several scenarios, so as to exploit the broad scope of the Internet of Things.
Several works have already adopted reinforcement learning techniques at various layers to improve wireless networks' performance. In @cite_3 , the authors proposed a self-adaptive routing framework for wireless mesh networks. Using the Q-Learning algorithm, it was possible to select, at runtime, the most suitable routing protocol from a pre-defined set of options and successfully increase the average data throughput in comparison to static techniques. Other examples of the use of reinforcement learning techniques in sensor networks include locating mobile sensor nodes @cite_5 and aggregating sensed data @cite_0 .
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_3" ], "mid": [ "", "2106813267", "2046164077" ], "abstract": [ "", "Wireless sensor networks (WSN) are extensively applied in civil and military areas. Localization is an essential prerequisite for many WSN applications, and is often based on beacons that provide geographical information in real time. Mobile Beacons (MB) can be used to replace many static beacons with paths that can be controlled in real-time. Robotic and or flight vehicles can work as MBs. In this paper we consider the use of reinforcement learning (RL) (a significant branch of machine learning) to control MBs. Usually, RL needs an infinite series of episodes to determine an optimal policy. We propose however a method of localization employing mobile beacon whose behavior will be controlled by an adapted RL algorithm. A MB learns and makes decisions based on weighted information collected from unknown sensors. Simulation results show that the adapted RL algorithm provides sufficient information to the MB to localise unknown sensors in a lightweight but effective way.", "Classical routing protocols for WMNs are typically designed to achieve specific target objectives (e.g., maximum throughput), and they offer very limited flexibility. As a consequence, more intelligent and adaptive mesh networking solutions are needed to obtain high performance in diverse network conditions. To this end, we propose a reinforcement learning-based routing framework that allows each mesh device to dynamically select at run time a routing protocol from a pre-defined set of routing options, which provides the best performance. The most salient advantages of our solution are: i) it can maximize routing performance considering different optimization goals, ii) it relies on a compact representation of the network state and it does not need any model of its evolution, and iii) it efficiently applies Q-learning methods to guarantee convergence of the routing decision process. Through extensive ns-2 simulations we show the superior performance of the proposed routing approach in comparison with two alternative routing schemes." ] }
1606.02193
2409923045
Monitoring Wireless Sensor Networks (WSNs) are composed of sensor nodes that report temperature, relative humidity, and other environmental parameters. The time between two successive measurements is a critical parameter to set during the WSN configuration because it can impact the WSN's lifetime, the wireless medium contention and the quality of the reported data. As trends in monitored parameters can significantly vary between scenarios and within time, identifying a sampling interval suitable for several cases is also challenging. In this work, we propose a dynamic sampling rate adaptation scheme based on reinforcement learning, able to tune sensors' sampling interval on-the-fly, according to environmental conditions and application requirements. The primary goal is to set the sampling interval to the best value possible so as to avoid oversampling and save energy, while not missing environmental changes that can be relevant for the application. In simulations, our mechanism could reduce by up to 73% the total number of transmissions compared to a fixed strategy and, simultaneously, keep the average quality of information provided by the WSN. The inherent flexibility of the reinforcement learning algorithm facilitates its use in several scenarios, so as to exploit the broad scope of the Internet of Things.
WSNs are mainly composed of wireless sensor nodes that take measurements and transmit them to a Gateway. There are many solutions to reduce their number of transmissions, which include clustering, data aggregation, and data prediction. In @cite_8 , the authors suggest that future measurements can be predicted by both sensor nodes and Gateways. Sensor nodes then transmit a measurement only when they observe that the prediction is not correct, i.e., when the real measurement differs from the predicted value by more than a certain threshold. The success of this technique depends strongly on the capacity of the sensor nodes to compute efficient prediction methods that accurately forecast future values. However, sensor nodes usually have very limited computing capacity and must rely on Gateways to regularly generate and transmit new predictions. Moreover, Gateways and sensor nodes must share the same knowledge, which requires additional control messages and reduces the benefit of fewer measurement transmissions, especially if predictions are not sufficiently accurate.
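The core of this dual-prediction scheme fits in a few lines: both the sensor node and the Gateway run the same predictor, and the node transmits only when the real reading deviates from the prediction by more than the agreed threshold. The trivial "repeat the last transmitted value" predictor and the threshold below are assumptions for illustration; @cite_8 selects among richer time-series models.

```python
# Sketch of threshold-based transmission suppression with a shared predictor.
# The last-value model and THRESHOLD are illustrative assumptions.

THRESHOLD = 0.5  # acceptable absolute error, in the measurement's unit

class DualPredictionNode:
    def __init__(self):
        self.last_sent = None  # knowledge shared with the Gateway

    def sample(self, measurement):
        """Return the measurement if it must be transmitted, else None."""
        predicted = self.last_sent  # trivial model: repeat the last sent value
        if predicted is None or abs(measurement - predicted) > THRESHOLD:
            self.last_sent = measurement  # Gateway updates its copy on reception
            return measurement            # transmit
        return None                       # suppress: Gateway keeps its prediction

readings = [20.0, 20.1, 20.2, 21.5, 21.6, 23.0]
node = DualPredictionNode()
sent = [m for m in readings if node.sample(m) is not None]
# With these readings only 3 of the 6 values are transmitted.
```

The Gateway reconstructs the suppressed readings from the same predictor, so the error of any reported value is bounded by the threshold.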
{ "cite_N": [ "@cite_8" ], "mid": [ "2070762331" ], "abstract": [ "In many practical applications of wireless sensor networks, the sensor nodes are required to report approximations of their readings at regular time intervals. For these applications, it has been shown that time series prediction techniques provide an effective way to reduce the communication effort while guaranteeing user-specified accuracy requirements on collected data. Achievable communication savings offered by time series prediction, however, strongly depend on the type of signal sensed, and in practice an inadequate a priori choice of a prediction model can lead to poor prediction performances. We propose in this paper the adaptive model selection algorithm, a lightweight, online algorithm that allows sensor nodes to autonomously determine a statistically good performing model among a set of candidate models. Experimental results obtained on the basis of 14 real-world sensor time series demonstrate the efficiency and versatility of the proposed framework in improving the communication savings." ] }
In @cite_7 , the authors propose an approach to answer queries in WSNs without fetching the data directly from the sensor nodes. Principal Component Analysis is applied to historical data to select only the sensor nodes that account for most of the variance observed in the environment. This technique reduced the workload of the sensor nodes and cut the sensory traffic by up to 50% in real testbeds. However, the authors do not define how the environment's evolution would be addressed. For example, if the temperature varies more often during the day, more measurements from more sensor nodes would be necessary during these hours to build datasets that reliably describe the environment.
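The node-selection step can be sketched as follows: compute the first principal component of the historical readings and keep the nodes with the largest loadings on it. The pure-Python power iteration and the selection budget are simplifications assumed here; @cite_7 works with the full set of principal components.

```python
# Sketch of PCA-based sensor selection: rank nodes by their |loading| on the
# first principal component of historical data. Power iteration is used to
# avoid external dependencies; real deployments would use a full PCA.

def first_pc_loadings(history):
    """history: list of rows, one measurement per sensor node per row."""
    n, k = len(history), len(history[0])
    means = [sum(row[j] for row in history) / n for j in range(k)]
    centered = [[row[j] - means[j] for j in range(k)] for row in history]
    # sample covariance matrix of the node readings
    cov = [[sum(r[i] * r[j] for r in centered) / (n - 1) for j in range(k)]
           for i in range(k)]
    # power iteration for the dominant eigenvector (the first PC)
    v = [1.0] * k
    for _ in range(200):
        w = [sum(cov[i][j] * v[j] for j in range(k)) for i in range(k)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def select_nodes(history, budget):
    """Indices of the `budget` nodes with the largest |loading| on PC1."""
    v = first_pc_loadings(history)
    return sorted(range(len(v)), key=lambda j: -abs(v[j]))[:budget]
```

Queries are then served from the selected nodes only, while the readings of the unselected ones are synthesized at the base station from the learned components.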
{ "cite_N": [ "@cite_7" ], "mid": [ "2067757953" ], "abstract": [ "In wireless sensor networks (WSN), a query is commonly used for collecting periodical data from the objects under monitoring. Amount of sensory data drawn across WSNs by a query can significantly impact WSN’s power consumption and its lifetime, since WSNs are battery operated. We present a novel methodology to construct an optimal query containing fewer sensory attributes as compared to a standard query, thereby reducing the sensory traffic in WSN. Our methodology employs a statistical technique, principal component analysis, on historical traces of sensory data to automatically identify important attributes among the correlated ones. The optimal query containing a reduced set of sensory attributes guarantees at least 25% reduction in energy consumption of WSN with respect to a standard query. Furthermore, from the reduced set of data reported by the optimal query, the methodology synthesizes the complete set of sensory data at a base station (reporting unit with surplus power supply). We validated the effectiveness of our methodology with real world sensor data. The result shows that our methodology can synthesize the complete set of sensory data analogous to a standard query with 93% accuracy." ] }