| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1607.01299 | 2471395359 | We study the problem of planning Pareto-optimal journeys in public transit networks. Most existing algorithms and speed-up techniques work by computing subjourneys to intermediary stops until the destination is reached. In contrast, the trip-based model focuses on trips and transfers between them, constructing journeys as a sequence of trips. In this paper, we develop a speed-up technique for this model inspired by principles behind existing state-of-the-art speed-up techniques, Transfer Pattern and Hub Labelling. The resulting algorithm allows us to compute Pareto-optimal (with respect to arrival time and number of transfers) 24-hour profiles on very large real-world networks in less than half a millisecond. Compared to the current state of the art for bicriteria queries on public transit networks, this is up to two orders of magnitude faster, while increasing preprocessing overhead by at most one order of magnitude. | Transfer Patterns (TP) @cite_14 @cite_1 @cite_16 is a speed-up technique that precomputes the eponymous transfer patterns between all stops in the network. These transfer patterns are formed by the sequence of stops where passengers transfer between vehicles. At query time, these patterns are then used to quickly find all Pareto-optimal journeys. | {
"cite_N": [
"@cite_14",
"@cite_1",
"@cite_16"
],
"mid": [
"1489358540",
"2293861670",
"1985418124"
],
"abstract": [
"We show how to route on very large public transportation networks (up to half a billion arcs) with average query times of a few milliseconds. We take into account many realistic features like: traffic days, walking between stations, queries between geographic locations instead of a source and a target station, and multi-criteria cost functions. Our algorithm is based on two key observations: (1) many shortest paths share the same transfer pattern, i.e., the sequence of stations where a change of vehicle occurs; (2) direct connections without change of vehicle can be looked up quickly. We precompute the respective data; in practice, this can be done in time linear in the network size, at the expense of a small fraction of non-optimal results. We have accelerated public transportation routing on Google Maps with a system based on our ideas. We report experimental results for three data sets of various kinds and sizes.",
"We consider the problem of Pareto-optimal route planning in public-transit networks of a whole country, a whole continent, or even the whole world. On such large networks, existing approaches suffer from either a very large space consumption, a very long preprocessing time or slow query processing. Transfer Patterns, a state-of-the-art technique for route planning in transit networks, achieves excellent query times, but the space consumption is large and the preprocessing time is huge. In this paper, we introduce a new scheme for the Transfer Pattern precomputation and query graph construction that reduces both the necessary preprocessing time and space consumption by an order of magnitude and more. Average query times are below 1 ms for local queries, independent of the size of the network, around 30 ms for non-local queries on the complete transit network of Germany, and an estimated 200 ms for a fictitious transit network covering the currently available data of the whole world.",
"We consider the application of route planning in large public-transportation networks (buses, trains, subways, etc). Many connections in such networks are operated at periodic time intervals. When a set of connections has sufficient periodicity, it becomes more efficient to store the time range and frequency (e.g., every 15 minutes from 8:00am-6:00pm) instead of storing each of the time events separately. Identifying an optimal frequency-compression is NP-hard, so we present a time- and space-efficient heuristic. We show how we can use this compression to not only save space but also query time. We particularly consider profile queries, which ask for all optimal routes with departure times in a given interval (e.g., a whole day). In particular, we design a new version of Dijkstra's algorithm that works with frequency-based labels and is suitable for profile queries. We evaluate the savings of our approach on two metropolitan and three country-wide public-transportation networks. On our largest network, we simultaneously achieve a better space consumption than all previous methods as well as profile query times that are about 5 times faster than the best previous method. We also improve Transfer Patterns, a state-of-the-art technique for fully realistic route planning in large public-transportation networks. In particular, we accelerate the expensive preprocessing by a factor of 60 compared to the original publication."
]
} |
1607.01205 | 2468246483 | While recent research in image understanding has often focused on recognizing more types of objects, understanding more about the objects is just as important. Recognizing object parts and attributes has been extensively studied before, yet learning large space of such concepts remains elusive due to the high cost of providing detailed object annotations for supervision. The key contribution of this paper is an algorithm to learn the nameable parts of objects automatically, from images obtained by querying Web search engines. The key challenge is the high level of noise in the annotations; to address it, we propose a new unified embedding space where the appearance and geometry of objects and their semantic parts are represented uniformly. Geometric relationships are induced in a soft manner by a rich set of nonsemantic mid-level anchors, bridging the gap between semantic and non-semantic parts. We also show that the resulting embedding provides a visually-intuitive mechanism to navigate the learned concepts and their corresponding images. | Learning from Web supervision Most previous works @cite_3 @cite_11 @cite_28 @cite_2 that learn from noisy Web images have focused on image classification. Usually, they adopt an iterative approach that jointly learns models and finds clean examples of a target concept. Only a few works have looked at the problem of localization. Some approaches @cite_9 @cite_56 discover common segments within a large set of Web images, but they do not quantitatively evaluate localization. The recent method of @cite_57 localizes objects with bounding boxes and evaluates the learnt models, but like the previous two, it does not consider object parts. Closer to our work, @cite_24 aims at discovering common sense knowledge relations between object categories from Web images, some of which correspond to the "part-of" relation. 
In the process of organizing the different appearance variations of Webly mined concepts, @cite_44 uses a "vocabulary of variance" that may include part names, but those are not associated with any geometry. | {
"cite_N": [
"@cite_28",
"@cite_9",
"@cite_3",
"@cite_56",
"@cite_57",
"@cite_24",
"@cite_44",
"@cite_2",
"@cite_11"
],
"mid": [
"2080289064",
"1968731684",
"2172191903",
"1996140089",
"2950931866",
"",
"2081613070",
"2127292559",
""
],
"abstract": [
"The objective of this work is to automatically generate a large number of images for a specified object class (for example, penguin). A multi-modal approach employing both text, meta data and visual features is used to gather many, high-quality images from the Web. Candidate images are obtained by a text based Web search querying on the object identifier (the word penguin). The Web pages and the images they contain are downloaded. The task is then to remove irrelevant images and re-rank the remainder. First, the images are re-ranked using a Bayes posterior estimator trained on the text surrounding the image and meta data features (such as the image alternative tag, image title tag, and image filename). No visual information is used at this stage. Second, the top-ranked images are used as (noisy) training data and a SVM visual classifier is learnt to improve the ranking further. The principal novelty is in combining text meta-data and visual features in order to achieve a completely automatic ranking of the images. Examples are given for a selection of animals (e.g. camels, sharks, penguins), vehicles (cars, airplanes, bikes) and other classes (guitar, wristwatch), totalling 18 classes. The results are assessed by precision recall curves on ground truth annotated data and by comparison to previous approaches including those of (on an additional six classes) and",
"In this paper, we address a challenging image segmentation problem called multiple foreground cosegmentation (MFC), which concerns a realistic scenario in general Webuser photo sets where a finite number of K foregrounds of interest repeatedly occur cross the entire photo set, but only an unknown subset of them is presented in each image. This contrasts the classical cosegmentation problem dealt with by most existing algorithms, which assume a much simpler but less realistic setting where the same set of foregrounds recurs in every image. We propose a novel optimization method for MFC, which makes no assumption on foreground configurations and does not suffer from the aforementioned limitation, while still leverages all the benefits of having co-occurring or (partially) recurring contents across images. Our method builds on an iterative scheme that alternates between a foreground modeling module and a region assignment module, both highly efficient and scalable. In particular, our approach is flexible enough to integrate any advanced region classifiers for foreground modeling, and our region assignment employs a combinatorial auction framework that enjoys several intuitively good properties such as optimality guarantee and linear complexity. We show the superior performance of our method in both segmentation quality and scalability in comparison with other state-of-the-art techniques on a newly introduced FlickrMFC dataset and the standard ImageNet dataset.",
"Current approaches to object category recognition require datasets of training images to be manually prepared, with varying degrees of supervision. We present an approach that can learn an object category from just its name, by utilizing the raw output of image search engines available on the Internet. We develop a new model, TSI-pLSA, which extends pLSA (as applied to visual words) to include spatial information in a translation and scale invariant manner. Our approach can handle the high intra-class variability and large proportion of unrelated images returned by search engines. We evaluate tire models on standard test sets, showing performance competitive with existing methods trained on hand prepared datasets",
"We present a new unsupervised algorithm to discover and segment out common objects from large and diverse image collections. In contrast to previous co-segmentation methods, our algorithm performs well even in the presence of significant amounts of noise images (images not containing a common object), as typical for datasets collected from Internet search. The key insight to our algorithm is that common object patterns should be salient within each image, while being sparse with respect to smooth transformations across other images. We propose to use dense correspondences between images to capture the sparsity and visual variability of the common object over the entire database, which enables us to ignore noise objects that may be salient within their own images but do not commonly occur in others. We performed extensive numerical evaluation on established co-segmentation datasets, as well as several new datasets generated using Internet search. Our approach is able to effectively segment out the common object for diverse object categories, while naturally identifying images where the common object is not present.",
"We present an approach to utilize large amounts of web data for learning CNNs. Specifically inspired by curriculum learning, we present a two-step approach for CNN training. First, we use easy images to train an initial visual representation. We then use this initial CNN and adapt it to harder, more realistic images by leveraging the structure of data and categories. We demonstrate that our two-stage CNN outperforms a fine-tuned CNN trained on ImageNet on Pascal VOC 2012. We also demonstrate the strength of webly supervised learning by localizing objects in web images and training a R-CNN style detector. It achieves the best performance on VOC 2007 where no VOC training data is used. Finally, we show our approach is quite robust to noise and performs comparably even when we use image search results from March 2013 (pre-CNN image search era).",
"",
"Recognition is graduating from labs to real-world applications. While it is encouraging to see its potential being tapped, it brings forth a fundamental challenge to the vision researcher: scalability. How can we learn a model for any concept that exhaustively covers all its appearance variations, while requiring minimal or no human supervision for compiling the vocabulary of visual variance, gathering the training images and annotations, and learning the models? In this paper, we introduce a fully-automated approach for learning extensive models for a wide range of variations (e.g. actions, interactions, attributes and beyond) within any concept. Our approach leverages vast resources of online books to discover the vocabulary of variance, and intertwines the data collection and modeling steps to alleviate the need for explicit human supervision in training the models. Our approach organizes the visual knowledge about a concept in a convenient and useful way, enabling a variety of applications across vision and NLP. Our online system has been queried by users to learn models for several interesting concepts including breakfast, Gandhi, beautiful, etc. To date, our system has models available for over 50, 000 variations within 150 concepts, and has annotated more than 10 million images with bounding boxes.",
"We address the problem of large-scale annotation of web images. Our approach is based on the concept of visual synset, which is an organization of images which are visually-similar and semantically-related. Each visual synset represents a single prototypical visual concept, and has an associated set of weighted annotations. Linear SVM's are utilized to predict the visual synset membership for unseen image examples, and a weighted voting rule is used to construct a ranked list of predicted annotations from a set of visual synsets. We demonstrate that visual synsets lead to better performance than standard methods on a new annotation database containing more than 200 million im- ages and 300 thousand annotations, which is the largest ever reported",
""
]
} |
1607.01205 | 2468246483 | While recent research in image understanding has often focused on recognizing more types of objects, understanding more about the objects is just as important. Recognizing object parts and attributes has been extensively studied before, yet learning large space of such concepts remains elusive due to the high cost of providing detailed object annotations for supervision. The key contribution of this paper is an algorithm to learn the nameable parts of objects automatically, from images obtained by querying Web search engines. The key challenge is the high level of noise in the annotations; to address it, we propose a new unified embedding space where the appearance and geometry of objects and their semantic parts are represented uniformly. Geometric relationships are induced in a soft manner by a rich set of nonsemantic mid-level anchors, bridging the gap between semantic and non-semantic parts. We also show that the resulting embedding provides a visually-intuitive mechanism to navigate the learned concepts and their corresponding images. | Unsupervised parts, mid-level features, and semantic parts Objects have been modeled using the notion of parts since the early work on pictorial structures @cite_4 , in the constellation @cite_20 and ISM @cite_33 models, and more recently the DPM @cite_60 . Parts are most commonly defined as localized components with consistent appearance and geometry in an object. All these works have in common that they discover object parts without naming them. In practice, only some of these parts have an actual semantic interpretation. Mid-level features @cite_50 @cite_42 @cite_45 @cite_37 @cite_26 @cite_22 are discriminative @cite_37 @cite_0 or rare @cite_50 blocks, which are leveraged for object recognition. Again, these parts lack semantics. The non-semantic anchors that we use share similarities with @cite_52 and @cite_26 , which we discuss in s:mid-level . Semantic parts have triggered recent interest @cite_38 @cite_25 @cite_48 . 
These works require strong annotations in the form of bounding boxes @cite_38 or segmentation masks @cite_25 @cite_48 at the part level. Here we depart from existing work and aim at mining semantic nameable parts with as little supervision as possible. | {
"cite_N": [
"@cite_38",
"@cite_37",
"@cite_26",
"@cite_4",
"@cite_33",
"@cite_22",
"@cite_60",
"@cite_48",
"@cite_42",
"@cite_52",
"@cite_0",
"@cite_45",
"@cite_50",
"@cite_25",
"@cite_20"
],
"mid": [
"",
"2111650570",
"343636949",
"2030536784",
"1995444699",
"1933900011",
"2168356304",
"792160549",
"2115628259",
"2304841027",
"12634471",
"2033832873",
"1590510366",
"2104408738",
"2154422044"
],
"abstract": [
"",
"We propose a method to learn a diverse collection of discriminative parts from object bounding box annotations. Part detectors can be trained and applied individually, which simplifies learning and extension to new features or categories. We apply the parts to object category detection, pooling part detections within bottom-up proposed regions and using a boosted classifier with proposed sigmoid weak learners for scoring. On PASCAL VOC 2010, we evaluate the part detectors' ability to discriminate and localize annotated key points. Our detection system is competitive with the best-existing systems, outperforming other HOG-based detectors on the more deformable categories.",
"This work explores the use of spatial context as a source of free and plentiful supervisory signal for training a rich visual representation. Given only a large, unlabeled image collection, we extract random pairs of patches from each image and train a convolutional neural net to predict the position of the second patch relative to the first. We argue that doing well on this task requires the model to learn to recognize objects and their parts. We demonstrate that the feature representation learned using this within-image context indeed captures visual similarity across images. For example, this representation allows us to perform unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset. Furthermore, we show that the learned ConvNet can be used in the R-CNN framework [19] and provides a significant boost over a randomly-initialized ConvNet, resulting in state-of-the-art performance among algorithms which use only Pascal-provided training set annotations.",
"In this paper we present a computationally efficient framework for part-based modeling and recognition of objects. Our work is motivated by the pictorial structure models introduced by Fischler and Elschlager. The basic idea is to represent an object by a collection of parts arranged in a deformable configuration. The appearance of each part is modeled separately, and the deformable configuration is represented by spring-like connections between pairs of parts. These models allow for qualitative descriptions of visual appearance, and are suitable for generic recognition problems. We address the problem of using pictorial structure models to find instances of an object in an image as well as the problem of learning an object model from training examples, presenting efficient algorithms in both cases. We demonstrate the techniques by learning models that represent faces and human bodies and using the resulting models to locate the corresponding objects in novel images.",
"This paper presents a novel method for detecting and localizing objects of a visual category in cluttered real-world scenes. Our approach considers object categorization and figure-ground segmentation as two interleaved processes that closely collaborate towards a common goal. As shown in our work, the tight coupling between those two processes allows them to benefit from each other and improve the combined performance. The core part of our approach is a highly flexible learned representation for object shape that can combine the information observed on different training examples in a probabilistic extension of the Generalized Hough Transform. The resulting approach can detect categorical objects in novel images and automatically infer a probabilistic segmentation from the recognition result. This segmentation is then in turn used to again improve recognition by allowing the system to focus its efforts on object pixels and to discard misleading influences from the background. Moreover, the information from where in the image a hypothesis draws its support is employed in an MDL based hypothesis verification stage to resolve ambiguities between overlapping hypotheses and factor out the effects of partial occlusion. An extensive evaluation on several large data sets shows that the proposed system is applicable to a range of different object categories, including both rigid and articulated objects. In addition, its flexible representation allows it to achieve competitive object detection performance already from training sets that are between one and two orders of magnitude smaller than those used in comparable systems.",
"Mid-level visual element discovery aims to find clusters of image patches that are both representative and discriminative. In this work, we study this problem from the prospective of pattern mining while relying on the recently popularized Convolutional Neural Networks (CNNs). Specifically, we find that for an image patch, activation extracted from the first fully-connected layer of a CNN have two appealing properties which enable its seamless integration with pattern mining. Patterns are then discovered from a large number of CNN activations of image patches through the well-known association rule mining. When we retrieve and visualize image patches with the same pattern (See Fig. 1), surprisingly, they are not only visually similar but also semantically consistent. We apply our approach to scene and object classification tasks, and demonstrate that our approach outperforms all previous works on mid-level visual element discovery by a sizeable margin with far fewer elements being used. Our approach also outperforms or matches recent works using CNN for these tasks. Source code of the complete system is available online.",
"We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.",
"Segmenting semantic objects from images and parsing them into their respective semantic parts are fundamental steps towards detailed object understanding in computer vision. In this paper, we propose a joint solution that tackles semantic object and part segmentation simultaneously, in which higher object-level context is provided to guide part segmentation, and more detailed part-level localization is utilized to refine object segmentation. Specifically, we first introduce the concept of semantic compositional parts (SCP) in which similar semantic parts are grouped and shared among different objects. A two-channel fully convolutional network (FCN) is then trained to provide the SCP and object potentials at each pixel. At the same time, a compact set of segments can also be obtained from the SCP predictions of the network. Given the potentials and the generated segments, in order to explore long-range context, we finally construct an efficient fully connected conditional random field (FCRF) to jointly predict the final object and part labels. Extensive evaluation on three different datasets shows that our approach can mutually enhance the performance of object and part segmentation, and outperforms the current state-of-the-art on both tasks.",
"Recent work on mid-level visual representations aims to capture information at the level of complexity higher than typical \"visual words\", but lower than full-blown semantic objects. Several approaches [5,6,12,23] have been proposed to discover mid-level visual elements, that are both 1) representative, i.e., frequently occurring within a visual dataset, and 2) visually discriminative. However, the current approaches are rather ad hoc and difficult to analyze and evaluate. In this work, we pose visual element discovery as discriminative mode seeking, drawing connections to the the well-known and well-studied mean-shift algorithm [2, 1, 4, 8]. Given a weakly-labeled image collection, our method discovers visually-coherent patch clusters that are maximally discriminative with respect to the labels. One advantage of our formulation is that it requires only a single pass through the data. We also propose the Purity-Coverage plot as a principled way of experimentally analyzing and evaluating different visual discovery approaches, and compare our method against prior work on the Paris Street View dataset of [5]. We also evaluate our method on the task of scene classification, demonstrating state-of-the-art performance on the MIT Scene-67 dataset.",
"This paper proposes a novel approach to learning mid-level image models for image categorization and cosegmentation. We represent each image class by a dictionary of part detectors that best discriminate that class from the background. We learn category-specific part detectors in a weakly supervised setting in which the training images are only annotated with category labels without part object location information. We use a latent SVM model regularized using the @math l2,1 group sparsity norm to learn the part detectors. Starting from a large set of initial parts, the group sparsity regularizer forces the model to jointly select and optimize a set of discriminative part detectors in a max-margin framework. We propose a stochastic version of a proximal algorithm to solve the corresponding optimization problem. We apply the learned part detectors to image classification and cosegmentation, and present extensive comparative experiments with standard benchmarks.",
"In this paper we address the problem of automatically recognizing pictured dishes. To this end, we introduce a novel method to mine discriminative parts using Random Forests (rf), which allows us to mine for parts simultaneously for all classes and to share knowledge among them. To improve efficiency of mining and classification, we only consider patches that are aligned with image superpixels, which we call components. To measure the performance of our rf component mining for food recognition, we introduce a novel and challenging dataset of 101 food categories, with 101’000 images. With an average accuracy of 50.76 , our model outperforms alternative classification methods except for cnn, including svm classification on Improved Fisher Vectors and existing discriminative part-mining algorithms by 11.88 and 8.13 , respectively. On the challenging mit-Indoor dataset, our method compares nicely to other s-o-a component-based classification methods.",
"The automatic discovery of distinctive parts for an object or scene class is challenging since it requires simultaneously to learn the part appearance and also to identify the part occurrences in images. In this paper, we propose a simple, efficient, and effective method to do so. We address this problem by learning parts incrementally, starting from a single part occurrence with an Exemplar SVM. In this manner, additional part instances are discovered and aligned reliably before being considered as training examples. We also propose entropy-rank curves as a means of evaluating the distinctiveness of parts shareable between categories and use them to select useful parts out of a set of candidates. We apply the new representation to the task of scene categorisation on the MIT Scene 67 benchmark. We show that our method can learn parts which are significantly more informative and for a fraction of the cost, compared to previous part-learning methods such as [28]. We also show that a well constructed bag of words or Fisher vector model can substantially outperform the previous state-of-the-art classification performance on this data.",
"The goal of this paper is to discover a set of discriminative patches which can serve as a fully unsupervised mid-level visual representation. The desired patches need to satisfy two requirements: 1) to be representative, they need to occur frequently enough in the visual world; 2) to be discriminative, they need to be different enough from the rest of the visual world. The patches could correspond to parts, objects, \"visual phrases\", etc. but are not restricted to be any one of them. We pose this as an unsupervised discriminative clustering problem on a huge dataset of image patches. We use an iterative procedure which alternates between clustering and training discriminative classifiers, while applying careful cross-validation at each step to prevent overfitting. The paper experimentally demonstrates the effectiveness of discriminative patches as an unsupervised mid-level visual representation, suggesting that it could be used in place of visual words for many tasks. Furthermore, discriminative patches can also be used in a supervised regime, such as scene classification, where they demonstrate state-of-the-art performance on the MIT Indoor-67 dataset.",
"Detecting objects becomes difficult when we need to deal with large shape deformation, occlusion and low resolution. We propose a novel approach to i) handle large deformations and partial occlusions in animals (as examples of highly deformable objects), ii) describe them in terms of body parts, and iii) detect them when their body parts are hard to detect (e.g., animals depicted at low resolution). We represent the holistic object and body parts separately and use a fully connected model to arrange templates for the holistic object and body parts. Our model automatically decouples the holistic object or body parts from the model when they are hard to detect. This enables us to represent a large number of holistic object and body part combinations to better deal with different \"detectability\" patterns caused by deformations, occlusion and or low resolution. We apply our method to the six animal categories in the PASCAL VOC dataset and show that our method significantly improves state-of-the-art (by 4.1 AP) and provides a richer representation for objects. During training we use annotations for body parts (e.g., head, torso, etc.), making use of a new dataset of fully annotated object parts for PASCAL VOC 2010, which provides a mask for each part.",
"We present a method to learn and recognize object class models from unlabeled and unsegmented cluttered scenes in a scale invariant manner. Objects are modeled as flexible constellations of parts. A probabilistic representation is used for all aspects of the object: shape, appearance, occlusion and relative scale. An entropy-based feature detector is used to select regions and their scale within the image. In learning the parameters of the scale-invariant object model are estimated. This is done using expectation-maximization in a maximum-likelihood setting. In recognition, this model is used in a Bayesian manner to classify images. The flexible nature of the model is demonstrated by excellent results over a range of datasets including geometrically constrained classes (e.g. faces, cars) and flexible objects (such as animals)."
]
} |
1607.00659 | 2461217441 | This paper presents a novel Robust Deep Appearance Models to learn the non-linear correlation between shape and texture of face images. In this approach, two crucial components of face images, i.e. shape and texture, are represented by Deep Boltzmann Machines and Robust Deep Boltzmann Machines (RDBM), respectively. The RDBM, an alternative form of Robust Boltzmann Machines, can separate corrupted occluded pixels in the texture modeling to achieve better reconstruction results. The two models are connected by Restricted Boltzmann Machines at the top layer to jointly learn and capture the variations of both facial shapes and appearances. This paper also introduces new fitting algorithms with occlusion awareness through the mask obtained from the RDBM reconstruction. The proposed approach is evaluated in various applications by using challenging face datasets, i.e. Labeled Face Parts in the Wild (LFPW), Helen, EURECOM and AR databases, to demonstrate its robustness and capabilities. | This section reviews Restricted Boltzmann Machines @cite_11 and their extensions, as well as recent advances in AAM-based facial modeling and fitting approaches. | {
"cite_N": [
"@cite_11"
],
"mid": [
"2116064496"
],
"abstract": [
"It is possible to combine multiple latent-variable models of the same data by multiplying their probability distributions together and then renormalizing. This way of combining individual \"expert\" models makes it hard to generate samples from the combined model but easy to infer the values of the latent variables of each expert, because the combination rule ensures that the latent variables of different experts are conditionally independent when given the data. A product of experts (PoE) is therefore an interesting candidate for a perceptual system in which rapid inference is vital and generation is unnecessary. Training a PoE by maximizing the likelihood of the data is difficult because it is hard even to approximate the derivatives of the renormalization term in the combination rule. Fortunately, a PoE can be trained using a different objective function called \"contrastive divergence\" whose derivatives with regard to the parameters can be approximated accurately and efficiently. Examples are presented of contrastive divergence learning using several types of expert on several types of data."
]
} |
1607.00659 | 2461217441 | This paper presents a novel Robust Deep Appearance Models to learn the non-linear correlation between shape and texture of face images. In this approach, two crucial components of face images, i.e. shape and texture, are represented by Deep Boltzmann Machines and Robust Deep Boltzmann Machines (RDBM), respectively. The RDBM, an alternative form of Robust Boltzmann Machines, can separate corrupted occluded pixels in the texture modeling to achieve better reconstruction results. The two models are connected by Restricted Boltzmann Machines at the top layer to jointly learn and capture the variations of both facial shapes and appearances. This paper also introduces new fitting algorithms with occlusion awareness through the mask obtained from the RDBM reconstruction. The proposed approach is evaluated in various applications by using challenging face datasets, i.e. Labeled Face Parts in the Wild (LFPW), Helen, EURECOM and AR databases, to demonstrate its robustness and capabilities. | Restricted Boltzmann Machines (RBM) @cite_11 are an undirected graphical model with two layers of stochastic units, i.e. visible and hidden units, which represent the observed data and the conditional representation of that data, respectively. Visible and hidden units are connected by weighted undirected edges. Gaussian RBM @cite_12 models real-valued data by assuming the visible units have real values normally distributed with mean @math and variance @math . Moreover, a set of RBMs can be stacked on top of one another to capture more complicated correlations between features in the lower layer. This approach produces a deeper network called Deep Boltzmann Machines @cite_5 . RoBM @cite_15 were proposed to estimate noise and learn features simultaneously by distinguishing corrupted and uncorrupted pixels to find optimal latent representations. | {
"cite_N": [
"@cite_5",
"@cite_15",
"@cite_12",
"@cite_11"
],
"mid": [
"189596042",
"2054814877",
"",
"2116064496"
],
"abstract": [
"We present a new learning algorithm for Boltzmann machines that contain many layers of hidden variables. Data-dependent expectations are estimated using a variational approximation that tends to focus on a single mode, and dataindependent expectations are approximated using persistent Markov chains. The use of two quite different techniques for estimating the two types of expectation that enter into the gradient of the log-likelihood makes it practical to learn Boltzmann machines with multiple hidden layers and millions of parameters. The learning can be made more efficient by using a layer-by-layer “pre-training” phase that allows variational inference to be initialized with a single bottomup pass. We present results on the MNIST and NORB datasets showing that deep Boltzmann machines learn good generative models and perform well on handwritten digit and visual object recognition tasks.",
"While Boltzmann Machines have been successful at unsupervised learning and density modeling of images and speech data, they can be very sensitive to noise in the data. In this paper, we introduce a novel model, the Robust Boltzmann Machine (RoBM), which allows Boltzmann Machines to be robust to corruptions. In the domain of visual recognition, the RoBM is able to accurately deal with occlusions and noise by using multiplicative gating to induce a scale mixture of Gaussians over pixels. Image denoising and in-painting correspond to posterior inference in the RoBM. Our model is trained in an unsupervised fashion with unlabeled noisy data and can learn the spatial structure of the occluders. Compared to standard algorithms, the RoBM is significantly better at recognition and denoising on several face databases.",
"",
"It is possible to combine multiple latent-variable models of the same data by multiplying their probability distributions together and then renormalizing. This way of combining individual \"expert\" models makes it hard to generate samples from the combined model but easy to infer the values of the latent variables of each expert, because the combination rule ensures that the latent variables of different experts are conditionally independent when given the data. A product of experts (PoE) is therefore an interesting candidate for a perceptual system in which rapid inference is vital and generation is unnecessary. Training a PoE by maximizing the likelihood of the data is difficult because it is hard even to approximate the derivatives of the renormalization term in the combination rule. Fortunately, a PoE can be trained using a different objective function called \"contrastive divergence\" whose derivatives with regard to the parameters can be approximated accurately and efficiently. Examples are presented of contrastive divergence learning using several types of expert on several types of data."
]
} |
1607.00718 | 2473558390 | In this work, we introduce temporal hierarchies to the sequence to sequence (seq2seq) model to tackle the problem of abstractive summarization of scientific articles. The proposed Multiple Timescale model of the Gated Recurrent Unit (MTGRU) is implemented in the encoder-decoder setting to better deal with the presence of multiple compositionalities in larger texts. The proposed model is compared to the conventional RNN encoder-decoder, and the results demonstrate that our model trains faster and shows significant performance gains. The results also show that the temporal hierarchies help improve the ability of seq2seq models to capture compositionalities better without the presence of highly complex architectural hierarchies. | Summarization has been extensively researched over the past several decades, and existing surveys offer excellent overviews of the field. Broadly, summarization methods can be categorized into extractive approaches and abstractive approaches @cite_10 , based on the type of computational task. Extractive summarization is a selection problem, while abstractive summarization requires a deeper semantic and discourse understanding of the text, as well as a novel text generation process. Extractive summarization has been the focus in the past, but abstractive summarization remains a challenge. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2066350381"
],
"abstract": [
"Summarization, the art of abstracting key content from one or more information sources, has become an integral part of everyday life. Researchers are investigating summarization tools and methods that automatically extract or abstract content from a range of information sources, including multimedia. Researchers are looking at approaches which roughly fall into two categories. Knowledge-poor approaches rely on not having to add new rules for each new application domain or language. Knowledge-rich approaches assume that if you grasp the meaning of the text, you can reduce it more effectively, thus yielding a better summary. Some approaches use a hybrid. In both methods, the main constraint is the compression requirement. High reduction rates pose a challenge because they are hard to attain without a reasonable amount of background knowledge. Another challenge is how to evaluate summarizers. If you are to trust that the summary is indeed a reliable substitute for the source, you must be confident that it does in fact reflect what is relevant in that source. Hence, methods for creating and evaluating summaries must complement each other."
]
} |
1607.00718 | 2473558390 | In this work, we introduce temporal hierarchies to the sequence to sequence (seq2seq) model to tackle the problem of abstractive summarization of scientific articles. The proposed Multiple Timescale model of the Gated Recurrent Unit (MTGRU) is implemented in the encoder-decoder setting to better deal with the presence of multiple compositionalities in larger texts. The proposed model is compared to the conventional RNN encoder-decoder, and the results demonstrate that our model trains faster and shows significant performance gains. The results also show that the temporal hierarchies help improve the ability of seq2seq models to capture compositionalities better without the presence of highly complex architectural hierarchies. | In this paper, we present a first, intermediate step towards end-to-end abstractive summarization of scientific articles. Our aim is to extend seq2seq based summarization to larger text with a more complex summarization task. To address each of the issues above, 1) We propose a paragraph-wise summarization system, which is trained via paragraph-salient sentence pairs. We use Term Frequency-Inverse Document Frequency (TF-IDF) @cite_13 @cite_5 scores to extract a salient sentence from each paragraph. 2) We introduce a novel model, Multiple Timescale Gated Recurrent Unit (MTGRU), which adds a temporal hierarchy component that serves to handle multiple levels of compositionality. This is inspired by an analogous concept of temporal hierarchical organization found in the human brain, and is implemented by modulating different layers of the multilayer RNN with different timescales @cite_1 . We demonstrate that our model is capable of understanding the semantics of a multi-sentence source text and knowing what is important about it, which is the first necessary step towards abstractive summarization. 
3) We build a new dataset of Computer Science (CS) articles from ArXiv.org, extracting their Introductions from the LaTeX source files. The Introductions are decomposed into paragraphs, each paragraph acting as a natural unit of discourse. | {
"cite_N": [
"@cite_1",
"@cite_5",
"@cite_13"
],
"mid": [
"2028418738",
"2144211451",
"1974339500"
],
"abstract": [
"It is generally thought that skilled behavior in human beings results from a functional hierarchy of the motor control system, within which reusable motor primitives are flexibly integrated into various sensori-motor sequence patterns. The underlying neural mechanisms governing the way in which continuous sensori-motor flows are segmented into primitives and the way in which series of primitives are integrated into various behavior sequences have, however, not yet been clarified. In earlier studies, this functional hierarchy has been realized through the use of explicit hierarchical structure, with local modules representing motor primitives in the lower level and a higher module representing sequences of primitives switched via additional mechanisms such as gate-selecting. When sequences contain similarities and overlap, however, a conflict arises in such earlier models between generalization and segmentation, induced by this separated modular structure. To address this issue, we propose a different type of neural network model. The current model neither makes use of separate local modules to represent primitives nor introduces explicit hierarchical structure. Rather than forcing architectural hierarchy onto the system, functional hierarchy emerges through a form of self-organization that is based on two distinct types of neurons, each with different time properties (‘‘multiple timescales’’). Through the introduction of multiple timescales, continuous sequences of behavior are segmented into reusable primitives, and the primitives, in turn, are flexibly integrated into novel sequences. In experiments, the proposed network model, coordinating the physical body of a humanoid robot through high-dimensional sensori-motor control, also successfully situated itself within a physical environment. 
Our results suggest that it is not only the spatial connections between neurons but also the timescales of neural activity that act as important mechanisms leading to functional hierarchy in neural systems.",
"The exhaustivity of document descriptions and the specificity of index terms are usually regarded as independent. It is suggested that specificity should be interpreted statistically, as a function of term use rather than of term meaning. The effects on retrieval of variations in term specificity are examined, experiments with three test collections showing in particular that frequently‐occurring terms are required for good overall performance. It is argued that terms should be weighted according to collection frequency, so that matches on less frequent, more specific, terms are of greater value than matches on frequent terms. Results for the test collections show that considerable improvements in performance are obtained with this very simple procedure.",
"Excerpts of technical papers and magazine articles that serve the purposes of conventional abstracts have been created entirely by automatic means. In the exploratory research described, the complete text of an article in machine-readable form is scanned by an IBM 704 data-processing machine and analyzed in accordance with a standard program. Statistical information derived from word frequency and distribution is used by the machine to compute a relative measure of significance, first for individual words and then for sentences. Sentences scoring highest in significance are extracted and printed out to become the \"auto-abstract.\""
]
} |
1607.00537 | 2474822714 | To incentivize users' participations and steer their online activities, online social networks start to provide users with various kinds of rewards for their contributions to the sites. The most frequently distributed rewards include account levels, reputation scores, different kinds of badges, and even material awards like small gifts and cash back, etc. Attracted by these rewards, users will spend more time using the network services. In this paper, we will mainly focus on "badges reward systems" but the proposed models can be applied to other reward systems as well. Badges are small icons attached to users' homepages and profiles denoting their achievements. People like to accumulate badge for various reasons and different badges can have specific values for them. Meanwhile, to get badges, they also need to exert efforts to finish the required tasks, which can lead to certain costs. To understand and model users' motivations in badge achievement activities, we will study an existing badge system launched inside a real-world online social network, Foursquare, in this paper. At the same time, to maximize users' contributions to online social networks, social network system designers need to determine the optimal badge system mechanism carefully. Badge system mechanism describes various detailed aspects of the system and can involve many parameters, e.g., categories of existing badges, number of badges available as well as the minimum contributions required to obtain the badges, which all need to be designed with meticulous investigations. Based on the model of users' badges accumulating activities, in this paper, we will also study how to design the badge system that can incentivize the maximum users' contributions to the social networks. | To achieve the maximal contribution to the sites, much work has been done on designing badge systems for online social networks. Jain et al.
@cite_2 study the problem of incentive design for online question and answer sites. Anderson et al. @cite_12 study how badges can influence and steer user behavior on social networks, leading both to increased participation and to changes in the mix of activities a user pursues in the network. Ghosh et al. @cite_13 study the problem of implementing a mechanism that leads to optimal outcomes in social computing, based on a game-theoretic approach. Immorlica et al. @cite_16 study badge system designs whose goal is to maximize contributions. Easley et al. @cite_3 take a game-theoretic approach to badge design, analyzing the incentives that badges create for potential contributors as well as their contributions to the sites. | {
"cite_N": [
"@cite_3",
"@cite_2",
"@cite_16",
"@cite_13",
"@cite_12"
],
"mid": [
"2025280459",
"2124810512",
"2951092054",
"2951605358",
"2148527862"
],
"abstract": [
"Gamification is growing increasingly prevalent as a means to incentivize user engagement of social media sites that rely on user contributions. Badges, or equivalent rewards, such as top-contributor lists that are used to recognize a user's contributions on a site, clearly appear to be valued by users who actively pursue and compete for them. However, different sites use different badge designs, varying how, and for what, badges are awarded. Some sites, such as StackOverflow, award badges for meeting fixed levels of contribution. Other sites, such as Amazon and Y! Answers, reward users for being among some top set of contributors on the site, corresponding to a competitive standard of performance. Given that users value badges, and that contributing to a site requires effort, how badges are designed will affect the incentives—therefore the participation and effort—elicited from strategic users on a site. We take a game-theoretic approach to badge design, analyzing the incentives created by widely used badge designs in a model in which winning a badge is valued, effort is costly, and potential contributors to the site endogenously decide whether or not to participate, and how much total effort to put into their contributions to the site. We analyze equilibrium existence, as well as equilibrium participation and effort, in an absolute standards mechanism Mα in which badges are awarded for meeting some absolute level of (observed) effort, and a relative standards mechanism Mρ corresponding to competitive standards, as in a top-ρ contributor badge. We find that equilibria always exist in both mechanisms, even when the value from winning a badge depends endogenously on the number of other winners. 
However, Mα has zero-participation equilibria for standards that are too high, whereas all equilibria in Mρ elicit nonzero participation for all possible ρ, provided that ρ is specified as a fixed number rather than as a fraction of actual contributors (note that the two are not equivalent in a setting with endogenous participation). Finally, we ask whether or not a site should explicitly announce the number of users winning a badge. The answer to this question is determined by the curvature of the value of winning the badge as a function of the number of other winners.",
"In this paper, we provide a simple game-theoretic model of an online question and answer forum. We focus on factual questions in which user responses aggregate while a question remains open. Each user has a unique piece of information and can decide when to report this information. The asker prefers to receive information sooner rather than later, and will stop the process when satisfied with the cumulative value of the posted information. We consider two distinct cases: a complements case, in which each successive piece of information is worth more to the asker than the previous one; and a substitutes case, in which each successive piece of information is worth less than the previous one. A best-answer scoring rule is adopted to model Yahoo! Answers, and is effective for substitutes information, where it isolates an equilibrium in which all users respond in the first round. But we find that this rule is ineffective for complements information, isolating instead an equilibrium in which all users respond in the final round. In addressing this, we demonstrate that an approval-voting scoring rule and a proportional-share scoring rule can enable the most efficient equilibrium with complements information, under certain conditions, by providing incentives for early responders as well as the user who submits the final answer.",
"Many websites rely on user-generated content to provide value to consumers. These websites typically incentivize participation by awarding users badges based on their contributions. While these badges typically have no explicit value, they act as symbols of social status within a community. In this paper, we consider the design of badge mechanisms for the objective of maximizing the total contributions made to a website. Users exert costly effort to make contributions and, in return, are awarded with badges. A badge is only valued to the extent that it signals social status and thus badge valuations are determined endogenously by the number of users who earn each badge. The goal of this paper is to study the design of optimal and approximately badge mechanisms under these status valuations. We characterize badge mechanisms by whether they use a coarse partitioning scheme, i.e. awarding the same badge to many users, or use a fine partitioning scheme, i.e. awarding a unique badge to most users. We find that the optimal mechanism uses both fine partitioning and coarse partitioning. When status valuations exhibit a decreasing marginal value property, we prove that coarse partitioning is a necessary feature of any approximately optimal mechanism. Conversely, when status valuations exhibit an increasing marginal value property, we prove that fine partitioning is necessary for approximate optimality.",
"In many social computing applications such as online Q&A forums, the best contribution for each task receives some high reward, while all remaining contributions receive an identical, lower reward irrespective of their actual qualities. Suppose a mechanism designer (site owner) wishes to optimize an objective that is some function of the number and qualities of received contributions. When potential contributors are strategic agents, who decide whether to contribute or not to selfishly maximize their own utilities, is such a \"best contribution\" mechanism, M_B, adequate to implement an outcome that is optimal for the mechanism designer? We first show that in settings where a contribution's value is determined primarily by an agent's expertise, and agents only strategically choose whether to contribute or not, contests can implement optimal outcomes: for any reasonable objective, the rewards for the best and remaining contributions in M_B can always be chosen so that the outcome in the unique symmetric equilibrium of M_B maximizes the mechanism designer's utility. We also show how the mechanism designer can learn these optimal rewards when she does not know the parameters of the agents' utilities, as might be the case in practice. We next consider settings where a contribution's value depends on both the contributor's expertise as well as her effort, and agents endogenously choose how much effort to exert in addition to deciding whether to contribute. Here, we show that optimal outcomes can never be implemented by contests if the system can rank the qualities of contributions perfectly. However, if there is noise in the contributions' rankings, then the mechanism designer can again induce agents to follow strategies that maximize his utility. Thus imperfect rankings can actually help achieve implementability of optimal outcomes when effort is endogenous and influences quality.",
"An increasingly common feature of online communities and social media sites is a mechanism for rewarding user achievements based on a system of badges. Badges are given to users for particular contributions to a site, such as performing a certain number of actions of a given type. They have been employed in many domains, including news sites like the Huffington Post, educational sites like Khan Academy, and knowledge-creation sites like Wikipedia and Stack Overflow. At the most basic level, badges serve as a summary of a user's key accomplishments; however, experience with these sites also shows that users will put in non-trivial amounts of work to achieve particular badges, and as such, badges can act as powerful incentives. Thus far, however, the incentive structures created by badges have not been well understood, making it difficult to deploy badges with an eye toward the incentives they are likely to create. In this paper, we study how badges can influence and steer user behavior on a site---leading both to increased participation and to changes in the mix of activities a user pursues on the site. We introduce a formal model for reasoning about user behavior in the presence of badges, and in particular for analyzing the ways in which badges can steer users to change their behavior. To evaluate the main predictions of our model, we study the use of badges and their effects on the widely used Stack Overflow question-answering site, and find evidence that their badges steer behavior in ways closely consistent with the predictions of our model. Finally, we investigate the problem of how to optimally place badges in order to induce particular user behaviors. Several robust design principles emerge from our framework that could potentially aid in the design of incentives for a broad range of sites."
]
} |
1607.00537 | 2474822714 | To incentivize users' participations and steer their online activities, online social networks start to provide users with various kinds of rewards for their contributions to the sites. The most frequently distributed rewards include account levels, reputation scores, different kinds of badges, and even material awards like small gifts and cash back, etc. Attracted by these rewards, users will spend more time using the network services. In this paper, we will mainly focus on "badges reward systems" but the proposed models can be applied to other reward systems as well. Badges are small icons attached to users' homepages and profiles denoting their achievements. People like to accumulate badge for various reasons and different badges can have specific values for them. Meanwhile, to get badges, they also need to exert efforts to finish the required tasks, which can lead to certain costs. To understand and model users' motivations in badge achievement activities, we will study an existing badge system launched inside a real-world online social network, Foursquare, in this paper. At the same time, to maximize users' contributions to online social networks, social network system designers need to determine the optimal badge system mechanism carefully. Badge system mechanism describes various detailed aspects of the system and can involve many parameters, e.g., categories of existing badges, number of badges available as well as the minimum contributions required to obtain the badges, which all need to be designed with meticulous investigations. Based on the model of users' badges accumulating activities, in this paper, we will also study how to design the badge system that can incentivize the maximum users' contributions to the social networks. 
| The problem studied in this paper is novel and differs from existing work on reward system analysis: (1) ``steering user behavior with badges'' @cite_12 , which studies the incentives of badges in guiding users' online activities without considering the effects of social connections among users; (2) ``social status and the design of optimal badges'' @cite_16 , which provides theoretical derivations of the optimal badge system design problem but fails to consider the game among users and the game between users and the badge system designer; and (3) ``implementing optimal outcomes in social computing: a game-theoretic approach'' @cite_13 , which uses a game-theoretic method to analyze users' motivations in getting badges but does not consider the ``badge system design'' problem. | {
"cite_N": [
"@cite_16",
"@cite_13",
"@cite_12"
],
"mid": [
"2951092054",
"2951605358",
"2148527862"
],
"abstract": [
"Many websites rely on user-generated content to provide value to consumers. These websites typically incentivize participation by awarding users badges based on their contributions. While these badges typically have no explicit value, they act as symbols of social status within a community. In this paper, we consider the design of badge mechanisms for the objective of maximizing the total contributions made to a website. Users exert costly effort to make contributions and, in return, are awarded with badges. A badge is only valued to the extent that it signals social status and thus badge valuations are determined endogenously by the number of users who earn each badge. The goal of this paper is to study the design of optimal and approximately badge mechanisms under these status valuations. We characterize badge mechanisms by whether they use a coarse partitioning scheme, i.e. awarding the same badge to many users, or use a fine partitioning scheme, i.e. awarding a unique badge to most users. We find that the optimal mechanism uses both fine partitioning and coarse partitioning. When status valuations exhibit a decreasing marginal value property, we prove that coarse partitioning is a necessary feature of any approximately optimal mechanism. Conversely, when status valuations exhibit an increasing marginal value property, we prove that fine partitioning is necessary for approximate optimality.",
"In many social computing applications such as online Q&A forums, the best contribution for each task receives some high reward, while all remaining contributions receive an identical, lower reward irrespective of their actual qualities. Suppose a mechanism designer (site owner) wishes to optimize an objective that is some function of the number and qualities of received contributions. When potential contributors are strategic agents, who decide whether to contribute or not to selfishly maximize their own utilities, is such a \"best contribution\" mechanism, M_B, adequate to implement an outcome that is optimal for the mechanism designer? We first show that in settings where a contribution's value is determined primarily by an agent's expertise, and agents only strategically choose whether to contribute or not, contests can implement optimal outcomes: for any reasonable objective, the rewards for the best and remaining contributions in M_B can always be chosen so that the outcome in the unique symmetric equilibrium of M_B maximizes the mechanism designer's utility. We also show how the mechanism designer can learn these optimal rewards when she does not know the parameters of the agents' utilities, as might be the case in practice. We next consider settings where a contribution's value depends on both the contributor's expertise as well as her effort, and agents endogenously choose how much effort to exert in addition to deciding whether to contribute. Here, we show that optimal outcomes can never be implemented by contests if the system can rank the qualities of contributions perfectly. However, if there is noise in the contributions' rankings, then the mechanism designer can again induce agents to follow strategies that maximize his utility. Thus imperfect rankings can actually help achieve implementability of optimal outcomes when effort is endogenous and influences quality.",
"An increasingly common feature of online communities and social media sites is a mechanism for rewarding user achievements based on a system of badges. Badges are given to users for particular contributions to a site, such as performing a certain number of actions of a given type. They have been employed in many domains, including news sites like the Huffington Post, educational sites like Khan Academy, and knowledge-creation sites like Wikipedia and Stack Overflow. At the most basic level, badges serve as a summary of a user's key accomplishments; however, experience with these sites also shows that users will put in non-trivial amounts of work to achieve particular badges, and as such, badges can act as powerful incentives. Thus far, however, the incentive structures created by badges have not been well understood, making it difficult to deploy badges with an eye toward the incentives they are likely to create. In this paper, we study how badges can influence and steer user behavior on a site---leading both to increased participation and to changes in the mix of activities a user pursues on the site. We introduce a formal model for reasoning about user behavior in the presence of badges, and in particular for analyzing the ways in which badges can steer users to change their behavior. To evaluate the main predictions of our model, we study the use of badges and their effects on the widely used Stack Overflow question-answering site, and find evidence that their badges steer behavior in ways closely consistent with the predictions of our model. Finally, we investigate the problem of how to optimally place badges in order to induce particular user behaviors. Several robust design principles emerge from our framework that could potentially aid in the design of incentives for a broad range of sites."
]
} |
1607.00570 | 2462891382 | We create text representations by weighing word embeddings using idf information. A novel median-based loss is designed to mitigate the negative effect of outliers. A dataset of semantically related textual pairs from Wikipedia and Twitter is made. Our method outperforms all word embedding baselines in a semantic similarity task. Our method is out-of-the-box and thus requires no retraining in different contexts. Short text messages such as tweets are very noisy and sparse in their use of vocabulary. Traditional textual representations, such as tf-idf, have difficulty grasping the semantic meaning of such texts, which is important in applications such as event detection, opinion mining, news recommendation, etc. We constructed a method based on semantic word embeddings and frequency information to arrive at low-dimensional representations for short texts designed to capture semantic similarity. For this purpose we designed a weight-based model and a learning procedure based on a novel median-based loss function. This paper discusses the details of our model and the optimization methods, together with the experimental results on both Wikipedia and Twitter data. We find that our method outperforms the baseline approaches in the experiments, and that it generalizes well on different word embeddings without retraining. Our method is therefore capable of retaining most of the semantic information in the text, and is applicable out-of-the-box. | In this work we use so-called word embeddings as a basic building block to construct text representations. Such an embedding is a distributed vector representation of a single word in a fixed-dimensional semantic space, as opposed to term tf-idf vectors, in which a word is represented by a one-hot vector ( @cite_6 @cite_25 ). 
A word's term frequency (tf) is the number of times the word occurs in the considered document, and a word's document frequency (df) is the number of documents in the considered corpus that contain that word. Its (smoothed) inverse document frequency (idf) is defined as @math , in which @math is the number of documents in the corpus ( @cite_10 ). A tf-idf-based similarity measure is based on exact word overlap. As texts become smaller in length, however, the probability of having words in common decreases. Furthermore, these measures ignore synonyms and any semantic relatedness between different words, and are prone to negative effects of homonyms. | {
"cite_N": [
"@cite_10",
"@cite_25",
"@cite_6"
],
"mid": [
"1532325895",
"",
"1592081640"
],
"abstract": [
"Class-tested and coherent, this groundbreaking new textbook teaches web-era information retrieval, including web search and the related areas of text classification and text clustering from basic concepts. Written from a computer science perspective by three leading experts in the field, it gives an up-to-date treatment of all aspects of the design and implementation of systems for gathering, indexing, and searching documents; methods for evaluating systems; and an introduction to the use of machine learning methods on text collections. All the important ideas are explained using examples and figures, making it perfect for introductory courses in information retrieval for advanced undergraduates and graduate students in computer science. Based on feedback from extensive classroom experience, the book has been carefully structured in order to make teaching more natural and effective. Although originally designed as the primary text for a graduate or advanced undergraduate course in information retrieval, the book will also create a buzz for researchers and professionals alike.",
"",
"The ability to accurately judge the similarity between natural language sentences is critical to the performance of several applications such as text mining, question answering, and text summarization. Given two sentences, an effective similarity measure should be able to determine whether the sentences are semantically equivalent or not, taking into account the variability of natural language expression. That is, the correct similarity judgment should be made even if the sentences do not share similar surface form. In this work, we evaluate fourteen existing text similarity measures which have been used to calculate similarity score between sentences in many text applications. The evaluation is conducted on three different data sets, TREC9 question variants, Microsoft Research paraphrase corpus, and the third recognizing textual entailment data set."
]
} |
1607.00570 | 2462891382 | We create text representations by weighing word embeddings using idf information. A novel median-based loss is designed to mitigate the negative effect of outliers. A dataset of semantically related textual pairs from Wikipedia and Twitter is made. Our method outperforms all word embedding baselines in a semantic similarity task. Our method is out-of-the-box and thus requires no retraining in different contexts. Short text messages such as tweets are very noisy and sparse in their use of vocabulary. Traditional textual representations, such as tf-idf, have difficulty grasping the semantic meaning of such texts, which is important in applications such as event detection, opinion mining, news recommendation, etc. We constructed a method based on semantic word embeddings and frequency information to arrive at low-dimensional representations for short texts designed to capture semantic similarity. For this purpose we designed a weight-based model and a learning procedure based on a novel median-based loss function. This paper discusses the details of our model and the optimization methods, together with the experimental results on both Wikipedia and Twitter data. We find that our method outperforms the baseline approaches in the experiments, and that it generalizes well on different word embeddings without retraining. Our method is therefore capable of retaining most of the semantic information in the text, and is applicable out-of-the-box. | Paragraph2vec is another method, inspired by the Skip-gram algorithm, to derive sentence vectors ( @cite_12 ). The technique requires the user to train vectors for frequently occurring word groups. The method, however, is not usable in a streaming or on-the-fly fashion, since it requires retraining for unseen word groups at test time. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2949547296"
],
"abstract": [
"Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite their popularity, bag-of-words features have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, \"powerful,\" \"strong\" and \"Paris\" are equally distant. In this paper, we propose Paragraph Vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. Our algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that Paragraph Vectors outperform bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks."
]
} |
1607.00570 | 2462891382 | We create text representations by weighing word embeddings using idf information. A novel median-based loss is designed to mitigate the negative effect of outliers. A dataset of semantically related textual pairs from Wikipedia and Twitter is made. Our method outperforms all word embedding baselines in a semantic similarity task. Our method is out-of-the-box and thus requires no retraining in different contexts. Short text messages such as tweets are very noisy and sparse in their use of vocabulary. Traditional textual representations, such as tf-idf, have difficulty grasping the semantic meaning of such texts, which is important in applications such as event detection, opinion mining, news recommendation, etc. We constructed a method based on semantic word embeddings and frequency information to arrive at low-dimensional representations for short texts designed to capture semantic similarity. For this purpose we designed a weight-based model and a learning procedure based on a novel median-based loss function. This paper discusses the details of our model and the optimization methods, together with the experimental results on both Wikipedia and Twitter data. We find that our method outperforms the baseline approaches in the experiments, and that it generalizes well on different word embeddings without retraining. Our method is therefore capable of retaining most of the semantic information in the text, and is applicable out-of-the-box. | Related to the concepts of semantic similarity and weighted embedding aggregation, there is extensive literature. @cite_3 calculate a similarity metric between documents based on the travel distance of word embeddings from one document to another. We, on the other hand, will derive vectors for the documents themselves. @cite_22 learn semantic features for every sentence in the dataset based on a saliency weighted network for which the BM25 algorithm is used. 
However, the features are learned for every sentence prior to test time, and are therefore not applicable in a real-time streaming context. Finally, @cite_2 calculate a cosine similarity matrix between the words of two sentences that are sorted based on their idf value, which they use as a feature vector for an MLP. Their approach is similar to our work in the sense that the authors use idf information to rescale term contribution. Their primary goal, however, is calculating semantic similarity instead of learning a sentence representation. In fact, the authors discard the original word embeddings entirely and only use the calculated cosine similarity features. | {
"cite_N": [
"@cite_22",
"@cite_3",
"@cite_2"
],
"mid": [
"2028742638",
"658020064",
"185445000"
],
"abstract": [
"Determining semantic similarity between texts is important in many tasks in information retrieval such as search, query suggestion, automatic summarization and image finding. Many approaches have been suggested, based on lexical matching, handcrafted patterns, syntactic parse trees, external sources of structured semantic knowledge and distributional semantics. However, lexical features, like string matching, do not capture semantic similarity beyond a trivial level. Furthermore, handcrafted patterns and external sources of structured semantic knowledge cannot be assumed to be available in all circumstances and for all domains. Lastly, approaches depending on parse trees are restricted to syntactically well-formed texts, typically of one sentence in length. We investigate whether determining short text similarity is possible using only semantic features---where by semantic we mean, pertaining to a representation of meaning---rather than relying on similarity in lexical or syntactic representations. We use word embeddings, vector representations of terms, computed from unlabelled data, that represent terms in a semantic space in which proximity of vectors can be interpreted as semantic similarity. We propose to go from word-level to text-level semantics by combining insights from methods based on external sources of semantic knowledge with word embeddings. A novel feature of our approach is that an arbitrary number of word embedding sets can be incorporated. We derive multiple types of meta-features from the comparison of the word vectors for short text pairs, and from the vector means of their respective word embeddings. The features representing labelled short text pairs are used to train a supervised learning algorithm. 
We use the trained model at testing time to predict the semantic similarity of new, unlabelled pairs of short texts. We show on a publicly available evaluation set commonly used for the task of semantic similarity that our method outperforms baseline methods that work under the same conditions.",
"We present the Word Mover's Distance (WMD), a novel distance function between text documents. Our work is based on recent results in word embeddings that learn semantically meaningful representations for words from local cooccurrences in sentences. The WMD distance measures the dissimilarity between two text documents as the minimum amount of distance that the embedded words of one document need to \"travel\" to reach the embedded words of another document. We show that this distance metric can be cast as an instance of the Earth Mover's Distance, a well studied transportation problem for which several highly efficient solvers have been developed. Our metric has no hyperparameters and is straight-forward to implement. Further, we demonstrate on eight real world document classification data sets, in comparison with seven state-of-the-art baselines, that the WMD metric leads to unprecedented low k-nearest neighbor document classification error rates.",
"Semantic matching is widely used in many natural language processing tasks. In this paper, we focus on the semantic matching between short texts and design a model to generate deep features, which describe the semantic relevance between short “text object”. Furthermore, we design a method to combine shallow features of short texts (i.e., LSI, VSM and some other handcraft features) with deep features of short texts (i.e., word embedding matching of short text). Finally, a ranking model (i.e., RankSVM) is used to make the final judgment. In order to evaluate our method, we implement our method on the task of matching posts and responses. Results of experiments show that our method achieves the state-of-the-art performance by using shallow features and deep features."
]
} |
1607.00137 | 2460464610 | Face images captured in heterogeneous environments, e.g., sketches generated by the artists or composite-generation software, photos taken by common cameras and infrared images captured by corresponding infrared imaging devices, usually subject to large texture (i.e., style) differences. This results in heavily degraded performance of conventional face recognition methods in comparison with the performance on images captured in homogeneous environments. In this paper, we propose a novel sparse graphical representation based discriminant analysis (SGR-DA) approach to address aforementioned face recognition in heterogeneous scenarios. An adaptive sparse graphical representation scheme is designed to represent heterogeneous face images, where a Markov networks model is constructed to generate adaptive sparse vectors. To handle the complex facial structure and further improve the discriminability, a spatial partition-based discriminant analysis framework is presented to refine the adaptive sparse vectors for face matching. We conducted experiments on six commonly used heterogeneous face datasets and experimental results illustrate that our proposed SGR-DA approach achieves superior performance in comparison with state-of-the-art methods. | Common subspace projection-based methods attempt to project heterogeneous face images into a latent subspace where the heterogeneity is minimized. It began with @cite_4 through a common discriminant feature extraction (CDFE) approach. @cite_8 proposed to use the correlational regression method (canonical correlation analysis) to map NIR and VIS images into a common feature space. @cite_11 proposed a coupled spectral regression (CSR) based method for NIR-VIS matching, which was later improved by learning mappings from both modalities @cite_36 . The partial least squares (PLS) algorithm was exploited by @cite_9 to learn the linear mapping transformations between face images in different modalities. 
@cite_32 took both the positive and negative constraints into consideration during the metric learning process, and proposed a cross modal metric learning (CMML) method for heterogeneous face matching. A multi-view discriminant analysis (MvDA) method was proposed by @cite_13 , which exploited both inter-view and intra-view correlations of heterogeneous face images. Inspired by unsupervised deep learning algorithms, @cite_33 utilized Restricted Boltzmann Machines to learn a shared representation for HFR. | {
"cite_N": [
"@cite_4",
"@cite_33",
"@cite_8",
"@cite_36",
"@cite_9",
"@cite_32",
"@cite_13",
"@cite_11"
],
"mid": [
"",
"2953381100",
"1500607227",
"1989341357",
"",
"2612133093",
"",
"2154671648"
],
"abstract": [
"",
"After intensive research, heterogeneous face recognition is still a challenging problem. The main difficulties are owing to the complex relationship between heterogeneous face image spaces. The heterogeneity is always tightly coupled with other variations, which makes the relationship of heterogeneous face images highly nonlinear. Many excellent methods have been proposed to model the nonlinear relationship, but they are apt to overfit to the training set, due to limited samples. Inspired by the unsupervised algorithms in deep learning, this paper proposes a novel framework for heterogeneous face recognition. We first extract Gabor features at some localized facial points, and then use Restricted Boltzmann Machines (RBMs) to learn a shared representation locally to remove the heterogeneity around each facial point. Finally, the shared representations of local RBMs are connected together and processed by PCA. Two problems (Sketch-Photo and NIR-VIS) and three databases are selected to evaluate the proposed method. For the Sketch-Photo problem, we obtain perfect results on the CUFS database. For the NIR-VIS problem, we produce new state-of-the-art performance on the CASIA HFB and NIR-VIS 2.0 databases.",
"In many applications, such as E-Passport and driver's license, the enrollment of face templates is done using visible light (VIS) face images. Such images are normally acquired in a controlled environment where the lighting is approximately frontal. However, authentication is done in variable lighting conditions. Matching of faces in VIS images taken in different lighting conditions is still a big challenge. A recent development in near infrared (NIR) image based face recognition [1] has well overcome the difficulty arising from lighting changes. However, it requires that enrollment face images be acquired using NIR as well. In this paper, we present a new problem, that of matching a face in an NIR image against one in a VIS image, and propose a solution to it. The work aims to develop a new solution for meeting the accuracy requirement of face-based biometric recognition, by taking advantage of the recent NIR face technology while allowing the use of existing VIS face photos as gallery templates. Face recognition is done by matching an NIR probe face against a VIS gallery face. Based on an analysis of properties of NIR and VIS face images, we propose a learning-based approach for the different modality matching. A mechanism of correlation between NIR and VIS faces is learned from NIR → VIS face pairs, and the learned correlation is used to evaluate similarity between an NIR face and a VIS face. We provide preliminary results of NIR → VIS face matching for recognition under different illumination conditions. The results demonstrate advantages of NIR → VIS matching over VIS → VIS matching.",
"Coupled spectral regression (CSR) is an effective framework for heterogeneous face recognition (e.g., visual light (VIS) vs. near infrared (NIR)). CSR aims to learn different projections for different face modalities respectively to find a common subspace where the samples of different modalities from the same class are as close as possible. In original CSR, the projection for one modality is supposed to be represented by the data from the same modality. In this paper, we show that not only the samples of the same modality, but also all samples from different modalities are useful to learn the projection. Based on this assumption, we propose an improved coupled spectral regression (ICSR) approach which assumes the projections are linearly represented by all samples. Moreover, in order to improve the generalization capability, the locality information among samples is considered during the ICSR learning. Experiments on PIE, Multi-PIE and CASIA-HFB face database show that the proposed ICSR enhances the heterogeneous face recognition performance compared with the original CSR and validates the effectiveness of the proposed method.",
"",
"This paper proposes a new approach for Cross Modal Matching, i.e. the matching of patterns represented in different modalities, when pairs of same/different data are available for training (e.g. faces of same/different persons). In this situation, standard approaches such as Partial Least Squares (PLS) or Canonical Correlation Analysis (CCA) map the data into a common latent space that maximizes the covariance, using the information brought by positive pairs only. Our contribution is a new metric learning algorithm, which alleviates this limitation by considering both positive and negative constraints and using them efficiently to learn a discriminative latent space. The contribution is validated on several datasets for which the proposed approach consistently outperforms PLS/CCA as well as more recent discriminative approaches.",
"",
"Face recognition algorithms need to deal with variable lighting conditions. Near infrared (NIR) image based face recognition technology has been proposed to effectively overcome this difficulty. However, it requires that the enrolled face images be captured using NIR images whereas many applications require visual (VIS) images for enrollment templates. To take advantage of NIR face images for illumination-invariant face recognition and allow the use of VIS face images for enrollment, we encounter a new face image pattern recognition problem, that is, heterogeneous face matching between NIR versus VIS faces. In this paper, we present a subspace learning framework named Coupled Spectral Regression (CSR) to solve this challenge problem of coupling the two types of face images and matching between them. CSR first models the properties of different types of data separately and then learns two associated projections to project heterogeneous data (e.g. VIS and NIR) respectively into a discriminative common subspace in which classification is finally performed. Compared to other existing methods, CSR is computational efficient, benefiting from the efficiency of spectral regression and has better generalization performance. Experimental results on VIS-NIR face database show that the proposed CSR method significantly outperforms the existing methods."
]
} |
1607.00198 | 2467585580 | Most state of the art approaches for Named Entity Recognition rely on hand crafted features and annotated corpora. Recently Neural network based models have been proposed which do not require handcrafted features but still require annotated corpora. However, such annotated corpora may not be available for many languages. In this paper, we propose a neural network based model which allows sharing the decoder as well as word and character level parameters between two languages thereby allowing a resource fortunate language to aid a resource deprived language. Specifically, we focus on the case when limited annotated corpora is available in one language ( @math ) and abundant annotated corpora is available in another language ( @math ). Sharing the network architecture and parameters between @math and @math leads to improved performance in @math . Further, our approach does not require any hand crafted features but instead directly learns meaningful feature representations from the training data itself. We experiment with 4 language pairs and show that indeed in a resource constrained setup (lesser annotated corpora), a model jointly trained with data from another language performs better than a model trained only on the limited corpora in one language. | This idea was further developed by @cite_11 @cite_27 to include character level information in addition to word level information. They used Convolutional Neural Networks (CNNs) with fixed filter width to extract relevant character level information. The combined character features and word embeddings were fed to a time delay neural network as in and used for Spanish and Portuguese NER. | {
"cite_N": [
"@cite_27",
"@cite_11"
],
"mid": [
"1951325712",
"2101609803"
],
"abstract": [
"Most state-of-the-art named entity recognition (NER) systems rely on handcrafted features and on the output of other NLP tasks such as part-of-speech (POS) tagging and text chunking. In this work we propose a language-independent NER system that uses automatically learned features only. Our approach is based on the CharWNN deep neural network, which uses word-level and character-level representations (embeddings) to perform sequential classification. We perform an extensive number of experiments using two annotated corpora in two different languages: HAREM I corpus, which contains texts in Portuguese; and the SPA CoNLL-2002 corpus, which contains texts in Spanish. Our experimental results shed light on the contribution of neural character embeddings for NER. Moreover, we demonstrate that the same neural network which has been successfully applied to POS tagging can also achieve state-of-the-art results for language-independent NER, using the same hyperparameters, and without any handcrafted features. For the HAREM I corpus, CharWNN outperforms the state-of-the-art system by 7.9 points in the F1-score for the total scenario (ten NE classes), and by 7.2 points in the F1 for the selective scenario (five NE classes).",
"Distributed word representations have recently been proven to be an invaluable resource for NLP. These representations are normally learned using neural networks and capture syntactic and semantic information about words. Information about word morphology and shape is normally ignored when learning word representations. However, for tasks like part-of-speech tagging, intra-word information is extremely useful, specially when dealing with morphologically rich languages. In this paper, we propose a deep neural network that learns character-level representation of words and associate them with usual word representations to perform POS tagging. Using the proposed approach, while avoiding the use of any handcrafted feature, we produce state-of-the-art POS taggers for two languages: English, with 97.32 accuracy on the Penn Treebank WSJ corpus; and Portuguese, with 97.47 accuracy on the Mac-Morpho corpus, where the latter represents an error reduction of 12.2 on the best previous known result."
]
} |
1607.00198 | 2467585580 | Most state of the art approaches for Named Entity Recognition rely on hand crafted features and annotated corpora. Recently Neural network based models have been proposed which do not require handcrafted features but still require annotated corpora. However, such annotated corpora may not be available for many languages. In this paper, we propose a neural network based model which allows sharing the decoder as well as word and character level parameters between two languages thereby allowing a resource fortunate language to aid a resource deprived language. Specifically, we focus on the case when limited annotated corpora is available in one language ( @math ) and abundant annotated corpora is available in another language ( @math ). Sharing the network architecture and parameters between @math and @math leads to improved performance in @math . Further, our approach does not require any hand crafted features but instead directly learns meaningful feature representations from the training data itself. We experiment with 4 language pairs and show that indeed in a resource constrained setup (lesser annotated corpora), a model jointly trained with data from another language performs better than a model trained only on the limited corpora in one language. | There are a few works which use Bidirectional Long Short Term Memory networks (Bi-LSTMs) @cite_12 for encoding word sequence information for sequence tagging. For example, some works use LSTMs for encoding word sequences and then use CRFs for decoding tag sequences. Others use a combination of Bi-LSTMs with CNNs for NER; the decoder is still a CRF, trained to maximize the likelihood of the entire sequence. Both of these approaches also use some handcrafted features. Very recently, Hierarchical Bi-LSTMs were proposed as an alternative to CNN-Bi-LSTMs, wherein a character-level Bi-LSTM is followed by a word-level Bi-LSTM, thus forming a hierarchy of LSTMs, again with a CRF at the output layer.
This model was tested on English, Spanish, Dutch, and German, and reported state-of-the-art results among systems that use no handcrafted feature engineering. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2131774270"
],
"abstract": [
"In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported."
]
} |
1607.00198 | 2467585580 | Most state of the art approaches for Named Entity Recognition rely on hand crafted features and annotated corpora. Recently Neural network based models have been proposed which do not require handcrafted features but still require annotated corpora. However, such annotated corpora may not be available for many languages. In this paper, we propose a neural network based model which allows sharing the decoder as well as word and character level parameters between two languages thereby allowing a resource fortunate language to aid a resource deprived language. Specifically, we focus on the case when limited annotated corpora is available in one language ( @math ) and abundant annotated corpora is available in another language ( @math ). Sharing the network architecture and parameters between @math and @math leads to improved performance in @math . Further, our approach does not require any hand crafted features but instead directly learns meaningful feature representations from the training data itself. We experiment with 4 language pairs and show that indeed in a resource constrained setup (lesser annotated corpora), a model jointly trained with data from another language performs better than a model trained only on the limited corpora in one language. | Multilingual training of NER systems has been explored dating back to @cite_16 . Usually these systems train a language-dependent NER tagger by (i) enforcing tag constraints along the aligned words in parallel tagged corpora @cite_1 @cite_3 or untagged parallel corpora @cite_21 @cite_25 @cite_15 and/or (ii) using cross-lingual features @cite_3 @cite_2 @cite_8 . | {
"cite_N": [
"@cite_8",
"@cite_21",
"@cite_1",
"@cite_3",
"@cite_2",
"@cite_15",
"@cite_16",
"@cite_25"
],
"mid": [
"2251435463",
"800621058",
"2119593077",
"2047995599",
"1818534184",
"2141678623",
"2143345705",
"2951784689"
],
"abstract": [
"Different languages contain complementary cues about entities, which can be used to improve Named Entity Recognition (NER) systems. We propose a method that formulates the problem of exploring such signals on unannotated bilingual text as a simple Integer Linear Program, which encourages entity tags to agree via bilingual constraints. Bilingual NER experiments on the large OntoNotes 4.0 Chinese-English corpus show that the proposed method can improve strong baselines for both Chinese and English. In particular, Chinese performance improves by over 5 absolute F1 score. We can then annotate a large amount of bilingual text (80k sentence pairs) using our method, and add it as uptraining data to the original monolingual NER training corpus. The Chinese model retrained on this new combined dataset outperforms the strong baseline by over 3 F1 score.",
"Most semi-supervised methods in Natural Language Processing capitalize on unannotated resources in a single language; however, information can be gained from using parallel resources in more than one language, since translations of the same utterance in different languages can help to disambiguate each other. We demonstrate a method that makes effective use of vast amounts of bilingual text (a.k.a. bitext) to improve monolingual systems. We propose a factored probabilistic sequence model that encourages both cross-language and intra-document consistency. A simple Gibbs sampling algorithm is introduced for performing approximate inference. Experiments on English-Chinese Named Entity Recognition (NER) using the OntoNotes dataset demonstrate that our method is significantly more accurate than state-of-the-art monolingual CRF models in a bilingual test setting. Our model also improves on previous work by (2010), achieving a relative error reduction of 10.8 and 4.5 in Chinese and English, respectively. Furthermore, by annotating a moderate amount of unlabeled bi-text with our bilingual model, and using the tagged data for uptraining, we achieve a 9.2 error reduction in Chinese over the state-of-the-art Stanford monolingual NER system.",
"We observe that (1) how a given named entity (NE) is translated (i.e., either semantically or phonetically) depends greatly on its associated entity type, and (2) entities within an aligned pair should share the same type. Also, (3) those initially detected NEs are anchors, whose information should be used to give certainty scores when selecting candidates. From this basis, an integrated model is thus proposed in this paper to jointly identify and align bilingual named entities between Chinese and English. It adopts a new mapping type ratio feature (which is the proportion of NE internal tokens that are semantically translated), enforces an entity type consistency constraint, and utilizes additional monolingual candidate certainty factors (based on those NE anchors). The experiments show that this novel approach has substantially raised the type-sensitive F-score of identified NE-pairs from 68.4 to 81.7 (42.1 F-score imperfection reduction) in our Chinese-English NE alignment task.",
"Traditional isolated monolingual name taggers tend to yield inconsistent results across two languages. In this paper, we propose two novel approaches to jointly and consistently extract names from parallel corpora. The first approach uses standard linear-chain Conditional Random Fields (CRFs) as the learning framework, incorporating cross-lingual features propagated between two languages. The second approach is based on a joint CRFs model to jointly decode sentence pairs, incorporating bilingual factors based on word alignment. Experiments on Chinese-English parallel corpora demonstrated that the proposed methods significantly outperformed monolingual name taggers, were robust to automatic alignment noise and achieved state-of-the-art performance. With only 20 of the training data, our proposed methods can already achieve better performance compared to the baseline learned from the whole training set.1",
"It has been established that incorporating word cluster features derived from large unlabeled corpora can significantly improve prediction of linguistic structure. While previous work has focused primarily on English, we extend these results to other languages along two dimensions. First, we show that these results hold true for a number of languages across families. Second, and more interestingly, we provide an algorithm for inducing cross-lingual clusters and we show that features derived from these clusters significantly improve the accuracy of cross-lingual structure prediction. Specifically, we show that by augmenting direct-transfer systems with cross-lingual cluster features, the relative error of delexicalized dependency parsers, trained on English treebanks and transferred to foreign languages, can be reduced by up to 13 . When applying the same method to direct transfer of named-entity recognizers, we observe relative improvements of up to 26 .",
"Translated bi-texts contain complementary language cues, and previous work on Named Entity Recognition (NER) has demonstrated improvements in performance over monolingual taggers by promoting agreement of tagging decisions between the two languages. However, most previous approaches to bilingual tagging assume word alignments are given as fixed input, which can cause cascading errors. We observe that NER label information can be used to correct alignment mistakes, and present a graphical model that performs bilingual NER tagging jointly with word alignment, by combining two monolingual tagging models with two unidirectional alignment models. We introduce additional cross-lingual edge factors that encourage agreements between tagging and alignment decisions. We design a dual decomposition inference algorithm to perform joint decoding over the combined alignment and NER output space. Experiments on the OntoNotes dataset demonstrate that our method yields significant improvements in both NER and word alignment over state-of-the-art monolingual baselines.",
"Named entities create serious problems for state-of-the-art commercial machine translation (MT) systems and often cause translation failures beyond the local context, affecting both the overall morphosyntactic well-formedness of sentences and word sense disambiguation in the source text. We report on the results of an experiment in which MT input was processed using output from the named entity recognition module of Sheffield's GATE information extraction (IE) system. The gain in MT quality indicates that specific components of IE technology could boost the performance of current MT systems.",
"We consider a multilingual weakly supervised learning scenario where knowledge from annotated corpora in a resource-rich language is transferred via bitext to guide the learning in other languages. Past approaches project labels across bitext and use them as features or gold labels for training. We propose a new method that projects model expectations rather than labels, which facilities transfer of model uncertainty across language boundaries. We encode expectations as constraints and train a discriminative CRF model using Generalized Expectation Criteria (Mann and McCallum, 2010). Evaluated on standard Chinese-English and German-English NER datasets, our method demonstrates F1 scores of 64 and 60 when no labeled data is used. Attaining the same accuracy with supervised CRFs requires 12k and 1.5k labeled sentences. Furthermore, when combined with labeled examples, our method yields significant improvements over state-of-the-art supervised methods, achieving best reported numbers to date on Chinese OntoNotes and German CoNLL-03 datasets."
]
} |
1607.00354 | 2951731843 | Affordances have been introduced in literature as action opportunities that objects offer, and used in robotics to semantically represent their interconnection. However, when considering an environment instead of an object, the problem becomes more complex due to the dynamism of its state. To tackle this issue, we introduce the concept of Spatio-Temporal Affordances (STA) and Spatio-Temporal Affordance Map (STAM). Using this formalism, we encode action semantics related to the environment to improve task execution capabilities of an autonomous robot. We experimentally validate our approach to support the execution of robot tasks by showing that affordances encode accurate semantics of the environment. | All the aforementioned contributions exploit spatial affordances to model a single task and to improve robot skills in performing that specific task. In this work, we introduce a general architecture that makes it possible to model different types of spatial affordances simultaneously. To this end, we consider the remarkable contribution of @cite_6 . The authors propose a layered costmap to encode different features of the environment in order to support navigation. Their architecture makes it possible to formalize each layer independently, which is beneficial in the development of robotic systems. We borrow this paradigm and propose a modular approach to representing affordances. Additionally, we generalize our framework by not forcing our system to represent only navigability tasks. As shown in , we propose a system to semantically annotate the space of the environment in order to support manifold high-level tasks -- of which navigability is just an instance. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2021096542"
],
"abstract": [
"Many navigation systems, including the ubiquitous ROS navigation stack, perform path-planning on a single costmap, in which the majority of information is stored in a single grid. This approach is quite successful at generating collision-free paths of minimal length, but it can struggle in dynamic, people-filled environments when the values in the costmap expand beyond occupied or free space. We have created and implemented a new method called layered costmaps, which work by separating the processing of costmap data into semantically-separated layers. Each layer tracks one type of obstacle or constraint, and then modifies a master costmap which is used for the path planning. We show how the algorithm can be integrated with the open-source ROS navigation stack, and how our approach is easier to fine-tune to specific environmental contexts than the existing monolithic one. Our design also results in faster path planning in practical use, and exhibits a cleaner separation of concerns that the original architecture. The new algorithm also makes it possible to represent complex cost values in order to create navigation behavior for a wide range of contexts."
]
} |
1607.00291 | 2951081301 | Tensor computations--in particular tensor contraction (TC)--are important kernels in many scientific computing applications. Due to the fundamental similarity of TC to matrix multiplication (MM) and to the availability of optimized implementations such as the BLAS, tensor operations have traditionally been implemented in terms of BLAS operations, incurring both a performance and a storage overhead. Instead, we implement TC using the flexible BLIS framework, which allows for transposition (reshaping) of the tensor to be fused with internal partitioning and packing operations, requiring no explicit transposition operations or additional workspace. This implementation, TBLIS, achieves performance approaching that of MM, and in some cases considerably higher than that of traditional TC. Our implementation supports multithreading using an approach identical to that used for MM in BLIS, with similar performance characteristics. The complexity of managing tensor-to-matrix transformations is also handled automatically in our approach, greatly simplifying its use in scientific applications. | As mentioned previously, there are a variety of tensor-related packages available for popular programming platforms such as NumPy @cite_29 for Python, the Tensor Toolbox @cite_10 in MATLAB, the template libraries Eigen @cite_14 and Blitz++ @cite_34 in C++, among many others. These libraries provide a simple and intuitive interface for creating and manipulating tensors, for example providing traditional array-style access to individual elements, managing transposition and reshaping, etc. Tensor contraction facilities are provided in many of these libraries, either using explicit (although sometimes compiler-generated) loop-based code, or using the TTDT approach. In some applications a non-high performance library is sufficient, but the deficiencies of the TTDT approach are highly relevant for high-performance code. 
Additionally, the quality of, and interfaces provided for, tensor operations vary significantly from library to library. It is our hope that the high-performance and self-contained (since it does not require large amounts of workspace) implementation provided by BSMTC can offer a standard level of performance and functionality. | {
"cite_N": [
"@cite_29",
"@cite_14",
"@cite_10",
"@cite_34"
],
"mid": [
"2146292423",
"",
"2037271374",
"1581501197"
],
"abstract": [
"In the Python world, NumPy arrays are the standard representation for numerical data and enable efficient implementation of numerical computations in a high-level language. As this effort shows, NumPy performance can be improved through three techniques: vectorizing calculations, avoiding copying data in memory, and minimizing operation counts.",
"",
"Tensors (also known as multidimensional arrays or N-way arrays) are used in a variety of applications ranging from chemometrics to psychometrics. We describe four MATLAB classes for tensor manipulations that can be used for fast algorithm prototyping. The tensor class extends the functionality of MATLAB's multidimensional arrays by supporting additional operations such as tensor multiplication. The tensorlaslmatrix class supports the “matricization” of a tensor, that is, the conversion of a tensor to a matrix (and vice versa), a commonly used operation in many algorithms. Two additional classes represent tensors stored in decomposed formats: cpltensor and tuckerltensor. We describe all of these classes and then demonstrate their use by showing how to implement several tensor algorithms that have appeared in the literature.",
"The Blitz++ library provides numeric arrays for C++ with efficiency that rivals Fortran, without any language extensions. Blitz++ has features unavailable in Fortran 90 95, such as arbitrary transpose operations, array renaming, tensor notation, partial reductions, multi-component arrays and stencil operators. The library handles parsing and analysis of array expressions on its own using the expression templates technique, and performs optimizations (such as loop transformations) which have until now been the responsibility of compilers."
]
} |
1607.00291 | 2951081301 | Tensor computations--in particular tensor contraction (TC)--are important kernels in many scientific computing applications. Due to the fundamental similarity of TC to matrix multiplication (MM) and to the availability of optimized implementations such as the BLAS, tensor operations have traditionally been implemented in terms of BLAS operations, incurring both a performance and a storage overhead. Instead, we implement TC using the flexible BLIS framework, which allows for transposition (reshaping) of the tensor to be fused with internal partitioning and packing operations, requiring no explicit transposition operations or additional workspace. This implementation, TBLIS, achieves performance approaching that of MM, and in some cases considerably higher than that of traditional TC. Our implementation supports multithreading using an approach identical to that used for MM in BLIS, with similar performance characteristics. The complexity of managing tensor-to-matrix transformations is also handled automatically in our approach, greatly simplifying its use in scientific applications. | Within specific scientific domains, high-performance libraries have appeared that implement the TTDT approach. For example, in quantum chemistry there are software packages such as the Tensor Contraction Engine (TCE) @cite_7 , Cyclops Tensor Framework @cite_5 , libtensor @cite_35 , and TiledArray @cite_13 @cite_1 that provide general tensor and in some cases specific quantum chemistry-related functionality. Many of these libraries could benefit directly from a native tensor contraction kernel since they focus primarily on distributed-memory algorithms and tensor blocking for algorithmic and space efficiency. Other approaches such as Direct Product Decomposition (DPD) packing @cite_3 are specifically focused on improving the efficiency of the TTDT approach, but could also be used on top of the BSMTC algorithm. | {
"cite_N": [
"@cite_35",
"@cite_7",
"@cite_1",
"@cite_3",
"@cite_5",
"@cite_13"
],
"mid": [
"2143119605",
"2093135480",
"872462367",
"2026858719",
"",
"2200200690"
],
"abstract": [
"This article presents an open-source object-oriented C11 library of classes and routines to perform tensor algebra. The primary purpose of the library is to enable postHartree–Fock electronic structure methods; however, the code is general enough to be applicable in other areas of physical and computational sciences. The library supports tensors of arbitrary order (dimensionality), size, and symmetry. Implemented data structures and algorithms operate on large tensors by splitting them into smaller blocks, storing them both in core memory and in files on disk, and applying divide-and-conquer-type parallel algorithms to perform tensor algebra. The library offers a set of general tensor symmetry algorithms and a full implementation of tensor symmetries typically found in electronic structure theory: permutational, spin, and molecular point group symmetry. The Q-Chem electronic structure software uses this library to drive coupled-cluster, equation-of-motion, and algebraicdiagrammatic construction methods. V C 2013 Wiley Periodicals, Inc.",
"We have developed a symbolic manipulation program and program generator (tensor contraction engine or TCE) that abstracts and automates the time-consuming, error-prone processes of deriving the working equations of a well-defined model of second-quantized many-electron theories and synthesizing efficient parallel computer programs on the basis of these equations. Provided an ansatz of a many-electron theory model, TCE performs valid contractions of creation and annihilation operators according to Wick's theorem, consolidates identical terms, and reduces the expressions into the form of multiple tensor contractions acted upon by permutation operators. It subsequently determines the binary contraction order for each multiple tensor contraction with the minimal operation and memory cost, factorizes common binary contractions (defines intermediate tensors), and identifies reusable intermediates. The resulting ordered list of binary tensor contractions, additions, and index permutations is translated into an o...",
"Distributed-memory matrix multiplication (MM) is a key element of algorithms in many domains (machine learning, quantum physics). Conventional algorithms for dense MM rely on regular uniform data decomposition to ensure load balance. These traits conflict with the irregular structure (block-sparse or rank-sparse within blocks) that is increasingly relevant for fast methods in quantum physics. To deal with such irregular data we present a new MM algorithm based on Scalable Universal Matrix Multiplication Algorithm (SUMMA). The novel features are: (1) multiple-issue scheduling of SUMMA iterations, and (2) fine-grained task-based formulation. The latter eliminates the need for explicit internodal synchronization; with multiple-iteration scheduling this allows load imbalance due to nonuniform matrix structure. For square MM with uniform and nonuniform block sizes (the latter simulates matrices with general irregular structure) we found excellent performance in weak and strong-scaling regimes, on commodity and high-end hardware.",
"An analysis of the matrix contractions involved in many‐body perturbation theory and coupled‐cluster calculations leads to a convenient strategy for exploiting point group symmetry, by which the number of floating point operations can be reduced by as much as a factor of h2, where h is the order of the molecular point group. Contrary to a statement in the literature, the significant reduction in computation time realized in coupled‐cluster calculations which exploit symmetry is not due to nonlinearities in the equations. Rather, the savings of the fully vectorizable direct product decomposition (DPD) method outlined here is associated with individual (linear) contractions, and is therefore applicable to both linear and nonlinear coupled‐cluster models, as well as many body perturbation theory. In addition to the large reduction in floating point operations made possible by exploiting symmetry, core memory requirements are also reduced by a factor of ≊h2. Implementation of the method for both open and clos...",
"",
"A task-based formulation of Scalable Universal Matrix Multiplication Algorithm (SUMMA), a popular algorithm for matrix multiplication (MM), is applied to the multiplication of hierarchy-free, rank-structured matrices that appear in the domain of quantum chemistry (QC). The novel features of our formulation are: (1) concurrent scheduling of multiple SUMMA iterations, and (2) fine-grained task-based composition. These features make it tolerant of the load imbalance due to the irregular matrix structure and eliminate all artifactual sources of global synchronization. Scalability of iterative computation of square-root inverse of block-rank-sparse QC matrices is demonstrated; for full-rank (dense) matrices the performance of our SUMMA formulation usually exceeds that of the state-of-the-art dense MM implementations (ScaLAPACK and Cyclops Tensor Framework)."
]
} |
1607.00291 | 2951081301 | Tensor computations--in particular tensor contraction (TC)--are important kernels in many scientific computing applications. Due to the fundamental similarity of TC to matrix multiplication (MM) and to the availability of optimized implementations such as the BLAS, tensor operations have traditionally been implemented in terms of BLAS operations, incurring both a performance and a storage overhead. Instead, we implement TC using the flexible BLIS framework, which allows for transposition (reshaping) of the tensor to be fused with internal partitioning and packing operations, requiring no explicit transposition operations or additional workspace. This implementation, TBLIS, achieves performance approaching that of MM, and in some cases considerably higher than that of traditional TC. Our implementation supports multithreading using an approach identical to that used for MM in BLIS, with similar performance characteristics. The complexity of managing tensor-to-matrix transformations is also handled automatically in our approach, greatly simplifying its use in scientific applications. | Other research has focused on improving the efficiency of the TTDT approach through optimization of the tensor transposition step (and of other associated operations in quantum chemistry). Explicit searches of the space of tensor transpose algorithms, along with code generation techniques, have been used to generate high-performance tensor transpose kernels @cite_6 . Tensor transposition, along with the handling of index permutation symmetry in the TTDT approach, has been addressed specifically in the chemistry community @cite_26 @cite_8 @cite_31 . | {
"cite_N": [
"@cite_31",
"@cite_26",
"@cite_6",
"@cite_8"
],
"mid": [
"",
"1993079359",
"2295543477",
"2004182761"
],
"abstract": [
"",
"Complex tensor contraction expressions arise in accurate electronic structure models in quantum chemistry, such as the coupled cluster method. This paper addresses two complementary aspects of performance optimization of such tensor contraction expressions. Transformations using algebraic properties of commutativity and associativity can be used to significantly decrease the number of arithmetic operations required for evaluation of these expressions. The identification of common subexpressions among a set of tensor contraction expressions can result in a reduction of the total number of operations required to evaluate the tensor contractions. The first part of the paper describes an effective algorithm for operation minimization with common subexpression identification and demonstrates its effectiveness on tensor contraction expressions for coupled cluster equations. The second part of the paper highlights the importance of data layout transformation in the optimization of tensor contraction computations on modern processors. A number of considerations such as minimization of cache misses and utilization of multimedia vector instructions are discussed. A library for efficient index permutation of multi-dimensional tensors is described and experimental performance data is provided that demonstrates its effectiveness.",
"We present TTC, an open-source parallel compiler for multidimensional tensor transpositions. In order to generate high-performance C++ code, TTC explores a number of optimizations, including software prefetching, blocking, loop-reordering, and explicit vectorization. To evaluate the performance of multidimensional transpositions across a range of possible use-cases, we also release a benchmark covering arbitrary transpositions of up to six dimensions. Performance results show that the routines generated by TTC achieve close to peak memory bandwidth on both the Intel Haswell and the AMD Steamroller architectures, and yield significant performance gains over modern compilers. By implementing a set of pruning heuristics, TTC allows users to limit the number of potential solutions; this option is especially useful when dealing with high-dimensional tensors, as the search space might become prohibitively large. Experiments indicate that when only 100 potential solutions are considered, the resulting performance is about 99 of that achieved with exhaustive search.",
"In this paper, we present an efficient implementation of general tensor contractions, which is part of a new coupled-cluster program. The tensor contractions, used to evaluate the residuals in each coupled-cluster iteration are particularly important for the performance of the program. We developed a generic procedure, which carries out contractions of two tensors irrespective of their explicit structure. It can handle coupled-cluster-type expressions of arbitrary excitation level. To make the contraction efficient without loosing flexibility, we use a three-step procedure. First, the data contained in the tensors are rearranged into matrices, then a matrix-matrix multiplication is performed, and finally the result is backtransformed to a tensor. The current implementation is significantly more efficient than previous ones capable of treating arbitrary high excitations."
]
} |
1607.00036 | 2470713034 | We extend neural Turing machine (NTM) model into a dynamic neural Turing machine (D-NTM) by introducing a trainable memory addressing scheme. This addressing scheme maintains for each memory cell two separate vectors, content and address vectors. This allows the D-NTM to learn a wide variety of location-based addressing strategies including both linear and nonlinear ones. We implement the D-NTM with both continuous, differentiable and discrete, non-differentiable read write mechanisms. We investigate the mechanisms and effects of learning to read and write into a memory through experiments on Facebook bAbI tasks using both a feedforward and GRU controller. The D-NTM is evaluated on a set of Facebook bAbI tasks and shown to outperform NTM and LSTM baselines. We have done extensive analysis of our model and different variations of NTM on bAbI task. We also provide further experimental results on sequential pMNIST, Stanford Natural Language Inference, associative recall and copy tasks. | In addition to the finite random access memory of the neural Turing machine, on which the D-NTM is based, other data structures have been proposed as external memory for neural networks. In , a continuous, differentiable stack was proposed. In , grid and tape storage are used. These approaches differ from the NTM in that their memory is unbounded and can grow indefinitely. On the other hand, they are often not randomly accessible. @cite_0 proposed a variation of the NTM with a structured memory and reported experiments on copy and associative recall tasks with this model. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2211729040"
],
"abstract": [
"Neural Turing Machines (NTM) contain memory component that simulates \"working memory\" in the brain to store and retrieve information to ease simple algorithms learning. So far, only linearly organized memory is proposed, and during experiments, we observed that the model does not always converge, and overfits easily when handling certain tasks. We think memory component is key to some faulty behaviors of NTM, and better organization of memory component could help fight those problems. In this paper, we propose several different structures of memory for NTM, and we proved in experiments that two of our proposed structured-memory NTMs could lead to better convergence, in term of speed and prediction accuracy on copy task and associative recall task as in ( 2014)."
]
} |
1607.00036 | 2470713034 | We extend the neural Turing machine (NTM) model into a dynamic neural Turing machine (D-NTM) by introducing a trainable memory addressing scheme. This addressing scheme maintains for each memory cell two separate vectors, content and address vectors. This allows the D-NTM to learn a wide variety of location-based addressing strategies including both linear and nonlinear ones. We implement the D-NTM with both continuous, differentiable and discrete, non-differentiable read/write mechanisms. We investigate the mechanisms and effects of learning to read and write into a memory through experiments on the Facebook bAbI tasks using both a feedforward and a GRU controller. The D-NTM is evaluated on a set of Facebook bAbI tasks and shown to outperform NTM and LSTM baselines. We have done extensive analysis of our model and different variations of the NTM on the bAbI tasks. We also provide further experimental results on sequential pMNIST, Stanford Natural Language Inference, associative recall and copy tasks. | Memory networks form another family of neural networks with external memory. In this class of neural networks, information is stored explicitly as it is (in the form of its continuous representation) in the memory, without being erased or modified during an episode. Memory networks and their variants have been applied to various tasks successfully. @cite_4 also independently proposed the idea of having separate key and value vectors for memory networks. A similar addressing mechanism is also explored in the context of learning program traces. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2293453011"
],
"abstract": [
"Neural network architectures with memory and attention mechanisms exhibit certain reasoning capabilities required for question answering. One such architecture, the dynamic memory network (DMN), obtained high accuracy on a variety of language tasks. However, it was not shown whether the architecture achieves strong results for question answering when supporting facts are not marked during training or whether it could be applied to other modalities such as images. Based on an analysis of the DMN, we propose several improvements to its memory and input modules. Together with these changes we introduce a novel input module for images in order to be able to answer visual questions. Our new DMN+ model improves the state of the art on both the Visual Question Answering dataset and the bAbI-10k text question-answering dataset without supporting fact supervision."
]
} |
1607.00110 | 2473983412 | Gradient boosting of regression trees is a competitive procedure for learning predictive models of continuous data that fits the data with an additive non-parametric model. The classic version of gradient boosting assumes that the data is independent and identically distributed. However, relational data with interdependent, linked instances is now common and the dependencies in such data can be exploited to improve predictive performance. Collective inference is one approach to exploit relational correlation patterns and significantly reduce classification error. However, much of the work on collective learning and inference has focused on discrete prediction tasks rather than continuous ones; regression over continuous target values has not received the same attention in terms of collective inference. In this work, we investigate how to combine these two paradigms together to improve regression in relational domains. Specifically, we propose a boosting algorithm for learning a collective inference model that predicts a continuous target variable. In the algorithm, we learn a basic relational model, collectively infer the target values, and then iteratively learn relational models to predict the residuals. We evaluate our proposed algorithm on a real network dataset and show that it outperforms alternative boosting methods. However, our investigation also revealed that the relational features interact together to produce better predictions. | Boosting for statistical relational learning (SRL) has been studied in previous work such as @cite_11 and @cite_13 , which use gradient boosting to learn relational and non-parametric functions that approximate a joint probability distribution over multiple random variables in relational dependency networks (RDNs). These algorithms learn the structure and parameters of the RDN models simultaneously.
Since RDNs can be used for collective classification, the authors include other query predicates in the training data while learning the model for the current query and then apply the learned model collectively. The authors compare the boosted RDNs (RDN-B) against Markov logic networks @cite_6 and basic RDNs on two kinds of tasks: entity resolution and information extraction. The performance of RDN-B was significantly better compared to RDN and MLN for most datasets. However, the authors do not report a comparison between using RDN-B with and without collective inference. | {
"cite_N": [
"@cite_13",
"@cite_6",
"@cite_11"
],
"mid": [
"2521045222",
"",
"2150475393"
],
"abstract": [
"Relational Dependency Networks (RDNs) are graphical models that extend dependency networks to relational domains where the joint probability distribution over the variables is approximated as a product of conditional distributions. The current learning algorithms for RDNs use pseudolikelihood techniques to learn probability trees for each variable in order to represent the conditional distribution. We propose the use of gradient tree boosting as applied by (2004) to approximate the gradient for each variable. The use of several regression trees, instead of just one, results in an expressive model. Our results in 3 different data sets show that this training method results in efficient learning of RDNs when compared to state-of-the-art approaches to Statistical Relational Learning.",
"",
"Dependency networks approximate a joint probability distribution over multiple random variables as a product of conditional distributions. Relational Dependency Networks (RDNs) are graphical models that extend dependency networks to relational domains. This higher expressivity, however, comes at the expense of a more complex model-selection problem: an unbounded number of relational abstraction levels might need to be explored. Whereas current learning approaches for RDNs learn a single probability tree per random variable, we propose to turn the problem into a series of relational function-approximation problems using gradient-based boosting. In doing so, one can easily induce highly complex features over several iterations and in turn estimate quickly a very expressive model. Our experimental results in several different data sets show that this boosting method results in efficient learning of RDNs when compared to state-of-the-art statistical relational learning approaches."
]
} |
1606.09446 | 2949837904 | With the increasing use of online communication platforms, such as email, twitter, and messaging applications, we are faced with a growing amount of data that combine content (what is said), time (when), and user (by whom) information. An important computational challenge is to analyze these data, discover meaningful patterns, and understand what is happening. We consider the problem of mining online communication data and finding top-k temporal events. We define a temporal event to be a coherent topic that is discussed frequently, in a relatively short time span, while the information flow of the event respects the underlying network structure. We construct our model for detecting temporal events in two steps. We first introduce the notion of interaction meta-graph, which connects associated interactions. Using this notion, we define a temporal event to be a subset of interactions that (i) are topically and temporally close and (ii) correspond to a tree that captures the information flow. Finding the best temporal event leads to a budget version of the prize-collecting Steiner-tree (PCST) problem, which we solve using three different methods: a greedy approach, a dynamic-programming algorithm, and an adaptation to an existing approximation algorithm. The problem of finding the top-k events among a set of candidate events maps to the maximum set-cover problem, and is thus solved greedily. We compare and analyze our algorithms in both synthetic and real datasets, such as twitter and email communication. The results show that our methods are able to detect meaningful temporal events. | Phrase-based event detection. The problem of detecting events in social media has attracted significant attention. @cite_19 and @cite_20 treat events as short, distinctive phrases that propagate relatively intact through a network. Their work offers a graph formulation for clustering variants of phrases based on string edit distance.
Although their objective is similar to ours, there are significant differences. First, our methods focus on interaction networks, aiming to capture information flows in communication networks, rather than action networks. Second, we explicitly impose topic-coherence constraints, for which edit distance alone is insufficient. Third, instead of representing events by phrases, we derive a higher-level representation using topic terms. | {
"cite_N": [
"@cite_19",
"@cite_20"
],
"mid": [
"2127492100",
"2112056172"
],
"abstract": [
"Tracking new topics, ideas, and \"memes\" across the Web has been an issue of considerable interest. Recent work has developed methods for tracking topic shifts over long time scales, as well as abrupt spikes in the appearance of particular named entities. However, these approaches are less well suited to the identification of content that spreads widely and then fades over time scales on the order of days - the time scale at which we perceive news and events. We develop a framework for tracking short, distinctive phrases that travel relatively intact through on-line text; developing scalable algorithms for clustering textual variants of such phrases, we identify a broad class of memes that exhibit wide spread and rich variation on a daily basis. As our principal domain of study, we show how such a meme-tracking approach can provide a coherent representation of the news cycle - the daily rhythms in the news media that have long been the subject of qualitative interpretation but have never been captured accurately enough to permit actual quantitative analysis. We tracked 1.6 million mainstream media sites and blogs over a period of three months with the total of 90 million articles and we find a set of novel and persistent temporal patterns in the news cycle. In particular, we observe a typical lag of 2.5 hours between the peaks of attention to a phrase in the news media and in blogs respectively, with divergent behavior around the overall peak and a \"heartbeat\"-like pattern in the handoff between news and blogs. We also develop and analyze a mathematical model for the kinds of temporal variation that the system exhibits.",
"Online content exhibits rich temporal dynamics, and diverse realtime user generated content further intensifies this process. However, temporal patterns by which online content grows and fades over time, and by which different pieces of content compete for attention remain largely unexplored. We study temporal patterns associated with online content and how the content's popularity grows and fades over time. The attention that content receives on the Web varies depending on many factors and occurs on very different time scales and at different resolutions. In order to uncover the temporal dynamics of online content we formulate a time series clustering problem using a similarity metric that is invariant to scaling and shifting. We develop the K-Spectral Centroid (K-SC) clustering algorithm that effectively finds cluster centroids with our similarity measure. By applying an adaptive wavelet-based incremental approach to clustering, we scale K-SC to large data sets. We demonstrate our approach on two massive datasets: a set of 580 million Tweets, and a set of 170 million blog posts and news media articles. We find that K-SC outperforms the K-means clustering algorithm in finding distinct shapes of time series. Our analysis shows that there are six main temporal shapes of attention of online content. We also present a simple model that reliably predicts the shape of attention by using information about only a small number of participants. Our analyses offer insight into common temporal patterns of the content on theWeb and broaden the understanding of the dynamics of human attention."
]
} |
1606.09446 | 2949837904 | With the increasing use of online communication platforms, such as email, twitter, and messaging applications, we are faced with a growing amount of data that combine content (what is said), time (when), and user (by whom) information. An important computational challenge is to analyze these data, discover meaningful patterns, and understand what is happening. We consider the problem of mining online communication data and finding top-k temporal events. We define a temporal event to be a coherent topic that is discussed frequently, in a relatively short time span, while the information flow of the event respects the underlying network structure. We construct our model for detecting temporal events in two steps. We first introduce the notion of interaction meta-graph, which connects associated interactions. Using this notion, we define a temporal event to be a subset of interactions that (i) are topically and temporally close and (ii) correspond to a tree that captures the information flow. Finding the best temporal event leads to a budget version of the prize-collecting Steiner-tree (PCST) problem, which we solve using three different methods: a greedy approach, a dynamic-programming algorithm, and an adaptation to an existing approximation algorithm. The problem of finding the top-k events among a set of candidate events maps to the maximum set-cover problem, and is thus solved greedily. We compare and analyze our algorithms in both synthetic and real datasets, such as twitter and email communication. The results show that our methods are able to detect meaningful temporal events. | Text summarization. Text summarization techniques attempt to select a subset of sentences @cite_13 or tweets @cite_11 to summarize textual content. Similarly, we select a subset of interactions under a topic-coherence constraint. However, we also impose a temporal coherence constraint, whereas they take a static view. | {
"cite_N": [
"@cite_13",
"@cite_11"
],
"mid": [
"2120543014",
"2535726973"
],
"abstract": [
"Extractive methods for multi-document summarization are mainly governed by information overlap, coherence, and content constraints. We present an unsupervised probabilistic approach to model the hidden abstract concepts across documents as well as the correlation between these concepts, to generate topically coherent and non-redundant summaries. Based on human evaluations our models generate summaries with higher linguistic quality in terms of coherence, readability, and redundancy compared to benchmark systems. Although our system is unsupervised and optimized for topical coherence, we achieve a 44.1 ROUGE on the DUC-07 test set, roughly in the range of state-of-the-art supervised models.",
"Due to the sheer volume of text generated by a microblog site like Twitter, it is often difficult to fully understand what is being said about various topics. In an attempt to understand microblogs better, this paper compares algorithms for extractive summarization of microblog posts. We present two algorithms that produce summaries by selecting several posts from a given set. We evaluate the generated summaries by comparing them to both manually produced summaries and summaries produced by several leading traditional summarization systems. In order to shed light on the special nature of Twitter posts, we include extensive analysis of our results, some of which are unexpected."
]
} |
1606.09610 | 2949782513 | We investigated the feasibility of crowdsourcing full-fledged tutorial videos from ordinary people on the Web on how to solve math problems related to logarithms. This kind of approach (a form of learnersourcing) to efficiently collecting tutorial videos and other learning resources could be useful for realizing personalized learning-at-scale, whereby students receive specific learning resources -- drawn from a large and diverse set -- that are tailored to their individual and time-varying needs. Results of our study, in which we collected 399 videos from 66 unique "teachers" on Mechanical Turk, suggest that (1) approximately 100 videos -- over @math of which are mathematically fully correct -- can be crowdsourced per week for @math 0.105 @math 0.045 @math 0.1416 @math 0.1506$) from watching a popular Khan Academy video on logarithms. | One recently proposed and promising approach to collecting and curating large volumes of educational resources is to crowdsource data from learners themselves. This process, sometimes known as learnersourcing, has been used, for example, to identify which parts of lecture videos are confusing @cite_24 , and to describe the key instructional steps @cite_21 and subgoals @cite_19 of "how-to" videos. More recently, learnersourcing has been used not only to annotate existing educational content, but also to create novel content itself. In particular, @cite_18 explored a crowdsourcing-based strategy toward personalized learning in which learners were asked to author paragraphs of text explaining how to solve statistics problems. The explanations generated by learners were found to be comparable in both learning benefit and rated quality to explanations produced by expert instructors. | {
"cite_N": [
"@cite_24",
"@cite_19",
"@cite_18",
"@cite_21"
],
"mid": [
"2018206893",
"2077561308",
"2336395599",
"2035818959"
],
"abstract": [
"With thousands of learners watching the same online lecture videos, analyzing video watching patterns provides a unique opportunity to understand how students learn with videos. This paper reports a large-scale analysis of in-video dropout and peaks in viewership and student activity, using second-by-second user interaction data from 862 videos in four Massive Open Online Courses (MOOCs) on edX. We find higher dropout rates in longer videos, re-watching sessions (vs first-time), and tutorials (vs lectures). Peaks in re-watching sessions and play events indicate points of interest and confusion. Results show that tutorials (vs lectures) and re-watching sessions (vs first-time) lead to more frequent and sharper peaks. In attempting to reason why peaks occur by sampling 80 videos, we observe that 61 of the peaks accompany visual transitions in the video, e.g., a slide view to a classroom view. Based on this observation, we identify five student activity patterns that can explain peaks: starting from the beginning of a new material, returning to missed content, following a tutorial step, replaying a brief segment, and repeating a non-visual explanation. Our analysis has design implications for video authoring, editing, and interface design, providing a richer understanding of video learning on MOOCs.",
"Subgoal labeling [1] is a technique known to support learning new knowledge by clustering a group of steps into a higher-level conceptual unit. It has been shown to improve learning by helping learners to form the right mental model. While many learners view video tutorials nowadays, subgoal labels are often not available unless manually provided at production time. This work addresses the challenge of collecting and presenting subgoal labels to a large number of video tutorials. We introduce a mixed-initiative approach to collect subgoal labels in a scalable and efficient manner. The key component of this method is learnersourcing, which channels learners' activities using the video interface into useful input to the system. The presented method will contribute to the broader availability of subgoal labels in how-to videos.",
"While explanations may help people learn by providing information about why an answer is correct, many problems on online platforms lack high-quality explanations. This paper presents AXIS (Adaptive eXplanation Improvement System), a system for obtaining explanations. AXIS asks learners to generate, revise, and evaluate explanations as they solve a problem, and then uses machine learning to dynamically determine which explanation to present to a future learner, based on previous learners' collective input. Results from a case study deployment and a randomized experiment demonstrate that AXIS elicits and identifies explanations that learners find helpful. Providing explanations from AXIS also objectively enhanced learning, when compared to the default practice where learners solved problems and received answers without explanations. The rated quality and learning benefit of AXIS explanations did not differ from explanations generated by an experienced instructor.",
"Millions of learners today use how-to videos to master new skills in a variety of domains. But browsing such videos is often tedious and inefficient because video player interfaces are not optimized for the unique step-by-step structure of such videos. This research aims to improve the learning experience of existing how-to videos with step-by-step annotations. We first performed a formative study to verify that annotations are actually useful to learners. We created ToolScape, an interactive video player that displays step descriptions and intermediate result thumbnails in the video timeline. Learners in our study performed better and gained more self-efficacy using ToolScape versus a traditional video player. To add the needed step annotations to existing how-to videos at scale, we introduce a novel crowdsourcing workflow. It extracts step-by-step structure from an existing video, including step times, descriptions, and before and after images. We introduce the Find-Verify-Expand design pattern for temporal and visual annotation, which applies clustering, text processing, and visual analysis algorithms to merge crowd output. The workflow does not rely on domain-specific customization, works on top of existing videos, and recruits untrained crowd workers. We evaluated the workflow with Mechanical Turk, using 75 cooking, makeup, and Photoshop videos on YouTube. Results show that our workflow can extract steps with a quality comparable to that of trained annotators across all three domains with 77 precision and 81 recall."
]
} |
1606.09521 | 2462715873 | A central question for knowledge representation is how to encode and handle uncertain knowledge adequately. We introduce the probabilistic description logic ALCP that is designed for representing context-dependent knowledge, where the actual context taking place is uncertain. ALCP allows the expression of logical dependencies on the domain and probabilistic dependencies on the possible contexts. In order to draw probabilistic conclusions, we employ the principle of maximum entropy. We provide reasoning algorithms for this logic, and show that it satisfies several desirable properties of probabilistic logics. | Many probabilistic DLs have also been considered in the last decades @cite_15 @cite_6 @cite_20 . Our approach is closest to Bayesian DLs @cite_23 @cite_16 and DISPONTE @cite_14 . The greatest difference with the former lies in the fact that our KBs do not require a complete specification of the probability distribution, but only a set of probabilistic constraints. Moreover, the previous formalisms consider only the sceptical degree of belief, while we are interested in the full belief interval. In contrast to DISPONTE, our logic is capable of expressing both logical and probabilistic dependencies between the axioms in a KB; in addition, DISPONTE requires all uncertainty degrees to be assigned as mutually independent point probabilities, while ours allows for a more flexible specification. | {
"cite_N": [
"@cite_14",
"@cite_6",
"@cite_23",
"@cite_15",
"@cite_16",
"@cite_20"
],
"mid": [
"2405568589",
"1576921484",
"228152326",
"2132635174",
"1497210627",
""
],
"abstract": [
"We present DISPONTE, a semantics for probabilistic ontologies that is based on the distribution semantics for probabilistic logic programs. In DISPONTE the axioms of a probabilistic ontology can be annotated with an epistemic or a statistical probability. The epistemic probability represents a degree of confidence in the axiom, while the statistical probability considers the populations to which the axiom is applied.",
"We propose a new family of probabilistic description logics (DLs) that, in contrast to most existing approaches, are derived in a principled way from Halpern's probabilistic first-order logic. The resulting probabilistic DLs have a two-dimensional semantics similar to certain popular combinations of DLs with temporal logic and are well-suited for capturing subjective probabilities. Our main contribution is a detailed study of the complexity of reasoning in the new family of probabilistic DLs, showing that it ranges from PTIME for weak variants based on the lightweight DL EL to undecidable for some expressive variants based on the DL ALC.",
"We introduce the probabilistic Description Logic BEL. In BEL, axioms are required to hold only in an associated context. The probabilistic component of the logic is given by a Bayesian network that describes the joint probability distribution of the contexts. We study the main reasoning problems in this logic; in particular, we (i) prove that deciding positive and almost-sure entailments is not harder for BEL than for the BN, and (ii) show how to compute the probability, and the most likely context for a consequence.",
"Ontologies play a crucial role in the development of the Semantic Web as a means for defining shared terms in web resources. They are formulated in web ontology languages, which are based on expressive description logics. Significant research efforts in the semantic web community are recently directed towards representing and reasoning with uncertainty and vagueness in ontologies for the Semantic Web. In this paper, we give an overview of approaches in this context to managing probabilistic uncertainty, possibilistic uncertainty, and vagueness in expressive description logics for the Semantic Web.",
"The DL-Lite family of tractable description logics lies between the semantic web languages RDFS and OWL Lite. In this paper, we present a probabilistic generalization of the DL-Lite description logics, which is based on Bayesian networks. As an important feature, the new probabilistic description logics allow for flexibly combining terminological and assertional pieces of probabilistic knowledge. We show that the new probabilistic description logics are rich enough to properly extend both the DL-Lite description logics as well as Bayesian networks. We also show that satisfiability checking and query processing in the new probabilistic description logics is reducible to satisfiability checking and query processing in the DL-Lite family. Furthermore, we show that satisfiability checking and answering unions of conjunctive queries in the new logics can be done in LogSpace in the data complexity. For this reason, the new probabilistic description logics are very promising formalisms for data-intensive applications in the Semantic Web involving probabilistic uncertainty.",
""
]
} |
1606.09600 | 2474396346 | Machine Translation Quality Estimation is a notoriously difficult task, which lessens its usefulness in real-world translation environments. Such scenarios can be improved if quality predictions are accompanied by a measure of uncertainty. However, models in this task are traditionally evaluated only in terms of point estimate metrics, which do not take prediction uncertainty into account. We investigate probabilistic methods for Quality Estimation that can provide well-calibrated uncertainty estimates and evaluate them in terms of their full posterior predictive distributions. We also show how this posterior information can be useful in an asymmetric risk scenario, which aims to capture typical situations in translation workflows. | Quality Estimation is generally framed as a text regression task, similarly to many other applications such as movie revenue forecasting based on reviews @cite_14 @cite_10 and detection of emotion strength in news headlines @cite_7 @cite_28 and song lyrics @cite_9 . In general, these applications are evaluated in terms of their point estimate predictions, arguably because not all of them employ probabilistic models. | {
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_28",
"@cite_9",
"@cite_10"
],
"mid": [
"1783061888",
"2079521622",
"2133675239",
"",
"2252083640"
],
"abstract": [
"We consider the problem of predicting a movie's opening weekend revenue. Previous work on this problem has used metadata about a movie---e.g., its genre, MPAA rating, and cast---with very limited work making use of text about the movie. In this paper, we use the text of film critics' reviews from several sources to predict opening weekend revenue. We describe a new dataset pairing movie reviews with metadata and revenue data, and show that review text can substitute for metadata, and even improve over it, for prediction.",
"This paper describes experiments concerned with the automatic analysis of emotions in text. We describe the construction of a large data set annotated for six basic emotions: ANGER, DISGUST, FEAR, JOY, SADNESS and SURPRISE, and we propose and evaluate several knowledge-based and corpusbased methods for the automatic identification of these emotions in text.",
"We propose a model for jointly predicting multiple emotions in natural language sentences. Our model is based on a low-rank coregionalisation approach, which combines a vector-valued Gaussian Process with a rich parameterisation scheme. We show that our approach is able to learn correlations and anti-correlations between emotions on a news headlines dataset. The proposed model outperforms both singletask baselines and other multi-task approaches.",
"",
"Text regression has traditionally been tackled using linear models. Here we present a non-linear method based on a deep convolutional neural network. We show that despite having millions of parameters, this model can be trained on only a thousand documents, resulting in a 40% relative improvement over sparse linear models, the previous state of the art. Further, this method is flexible, allowing for easy incorporation of side information such as document meta-data. Finally we present a novel technique for interpreting the effect of different text inputs on this complex non-linear model."
]
} |
1606.09600 | 2474396346 | Machine Translation Quality Estimation is a notoriously difficult task, which lessens its usefulness in real-world translation environments. Such scenarios can be improved if quality predictions are accompanied by a measure of uncertainty. However, models in this task are traditionally evaluated only in terms of point estimate metrics, which do not take prediction uncertainty into account. We investigate probabilistic methods for Quality Estimation that can provide well-calibrated uncertainty estimates and evaluate them in terms of their full posterior predictive distributions. We also show how this posterior information can be useful in an asymmetric risk scenario, which aims to capture typical situations in translation workflows. | The NLPD is a common and established metric used in the GP literature to evaluate new approaches. Examples include the original work on Warped GPs @cite_25 , among others. It has also been used to evaluate recent work on uncertainty propagation methods for neural networks @cite_16 . | {
"cite_N": [
"@cite_16",
"@cite_25"
],
"mid": [
"2950177356",
"2134477639"
],
"abstract": [
"Large multilayer neural networks trained with backpropagation have recently achieved state-of-the-art results in a wide range of problems. However, using backprop for neural net learning still has some disadvantages, e.g., having to tune a large number of hyperparameters to the data, lack of calibrated probabilistic predictions, and a tendency to overfit the training data. In principle, the Bayesian approach to learning neural networks does not have these problems. However, existing Bayesian techniques lack scalability to large dataset and network sizes. In this work we present a novel scalable method for learning Bayesian neural networks, called probabilistic backpropagation (PBP). Similar to classical backpropagation, PBP works by computing a forward propagation of probabilities through the network and then doing a backward computation of gradients. A series of experiments on ten real-world datasets show that PBP is significantly faster than other techniques, while offering competitive predictive abilities. Our experiments also show that PBP provides accurate estimates of the posterior variance on the network weights.",
"We generalise the Gaussian process (GP) framework for regression by learning a nonlinear transformation of the GP outputs. This allows for non-Gaussian processes and non-Gaussian noise. The learning algorithm chooses a nonlinear transformation such that transformed data is well-modelled by a GP. This can be seen as including a preprocessing transformation as an integral part of the probabilistic modelling problem, rather than as an ad-hoc step. We demonstrate on several real regression problems that learning the transformation can lead to significantly better performance than using a regular GP, or a GP with a fixed transformation."
]
} |
1606.09600 | 2474396346 | Machine Translation Quality Estimation is a notoriously difficult task, which lessens its usefulness in real-world translation environments. Such scenarios can be improved if quality predictions are accompanied by a measure of uncertainty. However, models in this task are traditionally evaluated only in terms of point estimate metrics, which do not take prediction uncertainty into account. We investigate probabilistic methods for Quality Estimation that can provide well-calibrated uncertainty estimates and evaluate them in terms of their full posterior predictive distributions. We also show how this posterior information can be useful in an asymmetric risk scenario, which aims to capture typical situations in translation workflows. | Asymmetric loss functions are common in the econometrics literature and were studied by and , among others. Besides the AL and the linex, another well studied loss is the asymmetric quadratic, which in turn relates to the concept of expectiles @cite_21 . This loss generalises the commonly used squared error loss. In terms of applications, gives an example in real estate assessment, where the consequences of under- and over-assessment are usually different depending on the specific scenario. An engineering example is given by in the context of dam construction, where an underestimate of peak water level is much more serious than an overestimate. Such real-world applications guided many developments in this field: we believe that translation and other language processing scenarios which rely on NLP technologies can heavily benefit from these advancements. | {
"cite_N": [
"@cite_21"
],
"mid": [
"2042590947"
],
"abstract": [
"This paper considers estimation and testing using location measures for regression models that are based on an asymmetric least-squares criterion function. These estimators have properties that are analogous to regression quantiles, but are easier to calculate, as are the corresponding test statistics. Asymmetric least-squares tests of homoskedasticity and symmetry compare quite favorably with other tests of these hypotheses in terms of asymptotic relative efficiency. Consequently, asymmetric least-squares estimation provides a convenient and relatively efficient method of characterizing the conditional distribution of a dependent variable given some regressors. Copyright 1987 by The Econometric Society."
]
} |
1606.09604 | 2465617850 | We propose an approach for biomedical information extraction that marries the advantages of machine learning models, e.g., learning directly from data, with the benefits of rule-based approaches, e.g., interpretability. Our approach starts by training a feature-based statistical model, then converts this model to a rule-based variant by converting its features to rules, and "snapping to grid" the feature weights to discrete votes. In doing so, our proposal takes advantage of the large body of work in machine learning, but it produces an interpretable model, which can be directly edited by experts. We evaluate our approach on the BioNLP 2009 event extraction task. Our results show that there is a small performance penalty when converting the statistical model to rules, but the gain in interpretability compensates for that: with minimal effort, human experts improve this model to have similar performance to the statistical model that served as starting point. | Most of the biomedical IE systems in academia rely on supervised machine learning. This includes the top performing system at the BioNLP 2009 shared task @cite_14 , as well as several following approaches that improve upon its performance @cite_0 @cite_2 @cite_26 @cite_31 @cite_30 . | {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_26",
"@cite_0",
"@cite_2",
"@cite_31"
],
"mid": [
"2251644265",
"2075655036",
"2110408238",
"2108706252",
"2106921868",
"2097300420"
],
"abstract": [
"Several state-of-the-art event extraction systems employ models based on Support Vector Machines (SVMs) in a pipeline architecture, which fails to exploit the joint dependencies that typically exist among events and arguments. While there have been attempts to overcome this limitation using Markov Logic Networks (MLNs), it remains challenging to perform joint inference in MLNs when the model encodes many high-dimensional sophisticated features such as those essential for event extraction. In this paper, we propose a new model for event extraction that combines the power of MLNs and SVMs, dwarfing their limitations. The key idea is to reliably learn and process high-dimensional features using SVMs; encode the output of SVMs as low-dimensional, soft formulas in MLNs; and use the superior joint inferencing power of MLNs to enforce joint consistency constraints over the soft formulas. We evaluate our approach for the task of extracting biomedical events on the BioNLP 2013, 2011 and 2009 Genia shared task datasets. Our approach yields the best F1 score to date on the BioNLP’13 (53.61) and BioNLP’11 (58.07) datasets and the second-best F1 score to date on the BioNLP’09 dataset (58.16).",
"We describe a system for extracting complex events among genes and proteins from biomedical literature, developed in context of the BioNLP'09 Shared Task on Event Extraction. For each event, its text trigger, class, and arguments are extracted. In contrast to the prevailing approaches in the domain, events can be arguments of other events, resulting in a nested structure that better captures the underlying biological statements. We divide the task into independent steps which we approach as machine learning problems. We define a wide array of features and in particular make extensive use of dependency parse graphs. A rule-based post-processing step is used to refine the output in accordance with the restrictions of the extraction task. In the shared task evaluation, the system achieved an F-score of 51.95 on the primary task, the best performance among the participants.",
"Motivation: In recent years, several biomedical event extraction (EE) systems have been developed. However, the nature of the annotated training corpora, as well as the training process itself, can limit the performance levels of the trained EE systems. In particular, most event-annotated corpora do not deal adequately with coreference. This impacts on the trained systems' ability to recognize biomedical entities, thus affecting their performance in extracting events accurately. Additionally, the fact that most EE systems are trained on a single annotated corpus further restricts their coverage. Results: We have enhanced our existing EE system, EventMine, in two ways. First, we developed a new coreference resolution (CR) system and integrated it with EventMine. The standalone performance of our CR system in resolving anaphoric references to proteins is considerably higher than the best ranked system in the COREF subtask of the BioNLP'11 Shared Task. Secondly, the improved EventMine incorporates domain adaptation (DA) methods, which extend EE coverage by allowing several different annotated corpora to be used during training. Combined with a novel set of methods to increase the generality and efficiency of EventMine, the integration of both CR and DA have resulted in significant improvements in EE, ranging between 0.5 and 3.4 F-Score. The enhanced EventMine outperforms the highest ranked systems from the BioNLP'09 shared task, and from the GENIA and Infectious Diseases subtasks of the BioNLP'11 shared task. Availability: The improved version of EventMine, incorporating the CR system and DA methods, is available at: http://www.nactem.ac.uk/EventMine/. Contact: makoto.miwa@manchester.ac.uk",
"Biomedical Natural Language Processing (BioNLP) attempts to capture biomedical phenomena from texts by extracting relations between biomedical entities (i.e. proteins and genes). Traditionally, only binary relations have been extracted from large numbers of published papers. Recently, more complex relations (biomolecular events) have also been extracted. Such events may include several entities or other relations. To evaluate the performance of the text mining systems, several shared task challenges have been arranged for the BioNLP community. With a common and consistent task setting, the BioNLP'09 shared task evaluated complex biomolecular events such as binding and regulation.Finding these events automatically is important in order to improve biomedical event extraction systems. In the present paper, we propose an automatic event extraction system, which contains a model for complex events, by solving a classification problem with rich features. The main contributions of the present paper are: (1) the proposal of an effective bio-event detection method using machine learning, (2) provision of a high-performance event extraction system, and (3) the execution of a quantitative error analysis. The proposed complex (binding and regulation) event detector outperforms the best system from the BioNLP'09 shared task challenge.",
"Background We explore techniques for performing model combination between the UMass and Stanford biomedical event extraction systems. Both sub-components address event extraction as a structured prediction problem, and use dual decomposition (UMass) and parsing algorithms (Stanford) to find the best scoring event structure. Our primary focus is on stacking where the predictions from the Stanford system are used as features in the UMass system. For comparison, we look at simpler model combination techniques such as intersection and union which require only the outputs from each system and combine them directly.",
"Motivation: The abundance of biomedical literature has attracted significant interest in novel methods to automatically extract biomedical relations from the literature. Until recently, most research was focused on extracting binary relations such as protein–protein interactions and drug–disease relations. However, these binary relations cannot fully represent the original biomedical data. Therefore, there is a need for methods that can extract fine-grained and complex relations known as biomedical events. Results: In this article we propose a novel method to extract biomedical events from text. Our method consists of two phases. In the first phase, training data are mapped into structured representations. Based on that, templates are used to extract rules automatically. In the second phase, extraction methods are developed to process the obtained rules. When evaluated against the Genia event extraction abstract and full-text test datasets (Task 1), we obtain results with F-scores of 52.34 and 53.34, respectively, which are comparable to the state-of-the-art systems. Furthermore, our system achieves superior performance in terms of computational efficiency. Availability: Our source code is available for academic use at http://dl.dropbox.com/u/10256952/BioEvent.zip Contact: [email protected]"
]
} |
1606.09604 | 2465617850 | We propose an approach for biomedical information extraction that marries the advantages of machine learning models, e.g., learning directly from data, with the benefits of rule-based approaches, e.g., interpretability. Our approach starts by training a feature-based statistical model, then converts this model to a rule-based variant by converting its features to rules, and "snapping to grid" the feature weights to discrete votes. In doing so, our proposal takes advantage of the large body of work in machine learning, but it produces an interpretable model, which can be directly edited by experts. We evaluate our approach on the BioNLP 2009 event extraction task. Our results show that there is a small performance penalty when converting the statistical model to rules, but the gain in interpretability compensates for that: with minimal effort, human experts improve this model to have similar performance to the statistical model that served as starting point. | We bring together these two diverging directions by combining the advantages of ML with the interpretability of rule-based approaches. By representing the model as a collection of declarative rules, experts can directly edit the model, thus guaranteeing that the desired changes are actually applied. This is in contrast with methods such as active learning, in which the learning algorithm presents the "human in the loop" with new examples to annotate @cite_7 . Although active learning may require less domain expertise than our proposal, it generally does not guarantee that the examples provided are actually propagated in the model (the learning algorithm may choose to override them with other data). | {
"cite_N": [
"@cite_7"
],
"mid": [
"2114663556"
],
"abstract": [
"In natural language acquisition, it is dicult to gather the annotated data needed for supervised learning; however, unannotated data is fairly plentiful. Active learning methods attempt to select for annotation and training only the most informative examples, and therefore are potentially very useful in natural language applications. However, existing results for active learning have only considered standard classication tasks. To reduce annotation eort while maintaining accuracy, we apply active learning to two non-classication tasks in natural language processing: semantic parsing and information extraction. We show that active learning can signicantly reduce the number of annotated examples required to achieve a given level of performance for these complex tasks."
]
} |
1606.09349 | 2473825312 | Zero-shot learning (ZSL) extends the conventional image classification technique to a more challenging situation where the test image categories are not seen in the training samples. Most studies on ZSL utilize side information such as attributes or word vectors to bridge the relations between the seen classes and the unseen classes. However, existing approaches on ZSL typically exploit a shared space for each type of side information independently, which cannot make full use of the complementary knowledge of different types of side information. To this end, this paper presents an MBFA-ZSL approach to embed different types of side information as well as the visual feature into one shared space. Specifically, we first develop an algorithm named Multi-Battery Factor Analysis (MBFA) to build a unified semantic space, and then employ multiple types of side information in it to achieve the ZSL. The close-form solution makes MBFA-ZSL simple to implement and efficient to run on large datasets. Extensive experiments on the popular AwA, CUB, and SUN datasets show its significant superiority over the state-of-the-art approaches. | Zero-shot learning (ZSL) aims at solving the problem when the new test image categories are not seen in the training samples @cite_21 . Different from the open set recognition and novelty detection which only distinguish abnormalities in the testing data, ZSL seeks to classify the unseen testing classes @cite_28 . This is a practical problem setting in image classification, as there are thousands of categories of objects we intend to recognize, but only a few of them may have been appropriately annotated. Consequently, it is more challenging than the conventional image classification problems. The key ideas of ZSL are to choose better side information (also known as modalities) and to develop an effective common semantic space. 
The side information provides a bridge to transfer knowledge from the seen classes for which we have training data to the unseen classes for which we do not, and the common space offers a fusion feasibility for the visual features and the side information. | {
"cite_N": [
"@cite_28",
"@cite_21"
],
"mid": [
"2115627867",
"2150295085"
],
"abstract": [
"Novelty detection is the task of classifying test data that differ in some respect from the data that are available during training. This may be seen as "one-class classification", in which a model is constructed to describe "normal" training data. The novelty detection approach is typically used when the quantity of available "abnormal" data is insufficient to construct explicit models for non-normal classes. Applications include inference in datasets from critical systems, where the quantity of available normal data is very large, such that "normality" may be accurately modelled. In this review we aim to provide an updated and structured investigation of novelty detection research papers that have appeared in the machine learning literature during the last decade.",
"We consider the problem of zero-shot learning, where the goal is to learn a classifier f : X → Y that must predict novel values of Y that were omitted from the training set. To achieve this, we define the notion of a semantic output code classifier (SOC) which utilizes a knowledge base of semantic properties of Y to extrapolate to novel classes. We provide a formalism for this type of classifier and study its theoretical properties in a PAC framework, showing conditions under which the classifier can accurately predict novel classes. As a case study, we build a SOC classifier for a neural decoding task and show that it can often predict words that people are thinking about from functional magnetic resonance images (fMRI) of their neural activity, even without training examples for those words."
]
} |
1606.09349 | 2473825312 | Zero-shot learning (ZSL) extends the conventional image classification technique to a more challenging situation where the test image categories are not seen in the training samples. Most studies on ZSL utilize side information such as attributes or word vectors to bridge the relations between the seen classes and the unseen classes. However, existing approaches on ZSL typically exploit a shared space for each type of side information independently, which cannot make full use of the complementary knowledge of different types of side information. To this end, this paper presents an MBFA-ZSL approach to embed different types of side information as well as the visual feature into one shared space. Specifically, we first develop an algorithm named Multi-Battery Factor Analysis (MBFA) to build a unified semantic space, and then employ multiple types of side information in it to achieve the ZSL. The close-form solution makes MBFA-ZSL simple to implement and efficient to run on large datasets. Extensive experiments on the popular AwA, CUB, and SUN datasets show its significant superiority over the state-of-the-art approaches. | Two types of commonly used side information in ZSL are attributes @cite_22 @cite_7 @cite_12 @cite_29 and word vectors @cite_13 , @cite_8 . Particularly, attributes act as intermediate representations shared across multiple classes, indicating the presence or absence of several predefined properties. Direct attribute prediction (DAP) @cite_22 is one of the first efforts to exploit attributes for ZSL. It learns attribute-specific classifiers with the seen data and infers the unseen class with the learned estimators. However, attribute-based approaches suffer from poor scalability as the attribute ontology for each class is generally manually defined. 
Word-vector-based approaches @cite_25 @cite_10 @cite_30 @cite_3 avoid this limitation since word vectors are extracted from a linguistic corpus with neural language models such as GloVe @cite_13 and Word2Vec @cite_8 . Therefore, word vectors have become another popular side information in ZSL. For instance, Socher et al. @cite_10 construct a two-layer neural network to project images into the word vector space. In @cite_3 , Frome et al. present a deep visual-semantic embedding model with a hinge loss function, which trains a linear mapping to link the image visual space to the word vector space. | {
"cite_N": [
"@cite_30",
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_10",
"@cite_29",
"@cite_3",
"@cite_13",
"@cite_25",
"@cite_12"
],
"mid": [
"",
"2134270519",
"",
"2950133940",
"2950276680",
"652269744",
"2123024445",
"2250539671",
"2950700180",
""
],
"abstract": [
"",
"We study the problem of object classification when training and test classes are disjoint, i.e. no training examples of the target classes are available. This setup has hardly been studied in computer vision research, but it is the rule rather than the exception, because the world contains tens of thousands of different object classes and for only a very few of them image, collections have been formed and annotated with suitable class labels. In this paper, we tackle the problem by introducing attribute-based classification. It performs object detection based on a human-specified high-level description of the target objects instead of training images. The description consists of arbitrary semantic attributes, like shape, color or even geographic information. Because such properties transcend the specific learning task at hand, they can be pre-learned, e.g. from image datasets unrelated to the current task. Afterwards, new classes can be detected based on their attribute representation, without the need for a new training phase. In order to evaluate our method and to facilitate research in this area, we have assembled a new large-scale dataset, “Animals with Attributes”, of over 30,000 animal images that match the 50 classes in Osherson's classic table of how strongly humans associate 85 semantic attributes with animal classes. Our experiments show that by using an attribute layer it is indeed possible to build a learning object detection system that does not require any training images of the target classes.",
"",
"The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.",
"This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semantic basis for understanding what objects look like. Most previous zero-shot learning models can only differentiate between unseen classes. In contrast, our model can both obtain state of the art performance on classes that have thousands of training images and obtain reasonable performance on unseen classes. This is achieved by first using outlier detection in the semantic space and then two separate recognition models. Furthermore, our model does not require any manually defined semantic features for either words or images.",
"Zero-shot learning consists in learning how to recognise new concepts by just having a description of them. Many sophisticated approaches have been proposed to address the challenges this problem comprises. In this paper we describe a zero-shot learning approach that can be implemented in just one line of code, yet it is able to outperform state of the art approaches on standard datasets. The approach is based on a more general framework which models the relationships between features, attributes, and classes as a two linear layers network, where the weights of the top layer are not learned but are given by the environment. We further provide a learning bound on the generalisation error of this kind of approaches, by casting them as domain adaptation methods. In experiments carried out on three standard real datasets, we found that our approach is able to perform significantly better than the state of art on all of them, obtaining a ratio of improvement up to 17 .",
"Modern visual recognition systems are often limited in their ability to scale to large numbers of object categories. This limitation is in part due to the increasing difficulty of acquiring sufficient training data in the form of labeled images as the number of object categories grows. One remedy is to leverage data from other sources - such as text data - both to train visual models and to constrain their predictions. In this paper we present a new deep visual-semantic embedding model trained to identify visual objects using both labeled image data as well as semantic information gleaned from unannotated text. We demonstrate that this model matches state-of-the-art performance on the 1000-class ImageNet object recognition challenge while making more semantically reasonable errors, and also show that the semantic information can be exploited to make predictions about tens of thousands of image labels not observed during training. Semantic knowledge improves such zero-shot predictions achieving hit rates of up to 18% across thousands of novel labels never seen by the visual model.",
"Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.",
"Several recent publications have proposed methods for mapping images into continuous semantic embedding spaces. In some cases the embedding space is trained jointly with the image transformation. In other cases the semantic embedding space is established by an independent natural language processing task, and then the image transformation into that space is learned in a second stage. Proponents of these image embedding systems have stressed their advantages over the traditional classification framing of image understanding, particularly in terms of the promise for zero-shot learning -- the ability to correctly annotate images of previously unseen object categories. In this paper, we propose a simple method for constructing an image embedding system from any existing image classifier and a semantic word embedding model, which contains the @math class labels in its vocabulary. Our method maps images into the semantic embedding space via convex combination of the class label embedding vectors, and requires no additional training. We show that this simple and direct method confers many of the advantages associated with more complex image embedding schemes, and indeed outperforms state of the art methods on the ImageNet zero-shot learning task.",
""
]
} |
1606.09349 | 2473825312 | Zero-shot learning (ZSL) extends the conventional image classification technique to a more challenging situation where the test image categories are not seen in the training samples. Most studies on ZSL utilize side information such as attributes or word vectors to bridge the relations between the seen classes and the unseen classes. However, existing approaches on ZSL typically exploit a shared space for each type of side information independently, which cannot make full use of the complementary knowledge of different types of side information. To this end, this paper presents an MBFA-ZSL approach to embed different types of side information as well as the visual feature into one shared space. Specifically, we first develop an algorithm named Multi-Battery Factor Analysis (MBFA) to build a unified semantic space, and then employ multiple types of side information in it to achieve the ZSL. The close-form solution makes MBFA-ZSL simple to implement and efficient to run on large datasets. Extensive experiments on the popular AwA, CUB, and SUN datasets show its significant superiority over the state-of-the-art approaches. | It is worth highlighting several aspects of the proposed MBFA-ZSL approach. (1) It develops an advanced multi-view embedding algorithm named Multi-Battery Factor Analysis (MBFA), which extends Tucker's Inter-Battery Factor Analysis (IBFA) @cite_15 . (2) As far as we know, it represents one of the first attempts to embed both the image visual features and multiple types of side information into one unified semantic space, which fully utilizes the interrelations among different types of information. (3) The close-form solution makes it simple to implement and efficient to run on large datasets. (4) Extensive experiments on popular datasets demonstrate its significant superiority over the existing state-of-the-art approaches. The remainder of this paper is structured as follows. 
Section 2 introduces the proposed Multi-Battery Factor Analysis (MBFA) algorithm, and Section 3 describes the proposed MBFA-ZSL approach in detail. Experimental results are presented in Section 4, and conclusions are drawn in the final section. | {
"cite_N": [
"@cite_15"
],
"mid": [
"2087962626"
],
"abstract": [
"The inter-battery method of factor analysis was devised to provide information relevant to the stability of factors over different selections of tests. Two batteries of tests, postulated to depend on the same common factors, but not parallel tests, are given to one sample of individuals. Factors are determined from the correlation of the tests in one battery with the tests in the other battery. These factors are only those that are common to the two batteries. No communality estimates are required. A statistical test is provided for judging the minimum number of factors involved. Rotation of axes is carried out independently for the two batteries. A final step provides the correlation between factors determined by scores on the tests in the two batteries. The correlations between corresponding factors are taken as factor reliability coefficients."
]
} |
1606.09229 | 2549606326 | Dissipationless hydrodynamics regularized by dispersion describe a number of physical media including water waves, nonlinear optics, and Bose--Einstein condensates. As in the classical theory of hyperbolic equations where a nonconvex flux leads to nonclassical solution structures, a nonconvex linear dispersion relation provides an intriguing dispersive hydrodynamic analogue. Here, the fifth order Korteweg--de Vries (KdV) equation, also known as the Kawahara equation, a classical model for shallow water waves, is shown to be a universal model of Eulerian hydrodynamics with higher order dispersive effects. Utilizing asymptotic methods and numerical computations, this work classifies the long-time behavior of solutions for step-like initial data. For convex dispersion, the result is a dispersive shock wave (DSW), qualitatively and quantitatively bearing close resemblance to the KdV DSW. For nonconvex dispersion, three distinct dynamic regimes are observed. For small amplitude jumps, a perturbed KdV DSW with ... | Most DSW studies to date have focused upon dispersive hydrodynamic systems that exhibit either a purely convex or concave linear dispersion relation @cite_13 , with some recent exceptions @cite_27 @cite_12 @cite_21 @cite_7 @cite_19 @cite_20 @cite_26 @cite_2 . The monograph @cite_27 and paper @cite_20 present numerical simulation results for the Kawahara equation for both short-time @cite_20 and long-time @cite_27 dynamics. These simulation results resemble the types of shock waves we characterize in the long-time regime in this work. | {
"cite_N": [
"@cite_26",
"@cite_7",
"@cite_21",
"@cite_19",
"@cite_27",
"@cite_2",
"@cite_20",
"@cite_13",
"@cite_12"
],
"mid": [
"2228980711",
"",
"2145213441",
"2055609422",
"",
"2059251094",
"2963463632",
"2278222950",
"2072732887"
],
"abstract": [
"We consider the step Riemann problem for the system of equations describing the propagation of a coherent light beam in nematic liquid crystals, which is a general system describing nonlinear wave propagation in a number of different physical applications. While the equation governing the light beam is of defocusing nonlinear Schrodinger (NLS) equation type, the dispersive shock wave (DSW) generated from this initial condition has major differences from the standard DSW solution of the defocusing NLS equation. In particular, it is found that the DSW has positive polarity and generates resonant radiation which propagates ahead of it. Remarkably, the velocity of the lead soliton of the DSW is determined by the classical shock velocity. The solution for the radiative wavetrain is obtained using the Wentzel–Kramers–Brillouin approximation. It is shown that for sufficiently small initial jumps the nematic DSW is asymptotically governed by a Korteweg–de Vries equation with the fifth-order dispersion, which explicitly shows the resonance generating the radiation ahead of the DSW. The constructed asymptotic theory is shown to be in good agreement with the results of direct numerical simulations.",
"",
"We show that pulses undergoing wave breaking in nonlinear weakly dispersive fibers radiate, owing to phase-matching (assisted by higher-order dispersion) of linear dispersive waves with the shock-wave front. Our theoretical results perfectly explain the radiation observed recently from pulses propagating in the normal dispersion (i.e., nonsolitonic) regime.",
"Fundamental physical phenomena such as laser-induced ionization, driven quantum tunneling, Faraday waves, Bogoliubov quasiparticle excitations, and the control of new states of matter rely on time-periodic driving of the system. A remarkable property of such driving is that it can induce the localized (bound) states to resonantly couple to the continuum. Therefore experiments that allow for enlightening and controlling the mechanisms underlying such coupling are of paramount importance. We implement such an experiment in a special optical fiber characterized by a dispersion oscillating along the propagation coordinate, which mimics “time”. The quasi-momentum associated with such periodic perturbation is responsible for the efficient coupling of energy from the localized wave-packets (solitons in anomalous dispersion and shock fronts in normal dispersion) sustained by the fiber nonlinearity, into free-running linear dispersive waves (continuum) at multiple resonant frequencies. Remarkably, the observed resonances can be explained by means of a unified approach, regardless of the fact that the localized state is a soliton-like pulse or a shock front.",
"",
"We show that passive Kerr resonators pumped close to zero dispersion wavelengths on the normal dispersion side can develop the resonant generation of linear waves driven by cavity (mixed dispersive–dissipative) shock waves. The resonance mechanism can be successfully described in the framework of the generalized Lugiato–Lefever equation with higher-order dispersive terms. Substantial differences with radiation from cavity solitons and purely dispersive shock waves dispersion are highlighted.",
"This article is concerned with a conjecture in [B. Dubrovin, Comm. Math. Phys., 267 (2006), pp. 117–139] on the formation of dispersive shocks in a class of Hamiltonian dispersive regularizations of the quasi-linear transport equation. The regularizations are characterized by two arbitrary functions of one variable, where the condition of integrability implies that one of these functions must not vanish. It is shown numerically for a large class of equations that the local behavior of their solution near the point of gradient catastrophe for the transport equation is described by a special solution of a Painleve-type equation. This local description holds also for solutions to equations where blowup can occur in finite time. Furthermore, it is shown that a solution of the dispersive equations away from the point of gradient catastrophe is approximated by a solution of the transport equation with the same initial data, modulo terms of order @math , where @math is the small dispersion parameter...",
"There is growing physical and mathematical interest in the hydrodynamics of dissipationless dispersive media. Since G.B. Whitham’s seminal publication fifty years ago that ushered in the mathematical study of dispersive hydrodynamics, there has been a significant body of work in this area. However, there has been no comprehensive survey of the field of dispersive hydrodynamics. Utilizing Whitham’s averaging theory as the primary mathematical tool, we review the rich mathematical developments over the past fifty years with an emphasis on physical applications. The fundamental, large scale, coherent excitation in dispersive hydrodynamic systems is an expanding, oscillatory dispersive shock wave or DSW. Both the macroscopic and microscopic properties of DSWs are analyzed in detail within the context of the universal, integrable, and foundational models for uni-directional (Korteweg–de Vries equation) and bi-directional (Nonlinear Schrodinger equation) dispersive hydrodynamics. A DSW fitting procedure that does not rely upon integrable structure yet reveals important macroscopic DSW properties is described. DSW theory is then applied to a number of physical applications: superfluids, nonlinear optics, geophysics, and fluid dynamics. Finally, we survey some of the more recent developments including non-classical DSWs, DSW interactions, DSWs in perturbed and inhomogeneous environments, and two-dimensional, oblique DSWs.",
"We show that dispersive shock waves resulting from the nonlinearity overbalancing a weak leading-order dispersion can emit resonant radiation owing to higher-order dispersive contributions. We analyze such phenomenon for the defocusing nonlinear Schroedinger equation, giving criteria for calculating the radiated frequency based on the estimate of the shock velocity, revealing also a diversity of possible scenarios depending on the order and magnitude of the dispersive corrections."
]
} |
1606.09229 | 2549606326 | Dissipationless hydrodynamics regularized by dispersion describe a number of physical media including water waves, nonlinear optics, and Bose--Einstein condensates. As in the classical theory of hyperbolic equations where a nonconvex flux leads to nonclassical solution structures, a nonconvex linear dispersion relation provides an intriguing dispersive hydrodynamic analogue. Here, the fifth order Korteweg--de Vries (KdV) equation, also known as the Kawahara equation, a classical model for shallow water waves, is shown to be a universal model of Eulerian hydrodynamics with higher order dispersive effects. Utilizing asymptotic methods and numerical computations, this work classifies the long-time behavior of solutions for step-like initial data. For convex dispersion, the result is a dispersive shock wave (DSW), qualitatively and quantitatively bearing close resemblance to the KdV DSW. For nonconvex dispersion, three distinct dynamic regimes are observed. For small amplitude jumps, a perturbed KdV DSW with ... | In @cite_16 , a scalar dispersive hydrodynamic model is shown to exhibit KdV-like DSWs until a critical jump height is reached corresponding to zero curvature at the harmonic wave edge. Further increase of the jump height results in the internal, self-interaction of the DSW or, as it was termed, DSW implosion. Zero curvature results in a local extremum of the group velocity that causes the harmonic edge waves within the DSW to interact with interior DSW waves of smaller wavenumber. Qualitatively different dispersive hydrodynamics near zero dispersion were observed in Nonlinear Schrödinger (NLS) type models of intense light propagation through fibers @cite_21 @cite_12 @cite_2 @cite_7 @cite_19 and nematic liquid crystals @cite_26 . In both cases, numerical simulations reveal that an essentially linear wavetrain's phase speed is in resonance with the DSW's solitary wave edge phase speed, leading to radiation. 
As the strength of higher order dispersion is increased, the DSW structure changes. Empirical observations suggest that, at long times, the solitary wave edge moves at a constant speed equal to the shock speed given by the Rankine-Hugoniot conditions. This behavior is in stark contrast with KdV-like DSWs, whose speeds are determined through Whitham averaging @cite_13 . | {
"cite_N": [
"@cite_26",
"@cite_7",
"@cite_21",
"@cite_19",
"@cite_2",
"@cite_16",
"@cite_13",
"@cite_12"
],
"mid": [
"2228980711",
"",
"2145213441",
"2055609422",
"2059251094",
"2055457008",
"2278222950",
"2072732887"
],
"abstract": [
"We consider the step Riemann problem for the system of equations describing the propagation of a coherent light beam in nematic liquid crystals, which is a general system describing nonlinear wave propagation in a number of different physical applications. While the equation governing the light beam is of defocusing nonlinear Schrodinger (NLS) equation type, the dispersive shock wave (DSW) generated from this initial condition has major differences from the standard DSW solution of the defocusing NLS equation. In particular, it is found that the DSW has positive polarity and generates resonant radiation which propagates ahead of it. Remarkably, the velocity of the lead soliton of the DSW is determined by the classical shock velocity. The solution for the radiative wavetrain is obtained using the Wentzel–Kramers–Brillouin approximation. It is shown that for sufficiently small initial jumps the nematic DSW is asymptotically governed by a Korteweg–de Vries equation with the fifth-order dispersion, which explicitly shows the resonance generating the radiation ahead of the DSW. The constructed asymptotic theory is shown to be in good agreement with the results of direct numerical simulations.",
"",
"We show that pulses undergoing wave breaking in nonlinear weakly dispersive fibers radiate, owing to phase-matching (assisted by higher-order dispersion) of linear dispersive waves with the shock-wave front. Our theoretical results perfectly explain the radiation observed recently from pulses propagating in the normal dispersion (i.e., nonsolitonic) regime.",
"Fundamental physical phenomena such as laser-induced ionization, driven quantum tunneling, Faraday waves, Bogoliubov quasiparticle excitations, and the control of new states of matter rely on time-periodic driving of the system. A remarkable property of such driving is that it can induce the localized (bound) states to resonantly couple to the continuum. Therefore experiments that allow for enlightening and controlling the mechanisms underlying such coupling are of paramount importance. We implement such an experiment in a special optical fiber characterized by a dispersion oscillating along the propagation coordinate, which mimics “time”. The quasi-momentum associated with such periodic perturbation is responsible for the efficient coupling of energy from the localized wave-packets (solitons in anomalous dispersion and shock fronts in normal dispersion) sustained by the fiber nonlinearity, into free-running linear dispersive waves (continuum) at multiple resonant frequencies. Remarkably, the observed resonances can be explained by means of a unified approach, regardless of the fact that the localized state is a soliton-like pulse or a shock front.",
"We show that passive Kerr resonators pumped close to zero dispersion wavelengths on the normal dispersion side can develop the resonant generation of linear waves driven by cavity (mixed dispersive–dissipative) shock waves. The resonance mechanism can be successfully described in the framework of the generalized Lugiato–Lefever equation with higher-order dispersive terms. Substantial differences with radiation from cavity solitons and purely dispersive shock waves dispersion are highlighted.",
"The viscously dominated, low Reynolds' number dynamics of multi-phase, compacting media can lead to nonlinear, dissipationless dispersive behavior when viewed appropriately. In these systems, nonlinear self-steepening competes with wave dispersion, giving rise to dispersive shock waves (DSWs). Example systems considered here include magma migration through the mantle as well as the buoyant ascent of a low density fluid through a viscously deformable conduit. These flows are modeled by a third-order, degenerate, dispersive, nonlinear wave equation for the porosity (magma volume fraction) or cross-sectional area, respectively. Whitham averaging theory for step initial conditions is used to compute analytical, closed form predictions for the DSW speeds and the leading edge amplitude in terms of the constitutive parameters and initial jump height. Novel physical behaviors are identified including backflow and DSW implosion for initial jumps sufficient to cause gradient catastrophe in the Whitham modulation equations. Theoretical predictions are shown to be in excellent agreement with long-time numerical simulations for the case of small to moderate amplitude DSWs. Verifiable criteria identifying the breakdown of this modulation theory in the large jump regime, applicable to a wide class of DSW problems, are presented.",
"There is growing physical and mathematical interest in the hydrodynamics of dissipationless dispersive media. Since G.B. Whitham’s seminal publication fifty years ago that ushered in the mathematical study of dispersive hydrodynamics, there has been a significant body of work in this area. However, there has been no comprehensive survey of the field of dispersive hydrodynamics. Utilizing Whitham’s averaging theory as the primary mathematical tool, we review the rich mathematical developments over the past fifty years with an emphasis on physical applications. The fundamental, large scale, coherent excitation in dispersive hydrodynamic systems is an expanding, oscillatory dispersive shock wave or DSW. Both the macroscopic and microscopic properties of DSWs are analyzed in detail within the context of the universal, integrable, and foundational models for uni-directional (Korteweg–de Vries equation) and bi-directional (Nonlinear Schrodinger equation) dispersive hydrodynamics. A DSW fitting procedure that does not rely upon integrable structure yet reveals important macroscopic DSW properties is described. DSW theory is then applied to a number of physical applications: superfluids, nonlinear optics, geophysics, and fluid dynamics. Finally, we survey some of the more recent developments including non-classical DSWs, DSW interactions, DSWs in perturbed and inhomogeneous environments, and two-dimensional, oblique DSWs.",
"We show that dispersive shock waves resulting from the nonlinearity overbalancing a weak leading-order dispersion can emit resonant radiation owing to higher-order dispersive contributions. We analyze such phenomenon for the defocusing nonlinear Schroedinger equation, giving criteria for calculating the radiated frequency based on the estimate of the shock velocity, revealing also a diversity of possible scenarios depending on the order and magnitude of the dispersive corrections."
]
} |
1606.09229 | 2549606326 | Dissipationless hydrodynamics regularized by dispersion describe a number of physical media including water waves, nonlinear optics, and Bose--Einstein condensates. As in the classical theory of hyperbolic equations where a nonconvex flux leads to nonclassical solution structures, a nonconvex linear dispersion relation provides an intriguing dispersive hydrodynamic analogue. Here, the fifth order Korteweg--de Vries (KdV) equation, also known as the Kawahara equation, a classical model for shallow water waves, is shown to be a universal model of Eulerian hydrodynamics with higher order dispersive effects. Utilizing asymptotic methods and numerical computations, this work classifies the long-time behavior of solutions for step-like initial data. For convex dispersion, the result is a dispersive shock wave (DSW), qualitatively and quantitatively bearing close resemblance to the KdV DSW. For nonconvex dispersion, three distinct dynamic regimes are observed. For small amplitude jumps, a perturbed KdV DSW with ... | Because DSWs can be considered spatially extended generalizations of solitary waves, it is helpful to briefly review the properties of solitary wave solutions of the Kawahara equation, first computed by Kawahara @cite_11 . Distinct structures emerge depending on the choice of the parameter @math . For @math , the solitary waves are monotonically decaying from the peak. For @math , there are non-monotonically decaying, depression solitary waves for velocities less than @math that are stable @cite_24 . These oscillatory solitary waves bifurcate from the linear dispersion curve when the phase and group velocities coincide @cite_33 . The equality of phase and group velocities occurs only for non-convex dispersion @math . For @math and positive velocities, elevation solitary waves exist but are unstable due to a linear resonance @cite_4 @cite_9 . 
It is the Kawahara equation's non-convex dispersion that leads to solitary waves embedded in the linear spectrum @cite_32 . As we will demonstrate, non-convex dispersion yields similarly impactful effects on DSW dynamics. | {
"cite_N": [
"@cite_4",
"@cite_33",
"@cite_9",
"@cite_32",
"@cite_24",
"@cite_11"
],
"mid": [
"2032741615",
"",
"2069098156",
"2147057598",
"2129300940",
"2030484509"
],
"abstract": [
"Abstract We consider a fifth-order KdV equation, where the fifth-order derivative term is multiplied by a small parameter. It has been conjectured that this equation admits a non-local solitary wave solution which has a central core and an oscillatory tail either behind or in front of the core. We prove that this solution cannot be exactly steady, and instead the amplitude of the central core decays due to the energy flux generated in the oscillatory tail. The decay rate is calculated in the limit as the parameter tends to zero. In order to verify the analytical results, we have developed a high-precision spectral method for numerical integration of this equation. The analytical and numerical result show good agreement.",
"",
"Abstract We investigate the stability of a solitary wave solution of the Korteweg-de Vries equation when a fifth order spatial derivative term is added. We show that the solution ceases to be strictly localized but develops an infinite oscillating tail and we compute the amplitude of the latter.",
"Evolution of perturbed embedded solitons in the general Hamiltonian fifth-order Korteweg–de Vries (KdV) equation is studied. When an embedded soliton is perturbed, it sheds a one-directional continuous-wave radiation. It is shown that the radiation amplitude is not minimal in general. A dynamical equation for velocity of the perturbed embedded soliton is derived. This equation shows that a neutrally stable embedded soliton is in fact semi-stable. When the perturbation increases the momentum of the embedded soliton, the perturbed state approaches asymptotically the embedded soliton, while when the perturbation reduces the momentum of the embedded soliton, the perturbed state decays into radiation. Classes of initial conditions to induce soliton decay or persistence are also determined. Our analytical results are confirmed by direct numerical simulations of the fifth-order KdV equation. © 2002 Elsevier Science B.V. All rights reserved.",
"In dispersive wave systems with dispersion relations such that the phase speed attains an extremum at a finite wavenumber, a rich variety of solitary waves that feature decaying oscillatory tails i...",
"Steady solutions of a generalized Korteweg-de Vries equation which has an additional fifth order derivative term are investigated on the basis of a numerical calculation. It is found that either compressive or rarefactive solitary waves are possible to exist according as the dispersion is negative or positive and that the solitary waves take the oscillatory structure when the coefficient of the fifth order derivative term dominates over that of the third order one."
]
} |
1606.09376 | 2470236538 | Motivated by a growing market that involves buying and selling data over the web, we study pricing schemes that assign value to queries issued over a database. Previous work studied pricing mechanisms that compute the price of a query by extending a data seller's explicit prices on certain queries, or investigated the properties that a pricing function should exhibit without detailing a generic construction. In this work, we present a formal framework for pricing queries over data that allows the construction of general families of pricing functions, with the main goal of avoiding arbitrage. We consider two types of pricing schemes: instance-independent schemes, where the price depends only on the structure of the query, and answer-dependent schemes, where the price also depends on the query output. Our main result is a complete characterization of the structure of pricing functions in both settings, by relating it to properties of a function over a lattice. We use our characterization, together with information-theoretic methods, to construct a variety of arbitrage-free pricing functions. Finally, we discuss various tradeoffs in the design space and present techniques for efficient computation of the proposed pricing functions. | The problem of data pricing has been studied from a wide range of perspectives, including online markets and privacy @cite_14 @cite_7 . @cite_19 examined a variety of issues involved in pricing of information products and presented an economic approach to design of optimal pricing mechanism for online services. @cite_22 introduced the challenge of developing pricing functions in the context of cloud-based environments, where users can pay for queries without buying the entire dataset. This work also outlines various research challenges, such as enabling fine-grained pricing and developing efficient and fair pricing models for cloud-based markets. | {
"cite_N": [
"@cite_19",
"@cite_14",
"@cite_22",
"@cite_7"
],
"mid": [
"2025094765",
"2951011752",
"2294768397",
"2096870293"
],
"abstract": [
"Online information servers that provide access to diverse databases where users can search for, browse through, and download the information they need have been rapidly increasing in number in the past few years. Online vendors have traditionally charged users for information based on the length of time they were connected to the databases. With hardware and software advances, many online servers have recently started changing their pricing strategies to search-based and/or subscription-fee pricing. This paper examines the various issues involved in pricing these information products, and presents an economic approach to analyze conditions under which the various pricing schemes may prove optimal for the online servers. Our results show that the variation in consumer expertise and valuation of information affects the choice of a pricing strategy by the server. We present general conditions under which subscription-fee pricing is optimal even when consumer demand is inelastic. We also find that, given the cost structures characterizing the market, undifferentiated online servers can compete and coexist in the market each making positive profits. We show that in a competitive setting an increase in costs of online servers can sometimes benefit them by enabling them to differentiate themselves. Our results offer insights into the trends in pricing strategies and may provide an explanation as to why many servers may persist with connect-time strategies.",
"We present an approach to differentially private computation in which one does not scale up the magnitude of noise for challenging queries, but rather scales down the contributions of challenging records. While scaling down all records uniformly is equivalent to scaling up the noise magnitude, we show that scaling records non-uniformly can result in substantially higher accuracy by bypassing the worst-case requirements of differential privacy for the noise magnitudes. This paper details the data analysis platform wPINQ, which generalizes the Privacy Integrated Query (PINQ) to weighted datasets. Using a few simple operators (including a non-uniformly scaling Join operator) wPINQ can reproduce (and improve) several recent results on graph analysis and introduce new generalizations (e.g., counting triangles with given degrees). We also show how to integrate probabilistic inference techniques to synthesize datasets respecting more complicated (and less easily interpreted) measurements.",
"Cloud-computing is transforming many aspects of data management. Most recently, the cloud is seeing the emergence of digital markets for data and associated services. We observe that our community has a lot to oer in building successful cloud-based data markets. We outline some of the key challenges that such markets face and discuss the associated research problems that our community can help solve.",
"We report on the design and implementation of the Privacy Integrated Queries (PINQ) platform for privacy-preserving data analysis. PINQ provides analysts with a programming interface to unscrubbed data through a SQL-like language. At the same time, the design of PINQ's analysis language and its careful implementation provide formal guarantees of differential privacy for any and all uses of the platform. PINQ's unconditional structural guarantees require no trust placed in the expertise or diligence of the analysts, substantially broadening the scope for design and deployment of privacy-preserving data analysis, especially by non-experts."
]
} |
1606.09376 | 2470236538 | Motivated by a growing market that involves buying and selling data over the web, we study pricing schemes that assign value to queries issued over a database. Previous work studied pricing mechanisms that compute the price of a query by extending a data seller's explicit prices on certain queries, or investigated the properties that a pricing function should exhibit without detailing a generic construction. In this work, we present a formal framework for pricing queries over data that allows the construction of general families of pricing functions, with the main goal of avoiding arbitrage. We consider two types of pricing schemes: instance-independent schemes, where the price depends only on the structure of the query, and answer-dependent schemes, where the price also depends on the query output. Our main result is a complete characterization of the structure of pricing functions in both settings, by relating it to properties of a function over a lattice. We use our characterization, together with information-theoretic methods, to construct a variety of arbitrage-free pricing functions. Finally, we discuss various tradeoffs in the design space and present techniques for efficient computation of the proposed pricing functions. | The first formal framework for query-based data pricing was introduced by @cite_15 . The authors define the notion of arbitrage, and provide a framework that takes a set of fixed prices for views over the data identified by seller, and extends these price points to a pricing function over any query. The authors also show that evaluation of the prices can be done efficiently in polynomial time for specific classes of conjunctive queries and a restricted set of views that include only selections. Subsequently, the authors demonstrated how the framework can be implemented into a prototype pricing system called QueryMarket @cite_1 @cite_26 . 
Further work @cite_4 discusses pricing for the class of aggregate queries and the complexity of computing such prices. The work by Lin and Kifer @cite_16 proposes several possible forms of arbitrage violations and integrates them into a single framework. The authors allow the queries to be randomized, and propose two potential pricing functions that are arbitrage-free across all forms. | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_1",
"@cite_15",
"@cite_16"
],
"mid": [
"",
"2399897640",
"2065504152",
"2168402580",
"2272515049"
],
"abstract": [
"",
"In a data marketplace, producers of data provide query answers to consumers in exchange for payment. A market for data allows capital to flow from data consumers to data producers. This helps to finance the costs of data collection and dissemination, thereby increasing the availability of data throughout society. While nascent data markets exist on the Web, they are currently limited in the pricing mechanisms and interactions they support. In this paper we propose new criteria for interactive pricing in a data market: price functions should be non-disclosive, arbitrage-free, and regret-free. We study the structure of price functions meeting these criteria, and distinguish between NP-complete and PTIME cases for computing a conforming price function.",
"Increasingly data is being bought and sold online. To facilitate such transactions, online data marketplaces have emerged to provide a service for sellers to price views on their data, and buyers to buy such views. These marketplaces neither support the sale of ad-hoc queries (that are not one of the specified views), nor do they support queries that join datasets. We present QueryMarket, a prototype data marketplace that automatically extrapolates prices to ad-hoc queries, including those with joins, from the manually priced views. We call this capability \"query-based pricing\" and describe how it is superior to existing pricing methods, and how it provides more flexible pricing for the sellers. We then show how QueryMarket implements query-based pricing and how it generates explanations for the prices it computes.",
"Data is increasingly being bought and sold online, and Web-based marketplace services have emerged to facilitate these activities. However, current mechanisms for pricing data are very simple: buyers can choose only from a set of explicit views, each with a specific price. In this paper, we propose a framework for pricing data on the Internet that, given the price of a few views, allows the price of any query to be derived automatically. We call this capability \"query-based pricing.\" We first identify two important properties that the pricing function must satisfy, called arbitrage-free and discount-free. Then, we prove that there exists a unique function that satisfies these properties and extends the seller's explicit prices to all queries. When both the views and the query are Unions of Conjunctive Queries, the complexity of computing the price is high. To ensure tractability, we restrict the explicit prices to be defined only on selection views (which is the common practice today). We give an algorithm with polynomial time data complexity for computing the price of any chain query by reducing the problem to network flow. Furthermore, we completely characterize the class of Conjunctive Queries without self-joins that have PTIME data complexity (this class is slightly larger than chain queries), and prove that pricing all other queries is NP-complete, thus establishing a dichotomy on the complexity of the pricing problem when all views are selection queries.",
"Data is a commodity. Recent research has considered the mathematical problem of setting prices for different queries over data. Ideal pricing functions need to be flexible --defined for arbitrary queries (select-project-join, aggregate, random sample, and noisy privacy-preserving queries). They should be fine-grained -- a consumer should not be required to buy the entire database to get answers to simple \"low-information\" queries (such as selecting only a few tuples or aggregating over only one attribute). Similarly, a consumer may not want to pay a large amount of money, only to discover that the database is empty. Finally, pricing functions should satisfy consistency conditions such as being \"arbitrage-free\" -- consumers should not be able to circumvent the pricing function by deducing the answer to an expensive query from a few cheap queries. Previously proposed pricing functions satisfy some of these criteria (i.e. they are defined for restricted subclasses of queries and or use relaxed conditions for avoiding arbitrage). In this paper, we study arbitrage-free pricing functions defined for arbitrary queries. We propose new necessary conditions for avoiding arbitrage and provide new arbitrage-free pricing functions. We also prove several negative results related to the tension between flexible pricing and avoiding arbitrage, and show how this tension often results in unreasonable prices."
]
} |
1606.09376 | 2470236538 | Motivated by a growing market that involves buying and selling data over the web, we study pricing schemes that assign value to queries issued over a database. Previous work studied pricing mechanisms that compute the price of a query by extending a data seller's explicit prices on certain queries, or investigated the properties that a pricing function should exhibit without detailing a generic construction. In this work, we present a formal framework for pricing queries over data that allows the construction of general families of pricing functions, with the main goal of avoiding arbitrage. We consider two types of pricing schemes: instance-independent schemes, where the price depends only on the structure of the query, and answer-dependent schemes, where the price also depends on the query output. Our main result is a complete characterization of the structure of pricing functions in both settings, by relating it to properties of a function over a lattice. We use our characterization, together with information-theoretic methods, to construct a variety of arbitrage-free pricing functions. Finally, we discuss various tradeoffs in the design space and present techniques for efficient computation of the proposed pricing functions. | Data pricing is tightly connected to differential privacy @cite_3 . Ghosh and Roth @cite_23 study the buying and selling of data by considering privacy as an entity. Their framework compensates the seller for the loss of privacy due to selling of private data. A similar approach to pricing in the context of privacy is discussed in @cite_8 . | {
"cite_N": [
"@cite_23",
"@cite_3",
"@cite_8"
],
"mid": [
"1965093697",
"2077217970",
"1989291641"
],
"abstract": [
"We study markets for private data using differential privacy. We consider a setting in which a data analyst wishes to buy information from a population from which he can estimate some statistic. The analyst wishes to obtain an accurate estimate cheaply, while the owners of the private data experience some cost for their loss of privacy. Agents are rational and we wish to design truthful mechanisms.",
"In the information realm, loss of privacy is usually associated with failure to control access to information, to control the flow of information, or to control the purposes for which information is employed. Differential privacy arose in a context in which ensuring privacy is a challenge even if all these control problems are solved: privacy-preserving statistical analysis of data. The problem of statistical disclosure control – revealing accurate statistics about a set of respondents while preserving the privacy of individuals – has a venerable history, with an extensive literature spanning statistics, theoretical computer science, security, databases, and cryptography (see, for example, the excellent survey [1], the discussion of related work in [2] and the Journal of Official Statistics 9 (2), dedicated to confidentiality and disclosure control). This long history is a testament the importance of the problem. Statistical databases can be of enormous social value; they are used for apportioning resources, evaluating medical therapies, understanding the spread of disease, improving economic utility, and informing us about ourselves as a species. The data may be obtained in diverse ways. Some data, such as census, tax, and other sorts of official data, are compelled; others are collected opportunistically, for example, from traffic on the internet, transactions on Amazon, and search engine query logs; other data are provided altruistically, by respondents who hope that sharing their information will help others to avoid a specific misfortune, or more generally, to increase the public good. Altruistic data donors are typically promised their individual data will be kept confidential – in short, they are promised “privacy.” Similarly, medical data and legally compelled data, such as census data, tax return data, have legal privacy mandates. 
In our view, ethics demand that opportunistically obtained data should be treated no differently, especially when there is no reasonable alternative to engaging in the actions that generate the data in question. The problems remain: even if data encryption, key management, access control, and the motives of the data curator",
"Personal data has value to both its owner and to institutions who would like to analyze it. Privacy mechanisms protect the owner's data while releasing to analysts noisy versions of aggregate query results. But such strict protections of the individual's data have not yet found wide use in practice. Instead, Internet companies, for example, commonly provide free services in return for valuable sensitive information from users, which they exploit and sometimes sell to third parties. As awareness of the value of personal data increases, so has the drive to compensate the end-user for her private information. The idea of monetizing private data can improve over the narrower view of hiding private data, since it empowers individuals to control their data through financial means. In this article we propose a theoretical framework for assigning prices to noisy query answers as a function of their accuracy, and for dividing the price amongst data owners who deserve compensation for their loss of privacy. Our framework adopts and extends key principles from both differential privacy and query pricing in data markets. We identify essential properties of the pricing function and micropayments, and characterize valid solutions."
]
} |
1606.09002 | 2464918637 | Recently, scene text detection has become an active research topic in computer vision and document analysis, because of its great importance and significant challenge. However, vast majority of the existing methods detect text within local regions, typically through extracting character, word or line level candidates followed by candidate aggregation and false positive elimination, which potentially exclude the effect of wide-scope and long-range contextual cues in the scene. To take full advantage of the rich information available in the whole natural image, we propose to localize text in a holistic manner, by casting scene text detection as a semantic segmentation problem. The proposed algorithm directly runs on full images and produces global, pixel-wise prediction maps, in which detections are subsequently formed. To better make use of the properties of text, three types of information regarding text region, individual characters and their relationship are estimated, with a single Fully Convolutional Network (FCN) model. With such predictions of text properties, the proposed algorithm can simultaneously handle horizontal, multi-oriented and curved text in real-world natural images. The experiments on standard benchmarks, including ICDAR 2013, ICDAR 2015 and MSRA-TD500, demonstrate that the proposed algorithm substantially outperforms previous state-of-the-art approaches. Moreover, we report the first baseline result on the recently-released, large-scale dataset COCO-Text. | Stroke Width Transform (SWT) @cite_49 , Maximally Stable Extremal Regions (MSER) @cite_20 as well as their variants @cite_50 @cite_72 @cite_14 @cite_18 have been the mainstream in scene text detection. These methods generally hunt character candidates via edge detection or extreme region extraction. Different from such component-based approaches, Neumann and Matas @cite_55 proposed to seek character strokes in a multi-scale sliding-window manner. 
Zhang et al. @cite_42 presented a text detector that makes use of the local symmetry property of character groups. The work of Jaderberg et al. @cite_33 adopted object proposal and regression techniques to spot words in natural images, drawing inspiration from R-CNN @cite_52 . However, a common issue with these methods is that they all only use cues from local regions (up to the text line level) in text detection. In contrast, the proposed algorithm makes decisions in a much wider scope (up to the whole image), and thus is able to take advantage of both short-range and long-range information in the images, which can be very useful in suppressing false alarms in complex scenes. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_33",
"@cite_55",
"@cite_42",
"@cite_52",
"@cite_72",
"@cite_50",
"@cite_49",
"@cite_20"
],
"mid": [
"",
"",
"2952250911",
"2056435187",
"1935817682",
"2102605133",
"",
"1972065312",
"2142159465",
"1488125194"
],
"abstract": [
"",
"",
"In this work we present an end-to-end system for text spotting -- localising and recognising text in natural scene images -- and text based image retrieval. This system is based on a region proposal mechanism for detection and deep convolutional neural networks for recognition. Our pipeline uses a novel combination of complementary proposal generation techniques to ensure high recall, and a fast subsequent filtering stage for improving precision. For the recognition and ranking of proposals, we train very large convolutional neural networks to perform word recognition on the whole proposal region at the same time, departing from the character classifier based systems of the past. These networks are trained solely on data produced by a synthetic text generation engine, requiring no human labelled data. Analysing the stages of our pipeline, we show state-of-the-art performance throughout. We perform rigorous experiments across a number of standard end-to-end text spotting benchmarks and text-based image retrieval datasets, showing a large improvement over all previous methods. Finally, we demonstrate a real-world application of our text spotting system to allow thousands of hours of news footage to be instantly searchable via a text query.",
"An unconstrained end-to-end text localization and recognition method is presented. The method introduces a novel approach for character detection and recognition which combines the advantages of sliding-window and connected component methods. Characters are detected and recognized as image regions which contain strokes of specific orientations in a specific relative position, where the strokes are efficiently detected by convolving the image gradient field with a set of oriented bar filters. Additionally, a novel character representation efficiently calculated from the values obtained in the stroke detection phase is introduced. The representation is robust to shift at the stroke level, which makes it less sensitive to intra-class variations and the noise induced by normalizing character size and positioning. The effectiveness of the representation is demonstrated by the results achieved in the classification of real-world characters using an euclidian nearest-neighbor classifier trained on synthetic data in a plain form. The method was evaluated on a standard dataset, where it achieves state-of-the-art results in both text localization and recognition.",
"Recently, a variety of real-world applications have triggered huge demand for techniques that can extract textual information from natural scenes. Therefore, scene text detection and recognition have become active research topics in computer vision. In this work, we investigate the problem of scene text detection from an alternative perspective and propose a novel algorithm for it. Different from traditional methods, which mainly make use of the properties of single characters or strokes, the proposed algorithm exploits the symmetry property of character groups and allows for direct extraction of text lines from natural images. The experiments on the latest ICDAR benchmarks demonstrate that the proposed algorithm achieves state-of-the-art performance. Moreover, compared to conventional approaches, the proposed algorithm shows stronger adaptability to texts in challenging scenarios.",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.",
"",
"With the increasing popularity of practical vision systems and smart phones, text detection in natural scenes becomes a critical yet challenging task. Most existing methods have focused on detecting horizontal or near-horizontal texts. In this paper, we propose a system which detects texts of arbitrary orientations in natural images. Our algorithm is equipped with a two-level classification scheme and two sets of features specially designed for capturing both the intrinsic characteristics of texts. To better evaluate our algorithm and compare it with other competing algorithms, we generate a new dataset, which includes various texts in diverse real-world scenarios; we also propose a protocol for performance evaluation. Experiments on benchmark datasets and the proposed dataset demonstrate that our algorithm compares favorably with the state-of-the-art algorithms when handling horizontal texts and achieves significantly enhanced performance on texts of arbitrary orientations in complex natural scenes.",
"We present a novel image operator that seeks to find the value of stroke width for each image pixel, and demonstrate its use on the task of text detection in natural images. The suggested operator is local and data dependent, which makes it fast and robust enough to eliminate the need for multi-scale computation or scanning windows. Extensive testing shows that the suggested scheme outperforms the latest published algorithms. Its simplicity allows the algorithm to detect texts in many fonts and languages.",
"A general method for text localization and recognition in real-world images is presented. The proposed method is novel, as it (i) departs from a strict feed-forward pipeline and replaces it by a hypothesesverification framework simultaneously processing multiple text line hypotheses, (ii) uses synthetic fonts to train the algorithm eliminating the need for time-consuming acquisition and labeling of real-world training data and (iii) exploits Maximally Stable Extremal Regions (MSERs) which provides robustness to geometric and illumination conditions. The performance of the method is evaluated on two standard datasets. On the Char74k dataset, a recognition rate of 72 is achieved, 18 higher than the state-of-the-art. The paper is first to report both text detection and recognition results on the standard and rather challenging ICDAR 2003 dataset. The text localization works for number of alphabets and the method is easily adapted to recognition of other scripts, e.g. cyrillics."
]
} |
1606.07965 | 2950567764 | This paper addresses the problem of summarizing decisions in spoken meetings: our goal is to produce a concise decision abstract for each meeting decision. We explore and compare token-level and dialogue act-level automatic summarization methods using both unsupervised and supervised learning frameworks. In the supervised summarization setting, and given true clusterings of decision-related utterances, we find that token-level summaries that employ discourse context can approach an upper bound for decision abstracts derived directly from dialogue acts. In the unsupervised summarization setting, we find that summaries based on unsupervised partitioning of decision-related utterances perform comparably to those based on partitions generated using supervised techniques (0.22 ROUGE-F1 using LDA-based topic models vs. 0.23 using SVMs). | There exists much previous research on automatic text summarization using corpus-based, knowledge-based or statistical methods @cite_11 @cite_9 . Dialogue summarization methods, however, generally try to account for the special characteristics of speech. Among early work in this subarea, speech summarization based on maximal marginal relevance (MMR) and cross-speaker linking of information was investigated. Popular supervised methods for summarizing speech --- including maximum entropy, conditional random fields (CRFs), and support vector machines (SVMs) --- have also been studied, as have techniques for determining semantic similarity for selecting relevant utterances. Studies show that decisions are considered to be one of the most important outputs of meetings, and in recent years there has been much research on detecting decision-related DAs: maximum entropy classification techniques have been proposed to identify DRDAs in meetings, a model of decision-making dialogue structure has been developed and used to detect decision DAs, and a real-time decision detection system has been implemented. | {
"cite_N": [
"@cite_9",
"@cite_11"
],
"mid": [
"1597655096",
"1594485982"
],
"abstract": [
"From the Publisher: Until now, most discourse researchers have assumed that full semantic understanding is necessary to derive the discourse structure of texts. This book documents the first serious attempt to construct automatically and use nonsemantic computational structures for text summarization. Daniel Marcu develops a semantics-free theoretical framework that is both general enough to be applicable to naturally occurring texts and concise enough to facilitate an algorithmic approach to discourse analysis. He presents and evaluates two discourse parsing methods: one uses manually written rules that reflect common patterns of usage of cue phrases such as \"however\" and \"in addition to\"; the other uses rules that are learned automatically from a corpus of discourse structures. By means of a psycholinguistic experiment, Marcu demonstrates how a discourse-based summarizer identifies the most important parts of texts at levels of performance that are close to those of humans. Marcu also discusses how the automatic derivation of discourse structures may be used to improve the performance of current natural language generation, machine translation, summarization, question answering, and information retrieval systems.",
"From the Publisher: Text summarization is the process of distilling the most important information from a source to produce an abridged version for a particular user or task.. \"Until now there has been no state-of-the-art collection of the most important writings in automatic text summarization. This book presents the key developments in the field in an integrated framework and suggests future research areas. The book is organized into six sections. Classical Approaches, Corpus-Based Approaches, Exploiting Discourse Structure, Knowledge-Rich Approaches, Evaluation Methods, and New Summarization Problem Areas."
]
} |
1606.08406 | 2463691904 | We tackle the blog recommendation problem in Tumblr for mobile users in this paper. Blog recommendation is challenging since most mobile users would suffer from the cold start when there are only a limited number of blogs followed by the user. Specifically to address this problem in the mobile domain, we take into account mobile apps, which typically provide rich information from the users. Based on the assumption that the user interests can be reflected from their app usage patterns, we propose to exploit the app usage data for improving blog recommendation. Building on the state-of-the-art recommendation framework, Factorization Machines (FM), we implement app-based FM that integrates app usage data with the user-blog follow relations. In this approach the blog recommendation is generated not only based on the blogs that the user followed before, but also the apps that the user has often used. We demonstrate in a series of experiments that app-based FM can outperform other alternative approaches to a significant extent. Our experimental results also show that exploiting app usage information is particularly effective for improving blog recommendation quality for cold start users. | Blog recommendation in Tumblr falls into the intersection between recommender systems and social networks, a particular area known as social recommendation @cite_3 @cite_4 @cite_8 . Great efforts have been devoted to social recommendation in the research community, in which conventional collaborative filtering approaches, such as matrix factorization and random walks, are extended to incorporate the information from social networks for further improving recommendation performance @cite_13 @cite_18 . As an industry practice, our work is close to the work reported from other social network companies, such as LinkedIn feed recommendation @cite_1 @cite_22 and follower recommendation at Twitter @cite_11 . 
The differences are that 1) we target a new problem domain, i.e., blog recommendation in Tumblr; and 2) we focus on leveraging external auxiliary information to address the cold start problem. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_22",
"@cite_8",
"@cite_1",
"@cite_3",
"@cite_13",
"@cite_11"
],
"mid": [
"2144487656",
"2100755716",
"1978196964",
"2008886893",
"1981379119",
"1965526274",
"1976320242",
"19684845"
],
"abstract": [
"Although Recommender Systems have been comprehensively analyzed in the past decade, the study of social-based recommender systems just started. In this paper, aiming at providing a general method for improving recommender systems by incorporating social network information, we propose a matrix factorization framework with social regularization. The contributions of this paper are four-fold: (1) We elaborate how social network information can benefit recommender systems; (2) We interpret the differences between social-based recommender systems and trust-aware recommender systems; (3) We coin the term Social Regularization to represent the social constraints on recommender systems, and we systematically illustrate how to design a matrix factorization objective function with social regularization; and (4) The proposed method is quite general, which can be easily extended to incorporate other contextual information, like social tags, etc. The empirical analysis on two large datasets demonstrates that our approaches outperform other state-of-the-art methods.",
"Recommendation plays an increasingly important role in our daily lives. Recommender systems automatically suggest to a user items that might be of interest to her. Recent studies demonstrate that information from social networks can be exploited to improve accuracy of recommendations. In this paper, we present a survey of collaborative filtering (CF) based social recommender systems. We provide a brief overview over the task of recommender systems and traditional approaches that do not use social network information. We then present how social network information can be adopted by recommender systems as additional input for improved accuracy. We classify CF-based social recommender systems into two categories: matrix factorization based social recommendation approaches and neighborhood based social recommendation approaches. For each category, we survey and compare several representative algorithms.",
"LinkedIn dynamically delivers update activities from a user's interpersonal network to more than 300 million members in the personalized feed that ranks activities according their \"relevance\" to the user. This paper discloses the implementation details behind this personalized feed system at LinkedIn which can not be found from related work, and addresses the scalability and data sparsity challenges for deploying the system online. More specifically, we focus on the personalization models by generating three kinds of affinity scores: Viewer-ActivityType Affinity, Viewer-Actor Affinity, and Viewer-Actor-ActivityType Affinity. Extensive experiments based on online bucket tests (A B experiments) and offline evaluation illustrate the effect of our personalization models in LinkedIn feed.",
"Over the past two decades, a large amount of research effort has been devoted to developing algorithms that generate recommendations. The resulting research progress has established the importance of the user-item (U-I) matrix, which encodes the individual preferences of users for items in a collection, for recommender systems. The U-I matrix provides the basis for collaborative filtering (CF) techniques, the dominant framework for recommender systems. Currently, new recommendation scenarios are emerging that offer promising new information that goes beyond the U-I matrix. This information can be divided into two categories related to its source: rich side information concerning users and items, and interaction information associated with the interplay of users and items. In this survey, we summarize and analyze recommendation scenarios involving information sources and the CF algorithms that have been recently developed to address them. We provide a comprehensive introduction to a large body of research, more than 200 key references, with the aim of supporting the further development of recommender systems exploiting information beyond the U-I matrix. On the basis of this material, we identify and discuss what we see as the central challenges lying ahead for recommender system technology, both in terms of extensions of existing techniques as well as of the integration of techniques and technologies drawn from other research areas.",
"Users on an online social network site generate a large number of heterogeneous activities, ranging from connecting with other users, to sharing content, to updating their profiles. The set of activities within a user's network neighborhood forms a stream of updates for the user's consumption. In this paper, we report our experience with the problem of ranking activities in the LinkedIn homepage feed. In particular, we provide a taxonomy of social network activities, describe a system architecture (with a number of key components open-sourced) that supports fast iteration in model development, demonstrate a number of key factors for effective ranking, and report experimental results from extensive online bucket tests.",
"In recent years, with the proliferation of the social web, users are increasingly exposed to social overload and the designers of social web sites are challenged to attract and retain their user basis. Social recommender systems are becoming an integral part of virtually any leading website, playing a key factor in its success: First, they aim to address the overload problem by helping users to find relevant content. Second, they can provide recommendations for content creation, increasing participation and user retention. In this tutorial, we will review the broad domain of social recommender systems, their application for the social web, the underlying techniques and methodologies; the data in use, recommended entities, and target population; evaluation techniques; and open issues and challenges.",
"Social network systems, like last.fm, play a significant role in Web 2.0, containing large amounts of multimedia-enriched data that are enhanced both by explicit user-provided annotations and implicit aggregated feedback describing the personal preferences of each user. It is also a common tendency for these systems to encourage the creation of virtual networks among their users by allowing them to establish bonds of friendship and thus provide a novel and direct medium for the exchange of data. We investigate the role of these additional relationships in developing a track recommendation system. Taking into account both the social annotation and friendships inherent in the social graph established among users, items and tags, we created a collaborative recommendation system that effectively adapts to the personal information needs of each user. We adopt the generic framework of Random Walk with Restarts in order to provide with a more natural and efficient way to represent social networks. In this work we collected a representative enough portion of the music social network last.fm, capturing explicitly expressed bonds of friendship of the user as well as social tags. We performed a series of comparison experiments between the Random Walk with Restarts model and a user-based collaborative filtering method using the Pearson Correlation similarity. The results show that the graph model system benefits from the additional information embedded in social knowledge. In addition, the graph model outperforms the standard collaborative filtering method.",
"WTF (\"Who to Follow\") is Twitter's user recommendation service, which is responsible for creating millions of connections daily between users based on shared interests, common connections, and other related factors. This paper provides an architectural overview and shares lessons we learned in building and running the service over the past few years. Particularly noteworthy was our design decision to process the entire Twitter graph in memory on a single server, which significantly reduced architectural complexity and allowed us to develop and deploy the service in only a few months. At the core of our architecture is Cassovary, an open-source in-memory graph processing engine we built from scratch for WTF. Besides powering Twitter's user recommendations, Cassovary is also used for search, discovery, promoted products, and other services as well. We describe and evaluate a few graph recommendation algorithms implemented in Cassovary, including a novel approach based on a combination of random walks and SALSA. Looking into the future, we revisit the design of our architecture and comment on its limitations, which are presently being addressed in a second-generation system under development."
]
} |
1606.08406 | 2463691904 | We tackle the blog recommendation problem in Tumblr for mobile users in this paper. Blog recommendation is challenging since most mobile users would suffer from the cold start when there are only a limited number of blogs followed by the user. Specifically to address this problem in the mobile domain, we take into account mobile apps, which typically provide rich information from the users. Based on the assumption that the user interests can be reflected from their app usage patterns, we propose to exploit the app usage data for improving blog recommendation. Building on the state-of-the-art recommendation framework, Factorization Machines (FM), we implement app-based FM that integrates app usage data with the user-blog follow relations. In this approach the blog recommendation is generated not only based on the blogs that the user followed before, but also the apps that the user has often used. We demonstrate in a series of experiments that app-based FM can outperform other alternative approaches to a significant extent. Our experimental results also show that exploiting app usage information is particularly effective for improving blog recommendation quality for cold start users. | The recommendation framework we build on in this work is Factorization Machines (FM) @cite_12 , which is a particular class of latent factor models. Latent factor models have shown their effectiveness for improving recommendation performance through several public competitions in the past years, such as the Netflix Prize competition @cite_5 and the KDD CUP @cite_20 . The core concept of latent factor models is to learn low-rank representations of users and items that are indicative of users' preferences for items. FM was proposed as a generalized latent factor model, one of its key advantages being the ability to incorporate additional information sources in a uniform fashion @cite_12 @cite_2 . 
Since its inception, FM has been widely recognized and adopted in a number of use cases, such as context-aware recommendation @cite_21 and cross-domain recommendation @cite_7 . Our blog recommendation system is built on FM, which allows us to exploit auxiliary information sources in addition to the user-blog interactions. However, we implemented FM in a distributed fashion on our internal infrastructure so that it can scale to very large datasets. Moreover, our focus is the specific application of blog recommendation, rather than improvements to the FM modeling framework. | {
"cite_N": [
"@cite_7",
"@cite_21",
"@cite_2",
"@cite_5",
"@cite_12",
"@cite_20"
],
"mid": [
"",
"2002834872",
"2247380138",
"2054141820",
"2094286023",
"21207210"
],
"abstract": [
"",
"The situation in which a choice is made is an important information for recommender systems. Context-aware recommenders take this information into account to make predictions. So far, the best performing method for context-aware rating prediction in terms of predictive accuracy is Multiverse Recommendation based on the Tucker tensor factorization model. However this method has two drawbacks: (1) its model complexity is exponential in the number of context variables and polynomial in the size of the factorization and (2) it only works for categorical context variables. On the other hand there is a large variety of fast but specialized recommender methods which lack the generality of context-aware methods. We propose to apply Factorization Machines (FMs) to model contextual information and to provide context-aware rating predictions. This approach results in fast context-aware recommendations because the model equation of FMs can be computed in linear time both in the number of context variables and the factorization size. For learning FMs, we develop an iterative optimization method that analytically finds the least-square solution for one parameter given the other ones. Finally, we show empirically that our approach outperforms Multiverse Recommendation in prediction quality and runtime.",
"The most common approach in predictive modeling is to describe cases with feature vectors (aka design matrix). Many machine learning methods such as linear regression or support vector machines rely on this representation. However, when the underlying data has strong relational patterns, especially relations with high cardinality, the design matrix can get very large which can make learning and prediction slow or even infeasible. This work solves this issue by making use of repeating patterns in the design matrix which stem from the underlying relational structure of the data. It is shown how coordinate descent learning and Bayesian Markov Chain Monte Carlo inference can be scaled for linear regression and factorization machine models. Empirically, it is shown on two large scale and very competitive datasets (Netflix prize, KDDCup 2012), that (1) standard learning algorithms based on the design matrix representation cannot scale to relational predictor variables, (2) the proposed new algorithms scale and (3) the predictive quality of the proposed generic feature-based approach is as good as the best specialized models that have been tailored to the respective tasks.",
"As the Netflix Prize competition has demonstrated, matrix factorization models are superior to classic nearest neighbor techniques for producing product recommendations, allowing the incorporation of additional information such as implicit feedback, temporal effects, and confidence levels.",
"Factorization approaches provide high accuracy in several important prediction problems, for example, recommender systems. However, applying factorization approaches to a new prediction problem is a nontrivial task and requires a lot of expert knowledge. Typically, a new model is developed, a learning algorithm is derived, and the approach has to be implemented. Factorization machines (FM) are a generic approach since they can mimic most factorization models just by feature engineering. This way, factorization machines combine the generality of feature engineering with the superiority of factorization models in estimating interactions between categorical variables of large domain. libFM is a software implementation for factorization machines that features stochastic gradient descent (SGD) and alternating least-squares (ALS) optimization, as well as Bayesian inference using Markov Chain Monte Carlo (MCMC). This article summarizes the recent research on factorization machines both in terms of modeling and learning, provides extensions for the ALS and MCMC algorithms, and describes the software tool libFM .",
"In this paper we introduce SVDFeature, a machine learning toolkit for feature-based collaborative filtering. SVDFeature is designed to efficiently solve the feature-based matrix factorization. The feature-based setting allows us to build factorization models incorporating side information such as temporal dynamics, neighborhood relationship, and hierarchical information. The toolkit is capable of both rate prediction and collaborative ranking, and is carefully designed for efficient training on large-scale data set. Using this toolkit, we built solutions to win KDD Cup for two consecutive years."
]
} |
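The degree-2 factorization machine described in the record above scores a feature vector as w0 + <w, x> + Σ_{i<j} <v_i, v_j> x_i x_j. A minimal NumPy sketch of this model equation, using the linear-time reformulation of the pairwise term from the libFM line of work (all parameter values below are illustrative):

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Degree-2 factorization machine:
    y(x) = w0 + <w, x> + sum_{i<j} <V[i], V[j]> * x[i] * x[j].
    The pairwise term is computed in O(k*n) via
    0.5 * sum_f [ (sum_i V[i,f]*x[i])^2 - sum_i V[i,f]^2 * x[i]^2 ].
    """
    s = V.T @ x                    # per-factor weighted sums, shape (k,)
    s2 = (V ** 2).T @ (x ** 2)     # per-factor weighted sums of squares, shape (k,)
    return w0 + w @ x + 0.5 * float(np.sum(s * s - s2))
```

Auxiliary signals such as app-usage indicators would be incorporated simply by appending them as extra dimensions of x, which is the uniform-feature-vector property the survey text attributes to FM.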
1606.08406 | 2463691904 | We tackle the blog recommendation problem in Tumblr for mobile users in this paper. Blog recommendation is challenging since most mobile users would suffer from the cold start when there are only a limited number of blogs followed by the user. Specifically to address this problem in the mobile domain, we take into account mobile apps, which typically provide rich information from the users. Based on the assumption that the user interests can be reflected from their app usage patterns, we propose to exploit the app usage data for improving blog recommendation. Building on the state-of-the-art recommendation framework, Factorization Machines (FM), we implement app-based FM that integrates app usage data with the user-blog follow relations. In this approach the blog recommendation is generated not only based on the blogs that the user followed before, but also the apps that the user has often used. We demonstrate in a series of experiments that app-based FM can outperform other alternative approaches to a significant extent. Our experimental results also show that exploiting app usage information is particularly effective for improving blog recommendation quality for cold start users. | Our work is also related to the research topic of cross-domain recommendation, which generally refers to the recommendation scenarios that involve multiple application domains @cite_8 . Prior work has shown that sharing knowledge between different product domains can bring mutual benefits. For example, the users may have similar interests in movies and books @cite_15 . It is also shown that auxiliary information sources can be exploited as an intermediary for linking different domains, e.g., tags may be common between the book domain and the movie domain @cite_0 , and users may have similar rating patterns across different product domains @cite_15 @cite_7 . 
We consider that external information sources are critical for addressing the cold start problem for blog recommendation. In particular, our work is specialized in the mobile domain, in which we exploit the rich user feedback from their interactions with mobile apps. Our assumption is that the apps that a user often uses can indicate her underlying interests, which can be leveraged for blog recommendation in Tumblr. | {
"cite_N": [
"@cite_0",
"@cite_15",
"@cite_7",
"@cite_8"
],
"mid": [
"1968130046",
"143867266",
"",
"2008886893"
],
"abstract": [
"Recommender systems generally face the challenge of making predictions using only the relatively few user ratings available for a given domain. Cross-domain collaborative filtering (CF) aims to alleviate the effects of this data sparseness by transferring knowledge from other domains. We propose a novel algorithm, Tag-induced Cross-Domain Collaborative Filtering (TagCDCF), which exploits user-contributed tags that are common to multiple domains in order to establish the cross-domain links necessary for successful cross-domain CF. TagCDCF extends the state-of-the-art matrix factorization by introducing a constraint involving tag-based similarities between pairs of users and pairs of items across domains. The method requires no common users or items across domains. Using two publicly available CF data sets as different domains, we experimentally demonstrate that TagCDCF substantially outperforms other state-of-the-art single domain CF and cross-domain CF approaches. Additional experiments show that TagCDCF addresses data sparseness and illustrate the influence of the number of tags used by users in both domains.",
"The sparsity problem in collaborative filtering (CF) is a major bottleneck for most CF methods. In this paper, we consider a novel approach for alleviating the sparsity problem in CF by transferring user-item rating patterns from a dense auxiliary rating matrix in other domains (e.g., a popular movie rating website) to a sparse rating matrix in a target domain (e.g., a new book rating website). We do not require that the users and items in the two domains be identical or even overlap. Based on the limited ratings in the target matrix, we establish a bridge between the two rating matrices at a cluster-level of user-item rating patterns in order to transfer more useful knowledge from the auxiliary task domain. We first compress the ratings in the auxiliary rating matrix into an informative and yet compact cluster-level rating pattern representation referred to as a codebook. Then, we propose an efficient algorithm for reconstructing the target rating matrix by expanding the codebook. We perform extensive empirical tests to show that our method is effective in addressing the data sparsity problem by transferring the useful knowledge from the auxiliary tasks, as compared to many state-of-the-art CF methods.",
"",
"Over the past two decades, a large amount of research effort has been devoted to developing algorithms that generate recommendations. The resulting research progress has established the importance of the user-item (U-I) matrix, which encodes the individual preferences of users for items in a collection, for recommender systems. The U-I matrix provides the basis for collaborative filtering (CF) techniques, the dominant framework for recommender systems. Currently, new recommendation scenarios are emerging that offer promising new information that goes beyond the U-I matrix. This information can be divided into two categories related to its source: rich side information concerning users and items, and interaction information associated with the interplay of users and items. In this survey, we summarize and analyze recommendation scenarios involving information sources and the CF algorithms that have been recently developed to address them. We provide a comprehensive introduction to a large body of research, more than 200 key references, with the aim of supporting the further development of recommender systems exploiting information beyond the U-I matrix. On the basis of this material, we identify and discuss what we see as the central challenges lying ahead for recommender system technology, both in terms of extensions of existing techniques as well as of the integration of techniques and technologies drawn from other research areas."
]
} |
1606.07572 | 2751293983 | Although several RDF knowledge bases are available through the LOD initiative, the ontology schema of such linked datasets is not very rich. In particular, they lack object properties. The problem of finding new object properties (and their instances) between any two given classes has not been investigated in detail in the context of Linked Data. In this paper, we present DART (Detecting Arbitrary Relations for enriching T-Boxes of Linked Data) - an unsupervised solution to enrich the LOD cloud with new object properties between two given classes. DART exploits contextual similarity to identify text patterns from the web corpus that can potentially represent relations between individuals. These text patterns are then clustered by means of paraphrase detection to capture the object properties between the two given LOD classes. DART also performs fully automated mapping of the discovered relations to the properties in the linked dataset. This serves many purposes such as identification of completely new relations, elimination of irrelevant relations, and generation of prospective property axioms. We have empirically evaluated our approach on several pairs of classes and found that the system can indeed be used for enriching the linked datasets with new object properties and their instances. We compared DART with newOntExt system which is an offshoot of the NELL (Never-Ending Language Learning) effort. Our experiments reveal that DART gives better results than newOntExt with respect to both the correctness, as well as the number of relations. | Relation enrichment (with relations other than links) of the linked datasets for the purpose of the overall growth of the LOD cloud has been the major focus in many recent works (surveyed in @cite_36 ). Most of the relation enrichment approaches surveyed in @cite_36 focus on extracting more instances (subject-object pairs) of existing relations in the linked datasets. 
Works such as ( @cite_18 , @cite_7 ), @cite_10 , ( @cite_20 , @cite_9 , @cite_28 and @cite_26 ) and @cite_5 use the technique of interpreting web tables for this purpose, and a few other works such as @cite_30 and @cite_2 propose using various semi-supervised approaches for the same purpose. Distant supervision is another paradigm that has recently been adopted by many works ( @cite_6 , @cite_25 , @cite_35 , @cite_22 , @cite_3 ) in order to extract more instances of existing relations. Distant supervision is the technique of utilizing a large number of known facts (from a huge linked dataset such as Freebase) for automatically labeling mentions of these facts in an unannotated text corpus, hence generating training data. A classifier is learnt based on this weakly labeled training data in order to classify unseen instances @cite_6 . | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_18",
"@cite_26",
"@cite_22",
"@cite_7",
"@cite_36",
"@cite_28",
"@cite_9",
"@cite_6",
"@cite_3",
"@cite_2",
"@cite_5",
"@cite_10",
"@cite_25",
"@cite_20"
],
"mid": [
"2167571757",
"2407215154",
"2070491211",
"64481094",
"37338847",
"",
"",
"",
"2402478263",
"",
"2251419334",
"1515300998",
"2111869785",
"2020022499",
"2107598941",
""
],
"abstract": [
"This paper presents SOFIE, a system for automated ontology extension. SOFIE can parse natural language documents, extract ontological facts from them and link the facts into an ontology. SOFIE uses logical reasoning on the existing knowledge and on the new knowledge in order to disambiguate words to their most probable meaning, to reason on the meaning of text patterns and to take into account world knowledge axioms. This allows SOFIE to check the plausibility of hypotheses and to avoid inconsistencies with the ontology. The framework of SOFIE unites the paradigms of pattern matching, word sense disambiguation and ontological reasoning in one unified model. Our experiments show that SOFIE delivers high-quality output, even from unstructured Internet documents.",
"DBpedia is a Semantic Web project aiming to extract structured data from Wikipedia articles. Due to the increasing number of resources linked to it, DBpedia plays a central role in the Linked Open Data community. Currently, the information contained in DBpedia is mainly collected from Wikipedia infoboxes, a set of subject-attribute-value triples that represents a summary of the Wikipedia page. These infoboxes are manually compiled by the Wikipedia contributors, and in more than 50% of the Wikipedia articles the infobox is missing. In this article, we use the distant supervision paradigm to extract the missing information directly from the Wikipedia article, using a Relation Extraction tool trained on the information already present in DBpedia. We evaluate our system on a data set consisting of seven DBpedia properties, demonstrating the suitability of the approach in extending the DBpedia coverage.",
"The tables embedded in Wikipedia articles contain rich, semi-structured encyclopaedic content. However, the cumulative content of these tables cannot be queried against. We thus propose methods to recover the semantics of Wikipedia tables and, in particular, to extract facts from them in the form of RDF triples. Our core method uses an existing Linked Data knowledge-base to find pre-existing relations between entities in Wikipedia tables, suggesting the same relations as holding for other entities in analogous columns on different rows. We find that such an approach extracts RDF triples from Wikipedia's tables at a raw precision of 40%. To improve the raw precision, we define a set of features for extracted triples that are tracked during the extraction phase. Using a manually labelled gold standard, we then test a variety of machine learning methods for classifying correct/incorrect triples. One such method extracts 7.9 million unique and novel RDF triples from over one million Wikipedia tables at an estimated precision of 81.5%.",
"Vast amounts of information is available in structured forms like spreadsheets, database relations, and tables found in documents and on the Web. We describe an approach that uses linked data to interpret such tables and associate their components with nodes in a reference linked data collection. Our proposed framework assigns a class (i.e. type) to table columns, links table cells to entities, and inferred relations between columns to properties. The resulting interpretation can be used to annotate tables, confirm existing facts in the linked data collection, and propose new facts to be added. Our implemented prototype uses DBpedia as the linked data collection and Wikitology for background knowledge. We evaluated its performance using a collection of tables from Google Squared, Wikipedia and the Web.",
"Relation extraction is a key step in the problem of structuring natural language text. This paper demonstrates a multi-class classifier for relation extraction, constructed using the distant supervision approach, along with resources of the Semantic Web. In particular, the classifier uses a feature based on the class hierarchy of an ontology that, in conjunction with basic lexical features, improves accuracy and recall. The paper contains extensive experiments, using a corpus extracted from the Wikipedia and the DBpedia ontology, to demonstrate the usefulness of the new feature.",
"",
"",
"",
"We describe a framework and prototype system for interpreting tables and extracting entities and relations from them, and producing a linked data representation of the table's contents. This can be used to annotate the table or to add new facts to the linked data collection.",
"",
"In this paper, we extend distant supervision (DS) based on Wikipedia for Relation Extraction (RE) by considering (i) relations defined in external repositories, e.g. YAGO, and (ii) any subset of Wikipedia documents. We show that training data constituted by sentences containing pairs of named entities in target relations is enough to produce reliable supervision. Our experiments with state-of-the-art relation extraction models, trained on the above data, show a meaningful F1 of 74.29 on a manually annotated test set: this highly improves the state-of-the-art in RE using DS. Additionally, our end-to-end experiments demonstrated that our extractors can be applied to any general text document.",
"We present a large-scale relation extraction (RE) system which learns grammar-based RE rules from the Web by utilizing large numbers of relation instances as seed. Our goal is to obtain rule sets large enough to cover the actual range of linguistic variation, thus tackling the long-tail problem of real-world applications. A variant of distant supervision learns several relations in parallel, enabling a new method of rule filtering. The system detects both binary and n-ary relations. We target 39 relations from Freebase, for which 3M sentences extracted from 20M web pages serve as the basis for learning an average of 40K distinctive rules per relation. Employing an efficient dependency parser, the average run time for each relation is only 19 hours. We compare these rules with ones learned from local corpora of different sizes and demonstrate that the Web is indeed needed for a good coverage of linguistic variation.",
"Tables are a universal idiom to present relational data. Billions of tables on Web pages express entity references, attributes and relationships. This representation of relational world knowledge is usually considerably better than completely unstructured, free-format text. At the same time, unlike manually-created knowledge bases, relational information mined from \"organic\" Web tables need not be constrained by availability of precious editorial time. Unfortunately, in the absence of any formal, uniform schema imposed on Web tables, Web search cannot take advantage of these high-quality sources of relational information. In this paper we propose new machine learning techniques to annotate table cells with entities that they likely mention, table columns with types from which entities are drawn for cells in the column, and relations that pairs of table columns seek to express. We propose a new graphical model for making all these labeling decisions for each table simultaneously, rather than make separate local decisions for entities, types and relations. Experiments using the YAGO catalog, DB-Pedia, tables from Wikipedia, and over 25 million HTML tables from a 500 million page Web crawl uniformly show the superiority of our approach. We also evaluate the impact of better annotations on a prototype relational Web search tool. We demonstrate clear benefits of our annotations beyond indexing tables in a purely textual manner.",
"Millions of HTML tables containing structured data can be found on the Web. With their wide coverage, these tables are potentially very useful for filling missing values and extending cross-domain knowledge bases such as DBpedia, YAGO, or the Google Knowledge Graph. As a prerequisite for being able to use table data for knowledge base extension, the HTML tables need to be matched with the knowledge base, meaning that correspondences between table rows/columns and entities/schema elements of the knowledge base need to be found. This paper presents the T2D gold standard for measuring and comparing the performance of HTML table to knowledge base matching systems. T2D consists of 8,700 schema-level and 26,100 entity-level correspondences between the WebDataCommons Web Tables Corpus and the DBpedia knowledge base. In contrast to related work on HTML table to knowledge base matching, the Web Tables Corpus (147 million tables), the knowledge base, as well as the gold standard are publicly available. The gold standard is used afterward to evaluate the performance of T2K Match, an iterative matching method which combines schema and instance matching. T2K Match is designed for the use case of matching large quantities of mostly small and narrow HTML tables against large cross-domain knowledge bases. The evaluation using the T2D gold standard shows that T2K Match discovers table-to-class correspondences with a precision of 94%, row-to-entity correspondences with a precision of 90%, and column-to-property correspondences with a precision of 77%.",
"Modern models of relation extraction for tasks like ACE are based on supervised learning of relations from small hand-labeled corpora. We investigate an alternative paradigm that does not require labeled corpora, avoiding the domain dependence of ACE-style algorithms, and allowing the use of corpora of any size. Our experiments use Freebase, a large semantic database of several thousand relations, to provide distant supervision. For each pair of entities that appears in some Freebase relation, we find all sentences containing those entities in a large unlabeled corpus and extract textual features to train a relation classifier. Our algorithm combines the advantages of supervised IE (combining 400,000 noisy pattern features in a probabilistic classifier) and unsupervised IE (extracting large numbers of relations from large corpora of any domain). Our model is able to extract 10,000 instances of 102 relations at a precision of 67.6%. We also analyze feature performance, showing that syntactic parse features are particularly helpful for relations that are ambiguous or lexically distant in their expression.",
""
]
} |
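The distant-supervision recipe summarized in the record above (known facts from a large KB weakly label any sentence that mentions both entities of a fact) can be sketched as follows; substring matching stands in for real entity linking, and all names and data are illustrative:

```python
def distant_label(sentences, kb_facts):
    """Weak labeling for relation extraction: a sentence mentioning both
    entities of a known fact (e1, relation, e2) becomes a positive
    training example for that relation. The resulting (sentence, e1, e2,
    relation) tuples are the noisy training data a classifier is learnt on."""
    labeled = []
    for sent in sentences:
        for e1, rel, e2 in kb_facts:
            if e1 in sent and e2 in sent:
                labeled.append((sent, e1, e2, rel))
    return labeled
```

In practice the labeling is noisy (a sentence may mention both entities without expressing the relation), which is why the surveyed approaches train probabilistic classifiers over many such examples rather than trusting any single match.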
1606.07572 | 2751293983 | Although several RDF knowledge bases are available through the LOD initiative, the ontology schema of such linked datasets is not very rich. In particular, they lack object properties. The problem of finding new object properties (and their instances) between any two given classes has not been investigated in detail in the context of Linked Data. In this paper, we present DART (Detecting Arbitrary Relations for enriching T-Boxes of Linked Data) - an unsupervised solution to enrich the LOD cloud with new object properties between two given classes. DART exploits contextual similarity to identify text patterns from the web corpus that can potentially represent relations between individuals. These text patterns are then clustered by means of paraphrase detection to capture the object properties between the two given LOD classes. DART also performs fully automated mapping of the discovered relations to the properties in the linked dataset. This serves many purposes such as identification of completely new relations, elimination of irrelevant relations, and generation of prospective property axioms. We have empirically evaluated our approach on several pairs of classes and found that the system can indeed be used for enriching the linked datasets with new object properties and their instances. We compared DART with newOntExt system which is an offshoot of the NELL (Never-Ending Language Learning) effort. Our experiments reveal that DART gives better results than newOntExt with respect to both the correctness, as well as the number of relations. | As more than half of the relations generated by OntExt were invalid (determined manually in @cite_34 ), the authors of OntExt have proposed a classifier which can differentiate between valid and invalid relations to some extent. | {
"cite_N": [
"@cite_34"
],
"mid": [
"2123442489"
],
"abstract": [
"We describe the design and use of the Stanford CoreNLP toolkit, an extensible pipeline that provides core natural language analysis. This toolkit is quite widely used, both in the research NLP community and also among commercial and government users of open source NLP technology. We suggest that this follows from a simple, approachable design, straightforward interfaces, the inclusion of robust and good quality analysis components, and not requiring use of a large amount of associated baggage."
]
} |
1606.07572 | 2751293983 | Although several RDF knowledge bases are available through the LOD initiative, the ontology schema of such linked datasets is not very rich. In particular, they lack object properties. The problem of finding new object properties (and their instances) between any two given classes has not been investigated in detail in the context of Linked Data. In this paper, we present DART (Detecting Arbitrary Relations for enriching T-Boxes of Linked Data) - an unsupervised solution to enrich the LOD cloud with new object properties between two given classes. DART exploits contextual similarity to identify text patterns from the web corpus that can potentially represent relations between individuals. These text patterns are then clustered by means of paraphrase detection to capture the object properties between the two given LOD classes. DART also performs fully automated mapping of the discovered relations to the properties in the linked dataset. This serves many purposes such as identification of completely new relations, elimination of irrelevant relations, and generation of prospective property axioms. We have empirically evaluated our approach on several pairs of classes and found that the system can indeed be used for enriching the linked datasets with new object properties and their instances. We compared DART with newOntExt system which is an offshoot of the NELL (Never-Ending Language Learning) effort. Our experiments reveal that DART gives better results than newOntExt with respect to both the correctness, as well as the number of relations. 
| newOntExt, which was developed from OntExt, introduced a few changes to its workings @cite_13 : instead of considering all the words in between the two input instances as a pattern, newOntExt used ReVerb @cite_0 for extracting the patterns in order to reduce the number of noisy patterns obtained; to optimise the computational cost, a more elegant file structure was used for searching through the sentences; and instead of considering every pair of categories as input to the system, reduced category groups of interest were formed to pick the input category pairs. | {
"cite_N": [
"@cite_0",
"@cite_13"
],
"mid": [
"2471366537",
"1987149482"
],
"abstract": [
"How do we scale information extraction to the massive size and unprecedented heterogeneity of the Web corpus? Beginning in 2003, our KnowItAll project has sought to extract high-quality knowledge from the Web. In 2007, we introduced the Open Information Extraction (Open IE) paradigm which eschews hand-labeled training examples, and avoids domain-specific verbs and nouns, to develop unlexicalized, domain-independent extractors that scale to the Web corpus. Open IE systems have extracted billions of assertions as the basis for both common-sense knowledge and novel question-answering systems. This paper describes the second generation of Open IE systems, which rely on a novel model of how relations and their arguments are expressed in English sentences to double precision/recall compared with previous systems such as TEXTRUNNER and WOE.",
"NELL (Never Ending Language Learning system) is the first system to practice the Never-Ending Machine Learning paradigm techniques. It has an inactive component to continually extend its KB: OntExt. Its main idea is to identify and add to the KB new relations which are frequently asserted in huge text data. Co-occurrence matrices are used to structure the normalized values of co-occurrence between the contexts for each category pair to identify those context patterns. The clustering of each matrix is done with the Weka K-means algorithm: from each cluster, a new possible relation. This work presents newOntExt: a new approach with new features to make the ontology extension task feasible for NELL. This approach also has an alternative task of naming new relations found by another NELL component: Prophet. The relations are classified as valid or invalid by humans; the precision is calculated for each experiment and the results are compared to those relative to OntExt. Initial results show that ontology extension with newOntExt can help Never-Ending Learning systems to expand their volume of beliefs and to keep learning with high precision by acting in auto-supervision and auto-reflection."
]
} |
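For the S-restricted compositions discussed in the surrounding records, the count C(n) satisfies the linear recurrence C(n) = Σ_{s ∈ S, s ≤ n} C(n − s) with C(0) = 1, which is the kind of recurrence the paper calls an interpreter. A minimal dynamic-programming sketch that tabulates this recurrence (rather than solving it in closed form, as the paper attempts):

```python
def count_compositions(n, parts):
    """Number of compositions of n whose summands all lie in `parts`,
    via the recurrence C(m) = sum_{s in parts, s <= m} C(m - s), C(0) = 1.
    Each composition is an ordered sum, so 1+2 and 2+1 are counted separately."""
    C = [0] * (n + 1)
    C[0] = 1  # the empty composition of 0
    for m in range(1, n + 1):
        C[m] = sum(C[m - s] for s in parts if s <= m)
    return C[n]
```

With parts = {1, 2} this reproduces the Fibonacci numbers, one of the well-known special cases of the S-restricted composition problem.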
1606.07915 | 2473026520 | An S-restricted composition of a positive integer n is an ordered partition of n where each summand is drawn from a given subset S of positive integers. There are various problems regarding such compositions which have received attention in recent years. This paper is an attempt at finding a closed-form formula for the number of S-restricted compositions of n. To do so, we reduce the problem to finding solutions to corresponding so-called interpreters which are linear homogeneous recurrence relations with constant coefficients. Then, we reduce interpreters to Diophantine equations. Such equations are not in general solvable. Thus, we restrict our attention to those S-restricted composition problems whose interpreters have a small number of coefficients, thereby leading to solvable Diophantine equations. The formalism developed is then used to study the integer sequences related to some well-known cases of the S-restricted composition problem. | Many applied problems involve variants of the integer composition problem where different kinds of constraints are placed on the structure or parts of the composition. We briefly review earlier research on integer compositions with such constraints. As a first example, we can refer to works considering palindromic compositions, which read the same from the left and from the right @cite_45 . Another example is the problem of locally restricted compositions, in which every @math successive parts meet particular constraints @cite_10 @cite_36 @cite_26 . Compositions whose parts are drawn from a given set and whose successive parts of a given length avoid particular patterns have also been investigated in the literature @cite_41 @cite_2 . Research on this problem has also led to generating functions that yield a closed-form solution in special cases @cite_22 . Generating functions have also been derived for the number of compositions in which swapping two parts is not relevant @cite_46 . 
Compositions avoiding some partially ordered patterns have also been studied @cite_14 . Partially ordered patterns define a partial order relation on parts of a composition. | {
"cite_N": [
"@cite_26",
"@cite_14",
"@cite_22",
"@cite_36",
"@cite_41",
"@cite_45",
"@cite_2",
"@cite_46",
"@cite_10"
],
"mid": [
"2146489966",
"2117480208",
"",
"",
"2083800620",
"",
"2076917081",
"",
"2096790857"
],
"abstract": [
"We study compositions of the integer @math in which the first part, successive differences, and the last part are constrained to lie in prescribed sets @math , respectively. A simple condition on @math guarantees that the generating function @math has only a simple pole on its circle of convergence and this at @math , a function independent of @math and @math . Thus the number of compositions is asymptotic to @math for a suitable constant @math . We prove a multivariate central and local limit theorem and apply it to various statistics of random locally restricted compositions of @math , such as number of parts, numbers of parts of given sizes, and number of rises. The first and last parts are shown to have limiting distributions and to be asymptotically independent. If @math has only finitely many positive elements @math , or finitely many negative elements @math , then the largest part and number of distinct part sizes are almost surely @math . On the other hand, when both @math and @math have a positive asymptotic lower \"local log-density\", we prove that the largest part and number of distinct part sizes are almost surely @math , and we give sufficient conditions for the largest part to be almost surely asymptotic to @math .",
"A partially ordered (generalized) pattern (POP) is a generalized pattern some of whose letters are incomparable, an extension of generalized permutation patterns introduced by Babson and Steingrímsson. POPs were introduced in the symmetric group by Kitaev [19, 21], and studied in the set of k-ary words by Kitaev and Mansour [22]. Moreover, [23] introduced segmented POPs in compositions. In this paper, we study avoidance of POPs in compositions and generalize results for avoidance of POPs in permutations and words. Specifically, we obtain results for the generating functions for the number of compositions that avoid shuffle patterns and multi-patterns. In addition, we give the generating function for the distribution of the maximum number of nonoverlapping occurrences of a segmented POP τ (that is allowed to have repeated letters) among the compositions of n with m parts in a given set, provided we know the generating function for the number of compositions of n with m parts in the given set that avoid τ. This result is a q-analogue of the main result in [22].",
"",
"",
"We find generating functions for the number of compositions avoiding a single pattern or a pair of patterns of length three on the alphabet 1,2 and determine which of them are Wilf-equivalent on compositions. We also derive the number of permutations of a multiset which avoid these same patterns and determine the Wilf-equivalence of these patterns on permutations of multisets.",
"",
"We show that among the compositions of n into positive parts, the number g(n) of those that avoid a given pattern @math of three letters is independent of @math . We find the generating function of g(n), and it shows that the sequence g(n) is not P-recursive. If S is a given multiset, we show that the number of permutations of S that avoid a pattern @math of three letters is independent of @math . Finally, we give a bijective proof of the fact that if M = 1^{a_1}...k^{a_k} is a given multiset then the number of permutations of M that avoid the pattern (123) is a symmetric function of the multiplicities a_1,...,a_k. The bijection uses the Greene-Kleitman symmetric chain decomposition of the Boolean lattice.",
"",
"We define the notion of @math -free for locally restricted compositions, which means roughly that if such a composition contains a part @math and nearby parts are at least @math smaller, then @math can be replaced by any larger part. Two well-known examples are Carlitz and alternating compositions. We show that large parts have asymptotically geometric distributions. This leads to asymptotically independent Poisson variables for numbers of various large parts. Based on this we obtain asymptotic formulas for the probability of being gap free and for the expected values of the largest part and number of distinct parts, all accurate to @math ."
]
} |
1606.07915 | 2473026520 | An S-restricted composition of a positive integer n is an ordered partition of n where each summand is drawn from a given subset S of positive integers. There are various problems regarding such compositions which have received attention in recent years. This paper is an attempt at finding a closed-form formula for the number of S-restricted compositions of n. To do so, we reduce the problem to finding solutions to corresponding so-called interpreters which are linear homogeneous recurrence relations with constant coefficients. Then, we reduce interpreters to Diophantine equations. Such equations are not in general solvable. Thus, we restrict our attention to those S-restricted composition problems whose interpreters have a small number of coefficients, thereby leading to solvable Diophantine equations. The formalism developed is then used to study the integer sequences related to some well-known cases of the S-restricted composition problem. | Another approach to the problem of @math -restricted compositions is to reduce it to some other known problems. For example, it has been shown that some cases of the problem are in bijective correspondence with some classes of restricted binary strings and pattern-avoiding permutations @cite_19 . It has also been shown that the number of @math -restricted compositions equals specific Fibonacci numbers for some specific sets @math @cite_43 . | {
"cite_N": [
"@cite_19",
"@cite_43"
],
"mid": [
"2182636777",
"1656327215"
],
"abstract": [
"We study several restricted classes of compositions by giving one-to-one maps between them and different classes of restricted binary strings or pattern avoiding permutations. Inspired by the ECO method [8], new succession rules for these classes are presented. Finally, we obtain generating algorithms in Constant Amortized Time (CAT) for these classes.",
"A bijective proof is given for the following theorem: the number of compositions of n into odd parts equals the number of compositions of n + 1 into parts greater than one. Some commentary about the history of partitions and compositions is provided."
]
} |
1606.07461 | 2950606534 | Recurrent neural networks, and in particular long short-term memory (LSTM) networks, are a remarkably effective tool for sequence modeling that learn a dense black-box hidden representation of their sequential input. Researchers interested in better understanding these models have studied the changes in hidden state representations over time and noticed some interpretable patterns but also significant noise. In this work, we present LSTMVIS, a visual analysis tool for recurrent neural networks with a focus on understanding these hidden state dynamics. The tool allows users to select a hypothesis input range to focus on local state changes, to match these state changes to similar patterns in a large data set, and to align these results with structural annotations from their domain. We show several use cases of the tool for analyzing specific hidden state properties on datasets containing nesting, phrase structure, and chord progressions, and demonstrate how the tool can be used to isolate patterns for further statistical analysis. We characterize the domain, the different stakeholders, and their goals and tasks. | Our core contribution, visualizing the state dynamics of RNNs in a structured way, is inspired by previous work on convolutional neural networks in vision applications @cite_20 @cite_13 . In linguistic tasks, visualizations have been shown to be a useful tool for understanding certain aspects of RNNs. @cite_15 use static visualization techniques to help understand hidden states in language models. Their work demonstrates that selected cells can model clear events such as open parentheses and the start of URLs. @cite_33 present additional techniques, particularly the use of gradient-based saliency to find important words. Their work also looks at several different models and datasets including text classification and auto-encoders.
@cite_34 @cite_3 show that RNNs specifically learn lexical categories and grammatical functions that carry semantic information, partially by modifying the inputs fed to the model. While inspired by these techniques, our approach tries to extend beyond single examples and provide a general interactive visualization approach of the raw data for exploratory analysis. | {
"cite_N": [
"@cite_33",
"@cite_3",
"@cite_15",
"@cite_34",
"@cite_13",
"@cite_20"
],
"mid": [
"",
"2251503060",
"1951216520",
"2950290711",
"2952186574",
"2962851944"
],
"abstract": [
"",
"Recurrent neural networks (RNN) have gained a reputation for beating state-of-the-art results on many NLP benchmarks and for learning representations of words and larger linguistic units that encode complex syntactic and semantic structures. However, it is not straight-forward to understand how exactly these models make their decisions. Recently (2015) developed methods to provide linguistically motivated analysis for RNNs trained for sentiment analysis. Here we focus on the analysis of a multi-modal Gated Recurrent Neural Network (GRU) architecture trained to predict image-vectors extracted from images using a CNN trained on ImageNet from their corresponding descriptions. We propose two methods to explore the importance of grammatical categories with respect to the model and the task. We observe that the model pays most attention to head-words, noun subjects and adjectival modifiers and least to determiners and coordinations.",
"Recurrent Neural Networks (RNNs), and specifically a variant with Long Short-Term Memory (LSTM), are enjoying renewed interest as a result of successful applications in a wide range of machine learning problems that involve sequential data. However, while LSTMs provide exceptional results in practice, the source of their performance and their limitations remain rather poorly understood. Using character-level language models as an interpretable testbed, we aim to bridge this gap by providing an analysis of their representations, predictions and error types. In particular, our experiments reveal the existence of interpretable cells that keep track of long-range dependencies such as line lengths, quotes and brackets. Moreover, our comparative analysis with finite horizon n-gram models traces the source of the LSTM improvements to long-range structural dependencies. Finally, we provide analysis of the remaining errors and suggest areas for further study.",
"We present novel methods for analyzing the activation patterns of RNNs from a linguistic point of view and explore the types of linguistic structure they learn. As a case study, we use a multi-task gated recurrent network architecture consisting of two parallel pathways with shared word embeddings trained on predicting the representations of the visual scene corresponding to an input sentence, and predicting the next word in the same sentence. Based on our proposed method to estimate the amount of contribution of individual tokens in the input to the final prediction of the networks we show that the image prediction pathway: a) is sensitive to the information structure of the sentence b) pays selective attention to lexical categories and grammatical functions that carry semantic information c) learns to treat the same input token differently depending on its grammatical functions in the sentence. In contrast the language model is comparatively more sensitive to words with a syntactic function. Furthermore, we propose methods to explore the function of individual hidden units in RNNs and show that the two pathways of the architecture in our case study contain specialized units tuned to patterns informative for the task, some of which can carry activations to later time steps to encode long-term dependencies.",
"Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also perform an ablation study to discover the performance contribution from different model layers. This enables us to find model architectures that outperform Krizhevsky al on the ImageNet classification benchmark. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets.",
"This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [5], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [13]."
]
} |
1606.07461 | 2950606534 | Recurrent neural networks, and in particular long short-term memory (LSTM) networks, are a remarkably effective tool for sequence modeling that learn a dense black-box hidden representation of their sequential input. Researchers interested in better understanding these models have studied the changes in hidden state representations over time and noticed some interpretable patterns but also significant noise. In this work, we present LSTMVIS, a visual analysis tool for recurrent neural networks with a focus on understanding these hidden state dynamics. The tool allows users to select a hypothesis input range to focus on local state changes, to match these state changes to similar patterns in a large data set, and to align these results with structural annotations from their domain. We show several use cases of the tool for analyzing specific hidden state properties on datasets containing nesting, phrase structure, and chord progressions, and demonstrate how the tool can be used to isolate patterns for further statistical analysis. We characterize the domain, the different stakeholders, and their goals and tasks. | Recent work has also developed methods for extending RNNs for certain problems to make them easier to interpret (along with improving the models). One popular technique has been to use a neural mechanism to allow the model to focus in on a particular aspect of the input. @cite_4 use attention for soft alignment in machine translation. @cite_25 use attention to identify important aspects of an image for captioning, whereas @cite_31 use attention to find important aspects of a document for an extraction task. These approaches have the side benefit that they visualize the aspect of the model they are using. This approach differs from our work in that it requires changing the underlying model structure, whereas we attempt to interpret the hidden states of a fixed model directly. | {
"cite_N": [
"@cite_31",
"@cite_4",
"@cite_25"
],
"mid": [
"2949615363",
"2133564696",
"2950178297"
],
"abstract": [
"Teaching machines to read natural language documents remains an elusive challenge. Machine reading systems can be tested on their ability to answer questions posed on the contents of documents that they have seen, but until now large scale training and test datasets have been missing for this type of evaluation. In this work we define a new methodology that resolves this bottleneck and provides large scale supervised reading comprehension data. This allows us to develop a class of attention based deep neural networks that learn to read real documents and answer complex questions with minimal prior knowledge of language structure.",
"Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.",
"Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO."
]
} |
1606.07461 | 2950606534 | Recurrent neural networks, and in particular long short-term memory (LSTM) networks, are a remarkably effective tool for sequence modeling that learn a dense black-box hidden representation of their sequential input. Researchers interested in better understanding these models have studied the changes in hidden state representations over time and noticed some interpretable patterns but also significant noise. In this work, we present LSTMVIS, a visual analysis tool for recurrent neural networks with a focus on understanding these hidden state dynamics. The tool allows users to select a hypothesis input range to focus on local state changes, to match these state changes to similar patterns in a large data set, and to align these results with structural annotations from their domain. We show several use cases of the tool for analyzing specific hidden state properties on datasets containing nesting, phrase structure, and chord progressions, and demonstrate how the tool can be used to isolate patterns for further statistical analysis. We characterize the domain, the different stakeholders, and their goals and tasks. | There has been some work on interactive visualization for interpreting machine learning models. @cite_23 present a visualization system for feed-forward neural networks with the goal of interpretation, and @cite_9 give a user-interface for tuning the learning itself. The Prospector system @cite_2 provides a general-purpose tool for practitioners to better understand their machine learning model and its predictions. | {
"cite_N": [
"@cite_9",
"@cite_23",
"@cite_2"
],
"mid": [
"",
"2149482703",
"2394669110"
],
"abstract": [
"",
"Artificial neural networks are computer software or hardware models inspired by the structure and behavior of neurons in the human nervous system. As a powerful learning tool, increasingly neural networks have been adopted by many large-scale information processing applications but there is no a set of well defined criteria for choosing a neural network. The user mostly treats a neural network as a black box and cannot explain how learning from input data was done nor how performance can be consistently ensured. We have experimented with several information visualization designs aiming to open the black box to possibly uncover underlying dependencies between the input data and the output data of a neural network. In this paper, we present our designs and show that the visualizations not only help us design more efficient neural networks, but also assist us in the process of using neural networks for problem solving such as performing a classification task.",
"Understanding predictive models, in terms of interpreting and identifying actionable insights, is a challenging task. Often the importance of a feature in a model is only a rough estimate condensed into one number. However, our research goes beyond these naive estimates through the design and implementation of an interactive visual analytics system, Prospector. By providing interactive partial dependence diagnostics, data scientists can understand how features affect the prediction overall. In addition, our support for localized inspection allows data scientists to understand how and why specific datapoints are predicted as they are, as well as support for tweaking feature values and seeing how the prediction responds. Our system is then evaluated using a case study involving a team of data scientists improving predictive models for detecting the onset of diabetes from electronic medical records."
]
} |
1606.07461 | 2950606534 | Recurrent neural networks, and in particular long short-term memory (LSTM) networks, are a remarkably effective tool for sequence modeling that learn a dense black-box hidden representation of their sequential input. Researchers interested in better understanding these models have studied the changes in hidden state representations over time and noticed some interpretable patterns but also significant noise. In this work, we present LSTMVIS, a visual analysis tool for recurrent neural networks with a focus on understanding these hidden state dynamics. The tool allows users to select a hypothesis input range to focus on local state changes, to match these state changes to similar patterns in a large data set, and to align these results with structural annotations from their domain. We show several use cases of the tool for analyzing specific hidden state properties on datasets containing nesting, phrase structure, and chord progressions, and demonstrate how the tool can be used to isolate patterns for further statistical analysis. We characterize the domain, the different stakeholders, and their goals and tasks. | Recent work also describes systems that focus on analysis of hidden states for convolutional neural networks. @cite_24 utilize a DAG metaphor to show neurons, their connections, and learned features. @cite_18 use projections to explore relationships between neurons and learned observations. Other work has focused on user interfaces for constructing models, such as TensorBoard @cite_1 and the related playground for convolutional neural models at playground.tensorflow.org. Our work is most similar in spirit to the work by @cite_23 , @cite_24 , and @cite_18 , in that we are concerned with interpreting the hidden states of neural network models. However, our specific goals focus on RNNs and the needs of specific users, and the resulting visual design is significantly different. | {
"cite_N": [
"@cite_24",
"@cite_18",
"@cite_1",
"@cite_23"
],
"mid": [
"2343061342",
"",
"2271840356",
"2149482703"
],
"abstract": [
"Deep convolutional neural networks (CNNs) have achieved breakthrough performance in many pattern recognition tasks such as image classification. However, the development of high-quality deep models typically relies on a substantial amount of trial-and-error, as there is still no clear understanding of when and why a deep model works. In this paper, we present a visual analytics approach for better understanding, diagnosing, and refining deep CNNs. We formulate a deep CNN as a directed acyclic graph. Based on this formulation, a hybrid visualization is developed to disclose the multiple facets of each neuron and the interactions between them. In particular, we introduce a hierarchical rectangle packing algorithm and a matrix reordering algorithm to show the derived features of a neuron cluster. We also propose a biclustering-based edge bundling method to reduce visual clutter caused by a large number of connections between neurons. We evaluated our method on a set of CNNs and the results are generally favorable.",
"",
"TensorFlow is an interface for expressing machine learning algorithms, and an implementation for executing such algorithms. A computation expressed using TensorFlow can be executed with little or no change on a wide variety of heterogeneous systems, ranging from mobile devices such as phones and tablets up to large-scale distributed systems of hundreds of machines and thousands of computational devices such as GPU cards. The system is flexible and can be used to express a wide variety of algorithms, including training and inference algorithms for deep neural network models, and it has been used for conducting research and for deploying machine learning systems into production across more than a dozen areas of computer science and other fields, including speech recognition, computer vision, robotics, information retrieval, natural language processing, geographic information extraction, and computational drug discovery. This paper describes the TensorFlow interface and an implementation of that interface that we have built at Google. The TensorFlow API and a reference implementation were released as an open-source package under the Apache 2.0 license in November, 2015 and are available at www.tensorflow.org.",
"Artificial neural networks are computer software or hardware models inspired by the structure and behavior of neurons in the human nervous system. As a powerful learning tool, increasingly neural networks have been adopted by many large-scale information processing applications but there is no a set of well defined criteria for choosing a neural network. The user mostly treats a neural network as a black box and cannot explain how learning from input data was done nor how performance can be consistently ensured. We have experimented with several information visualization designs aiming to open the black box to possibly uncover underlying dependencies between the input data and the output data of a neural network. In this paper, we present our designs and show that the visualizations not only help us design more efficient neural networks, but also assist us in the process of using neural networks for problem solving such as performing a classification task."
]
} |
1606.07695 | 2464737488 | Acoustic event detection for content analysis in most cases relies on lots of labeled data. However, manually annotating data is a time-consuming task, which thus makes few annotated resources available so far. Unlike audio event detection, automatic audio tagging, a multi-label acoustic event classification task, only relies on weakly labeled data. This is highly desirable for some practical applications using audio analysis. In this paper we propose to use a fully deep neural network (DNN) framework to handle the multi-label classification task in a regression way. Considering that only chunk-level rather than frame-level labels are available, the whole or almost whole frames of the chunk were fed into the DNN to perform a multi-label regression for the expected tags. The fully DNN, which is regarded as an encoding function, can well map the audio features sequence to a multi-tag vector. A deep pyramid structure was also designed to extract more robust high-level features related to the target tags. Further improved methods were adopted, such as the Dropout and background noise aware training, to enhance its generalization capability for new audio recordings in mismatched environments. Compared with the conventional Gaussian Mixture Model (GMM) and support vector machine (SVM) methods, the proposed fully DNN-based method could well utilize the long-term temporal information with the whole chunk as the input. The results show that our approach obtained a 15% relative improvement compared with the official GMM-based method of DCASE 2016 challenge. | Multiple instance learning is described in terms of bags @math . The @math th instance in the @math th bag, @math , is defined as @math where @math , and @math is the number of instances in @math . @math 's label is @math . If @math , then @math for all @math . If @math , then at least one instance @math is a positive example of the underlying concept @cite_26 . | {
"cite_N": [
"@cite_26"
],
"mid": [
"2108745803"
],
"abstract": [
"This paper presents two new formulations of multiple-instance learning as a maximum margin problem. The proposed extensions of the Support Vector Machine (SVM) learning approach lead to mixed integer quadratic programs that can be solved heuristically. Our generalization of SVMs makes a state-of-the-art classification technique, including non-linear classification via kernels, available to an area that up to now has been largely dominated by special purpose methods. We present experimental results on a pharmaceutical data set and on applications in automated image indexing and document categorization."
]
} |
1606.07211 | 2467312824 | This paper tackles the problem of the semantic gap between a document and a query within an ad-hoc information retrieval task. In this context, knowledge bases (KBs) have already been acknowledged as valuable means since they allow the representation of explicit relations between entities. However, they do not necessarily represent implicit relations that could be hidden in a corpus. This latter issue is tackled by recent works dealing with deep representation learning of texts. With this in mind, we argue that embedding KBs within deep neural architectures supporting document-query matching would give rise to fine-grained latent representations of both words and their semantic relations. In this paper, we review the main approaches of neural-based document ranking as well as those approaches for latent representation of entities and relations via KBs. We then propose some avenues to incorporate KBs in deep neural approaches for document ranking. More particularly, this paper advocates that KBs can be used either to support enhanced latent representations of queries and documents based on both distributional and relational semantics or to serve as a semantic translator between their latent distributional representations. | Recently, many works have shown that deep learning approaches are highly efficient in several IR tasks (e.g., text matching @cite_16 @cite_0 , query reformulation @cite_3 , or question-answering @cite_12 ). Closer to our work, we consider in this paper the specific task of text matching and the use of deep neural networks for document ranking. Indeed, deep architectures have been highlighted as effective in the discovery of hidden structures underlying plain text modeled through latent semantic features. | {
"cite_N": [
"@cite_0",
"@cite_16",
"@cite_12",
"@cite_3"
],
"mid": [
"2136189984",
"100623710",
"2951622387",
"2143196462"
],
"abstract": [
"Latent semantic models, such as LSA, intend to map a query to its relevant documents at the semantic level where keyword-based matching often fails. In this study we strive to develop a series of new latent semantic models with a deep structure that project queries and documents into a common low-dimensional space where the relevance of a document given a query is readily computed as the distance between them. The proposed deep structured semantic models are discriminatively trained by maximizing the conditional likelihood of the clicked documents given a query using the clickthrough data. To make our models applicable to large-scale Web search applications, we also use a technique called word hashing, which is shown to effectively scale up our semantic models to handle large vocabularies which are common in such tasks. The new models are evaluated on a Web document ranking task using a real-world data set. Results show that our best model significantly outperforms other latent semantic models, which were considered state-of-the-art in the performance prior to the work presented in this paper.",
"A central goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on several methods to speed-up both training and probability computation, as well as comparative experiments to evaluate the improvements brought by these techniques. We finally describe the incorporation of this new language model into a state-of-the-art speech recognizer of conversational speech.",
"This paper presents a system which learns to answer questions on a broad range of topics from a knowledge base using few hand-crafted features. Our model learns low-dimensional embeddings of words and knowledge base constituents; these representations are used to score natural language questions against candidate answers. Training our system using pairs of questions and structured representations of their answers, and pairs of question paraphrases, yields competitive results on a competitive benchmark of the literature.",
"Search logs contain examples of frequently occurring patterns of user reformulations of queries. Intuitively, the reformulation \"San Francisco\" -- \"San Francisco 49ers\" is semantically similar to \"Detroit\" -- \"Detroit Lions\". Likewise, \"London\" -- \"things to do in London\" and \"New York\" -- \"New York tourist attractions\" can also be considered similar transitions in intent. The reformulation \"movies\" -- \"new movies\" and \"york\" -- \"New York\", however, are clearly different despite the lexical similarities in the two reformulations. In this paper, we study the distributed representation of queries learnt by deep neural network models, such as the Convolutional Latent Semantic Model, and show that they can be used to represent query reformulations as vectors. These reformulation vectors exhibit favourable properties such as mapping semantically and syntactically similar query changes closer in the embedding space. Our work is motivated by the success of continuous space language models in capturing relationships between words and their meanings using offset vectors. We demonstrate a way to extend the same intuition to represent query reformulations. Furthermore, we show that the distributed representations of queries and reformulations are both useful for modelling session context for query prediction tasks, such as for query auto-completion (QAC) ranking. Our empirical study demonstrates that short-term (session) history context features based on these two representations improves the mean reciprocal rank (MRR) for the QAC ranking task by more than 10 over a supervised ranker baseline. Our results also show that by using features based on both these representations together we achieve a better performance, than either of them individually."
]
} |
1606.07211 | 2467312824 | This paper tackles the problem of the semantic gap between a document and a query within an ad-hoc information retrieval task. In this context, knowledge bases (KBs) have already been acknowledged as valuable means since they allow the representation of explicit relations between entities. However, they do not necessarily represent implicit relations that could be hidden in a corpus. This latter issue is tackled by recent works dealing with deep representation learning of texts. With this in mind, we argue that embedding KBs within deep neural architectures supporting document-query matching would give rise to fine-grained latent representations of both words and their semantic relations. In this paper, we review the main approaches of neural-based document ranking as well as approaches for latent representation of entities and relations via KBs. We then propose some avenues to incorporate KBs in deep neural approaches for document ranking. More specifically, this paper advocates that KBs can be used either to support enhanced latent representations of queries and documents based on both distributional and relational semantics or to serve as a semantic translator between their latent distributional representations. | We distinguish between two types of neural IR models according to how distributed representations of text are learned and leveraged. The first category of work uses distributed representations to exploit text dependence within a well-known IR model, such as language models @cite_16 @cite_15 . Also, Mitra @cite_5 has recently proposed a model that leverages dual word embeddings to better measure document-query relevance. By keeping both the input and output projections of word2vec @cite_9 , it leverages both embedding spaces to acquire richer distributional relationships. The author demonstrated that this model better gauges whether a document is about the query. | {
"cite_N": [
"@cite_5",
"@cite_15",
"@cite_16",
"@cite_9"
],
"mid": [
"2260194779",
"2949547296",
"100623710",
"1614298861"
],
"abstract": [
"A fundamental goal of search engines is to identify, given a query, documents that have relevant text. This is intrinsically difficult because the query and the document may use different vocabulary, or the document may contain query words without being relevant. We investigate neural word embeddings as a source of evidence in document ranking. We train a word2vec embedding model on a large unlabelled query corpus, but in contrast to how the model is commonly used, we retain both the input and the output projections, allowing us to leverage both the embedding spaces to derive richer distributional relationships. During ranking we map the query words into the input space and the document words into the output space, and compute a query-document relevance score by aggregating the cosine similarities across all the query-document word pairs. We postulate that the proposed Dual Embedding Space Model (DESM) captures evidence on whether a document is about a query term in addition to what is modelled by traditional term-frequency based approaches. Our experiments show that the DESM can re-rank top documents returned by a commercial Web search engine, like Bing, better than a term-matching based signal like TF-IDF. However, when ranking a larger set of candidate documents, we find the embeddings-based approach is prone to false positives, retrieving documents that are only loosely related to the query. We demonstrate that this problem can be solved effectively by ranking based on a linear mixture of the DESM and the word counting features.",
"Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite their popularity, bag-of-words features have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, \"powerful,\" \"strong\" and \"Paris\" are equally distant. In this paper, we propose Paragraph Vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. Our algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that Paragraph Vectors outperform bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks.",
"A central goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on several methods to speed-up both training and probability computation, as well as comparative experiments to evaluate the improvements brought by these techniques. We finally describe the incorporation of this new language model into a state-of-the-art speech recognizer of conversational speech.",
""
]
} |
1606.07211 | 2467312824 | This paper tackles the problem of the semantic gap between a document and a query within an ad-hoc information retrieval task. In this context, knowledge bases (KBs) have already been acknowledged as valuable means since they allow the representation of explicit relations between entities. However, they do not necessarily represent implicit relations that could be hidden in a corpus. This latter issue is tackled by recent works dealing with deep representation learning of texts. With this in mind, we argue that embedding KBs within deep neural architectures supporting document-query matching would give rise to fine-grained latent representations of both words and their semantic relations. In this paper, we review the main approaches of neural-based document ranking as well as approaches for latent representation of entities and relations via KBs. We then propose some avenues to incorporate KBs in deep neural approaches for document ranking. More specifically, this paper advocates that KBs can be used either to support enhanced latent representations of queries and documents based on both distributional and relational semantics or to serve as a semantic translator between their latent distributional representations. | As an extension of the DSSM proposed in @cite_0 , @cite_8 propose to consider word-trigram vectors enhanced by a word hashing layer (instead of word hashing on the basis of bag-of-words) to capture the fine-grained contextual structures in the query and the document. Accordingly, the end-to-end scoring model adopts a convolutional-pooling structure, called the Convolutional Latent Semantic Model (CLSM). | {
"cite_N": [
"@cite_0",
"@cite_8"
],
"mid": [
"2136189984",
"2131876387"
],
"abstract": [
"Latent semantic models, such as LSA, intend to map a query to its relevant documents at the semantic level where keyword-based matching often fails. In this study we strive to develop a series of new latent semantic models with a deep structure that project queries and documents into a common low-dimensional space where the relevance of a document given a query is readily computed as the distance between them. The proposed deep structured semantic models are discriminatively trained by maximizing the conditional likelihood of the clicked documents given a query using the clickthrough data. To make our models applicable to large-scale Web search applications, we also use a technique called word hashing, which is shown to effectively scale up our semantic models to handle large vocabularies which are common in such tasks. The new models are evaluated on a Web document ranking task using a real-world data set. Results show that our best model significantly outperforms other latent semantic models, which were considered state-of-the-art in the performance prior to the work presented in this paper.",
"In this paper, we propose a new latent semantic model that incorporates a convolutional-pooling structure over word sequences to learn low-dimensional, semantic vector representations for search queries and Web documents. In order to capture the rich contextual structures in a query or a document, we start with each word within a temporal context window in a word sequence to directly capture contextual features at the word n-gram level. Next, the salient word n-gram features in the word sequence are discovered by the model and are then aggregated to form a sentence-level feature vector. Finally, a non-linear transformation is applied to extract high-level semantic information to generate a continuous vector representation for the full text string. The proposed convolutional latent semantic model (CLSM) is trained on clickthrough data and is evaluated on a Web document ranking task using a large-scale, real-world data set. Results show that the proposed model effectively captures salient semantic information in queries and documents for the task while significantly outperforming previous state-of-the-art semantic models."
]
} |
1606.07211 | 2467312824 | This paper tackles the problem of the semantic gap between a document and a query within an ad-hoc information retrieval task. In this context, knowledge bases (KBs) have already been acknowledged as valuable means since they allow the representation of explicit relations between entities. However, they do not necessarily represent implicit relations that could be hidden in a corpus. This latter issue is tackled by recent works dealing with deep representation learning of texts. With this in mind, we argue that embedding KBs within deep neural architectures supporting document-query matching would give rise to fine-grained latent representations of both words and their semantic relations. In this paper, we review the main approaches of neural-based document ranking as well as approaches for latent representation of entities and relations via KBs. We then propose some avenues to incorporate KBs in deep neural approaches for document ranking. More specifically, this paper advocates that KBs can be used either to support enhanced latent representations of queries and documents based on both distributional and relational semantics or to serve as a semantic translator between their latent distributional representations. | In the same vein, Severyn and Moschitti @cite_1 present another convolutional neural network architecture that learns the optimal representation of short text pairs as well as the similarity function. Given a pair of sentences modeled as matrices of pre-trained word embeddings, this model first learns their intermediate feature representations by applying convolution-pooling layers to each sentence. A similarity score between the intermediate representations of the document and the query is computed and then enhanced by additional features (e.g., query-document word IDF overlap). This richer representation is fed into a fully connected layer that classifies whether or not the document is similar to the query. 
Another convolutional architecture for matching two sentences is proposed in @cite_18 . Instead of relying on their semantic vectors, the authors use a deep architecture with multiple convolutional layers to model the interaction between plain texts (i.e., the co-occurrence patterns of words across two texts). The proposed model represents the hierarchical structures of sentences and captures rich matching patterns at different levels. | {
"cite_N": [
"@cite_18",
"@cite_1"
],
"mid": [
"2951359136",
"1966443646"
],
"abstract": [
"Semantic matching is of central importance to many natural language tasks bordes2014semantic,RetrievalQA . A successful matching algorithm needs to adequately model the internal structures of language objects and the interaction between them. As a step toward this goal, we propose convolutional neural network models for matching two sentences, by adapting the convolutional strategy in vision and speech. The proposed models not only nicely represent the hierarchical structures of sentences with their layer-by-layer composition and pooling, but also capture the rich matching patterns at different levels. Our models are rather generic, requiring no prior knowledge on language, and can hence be applied to matching tasks of different nature and in different languages. The empirical study on a variety of matching tasks demonstrates the efficacy of the proposed model on a variety of matching tasks and its superiority to competitor models.",
"Learning a similarity function between pairs of objects is at the core of learning to rank approaches. In information retrieval tasks we typically deal with query-document pairs, in question answering -- question-answer pairs. However, before learning can take place, such pairs needs to be mapped from the original space of symbolic words into some feature space encoding various aspects of their relatedness, e.g. lexical, syntactic and semantic. Feature engineering is often a laborious task and may require external knowledge sources that are not always available or difficult to obtain. Recently, deep learning approaches have gained a lot of attention from the research community and industry for their ability to automatically learn optimal feature representation for a given task, while claiming state-of-the-art performance in many tasks in computer vision, speech recognition and natural language processing. In this paper, we present a convolutional neural network architecture for reranking pairs of short texts, where we learn the optimal representation of text pairs and a similarity function to relate them in a supervised way from the available training data. Our network takes only words in the input, thus requiring minimal preprocessing. In particular, we consider the task of reranking short text pairs where elements of the pair are sentences. We test our deep learning system on two popular retrieval tasks from TREC: Question Answering and Microblog Retrieval. Our model demonstrates strong performance on the first task beating previous state-of-the-art systems by about 3 absolute points in both MAP and MRR and shows comparable results on tweet reranking, while enjoying the benefits of no manual feature engineering and no additional syntactic parsers."
]
} |
1606.07211 | 2467312824 | This paper tackles the problem of the semantic gap between a document and a query within an ad-hoc information retrieval task. In this context, knowledge bases (KBs) have already been acknowledged as valuable means since they allow the representation of explicit relations between entities. However, they do not necessarily represent implicit relations that could be hidden in a corpus. This latter issue is tackled by recent works dealing with deep representation learning of texts. With this in mind, we argue that embedding KBs within deep neural architectures supporting document-query matching would give rise to fine-grained latent representations of both words and their semantic relations. In this paper, we review the main approaches of neural-based document ranking as well as approaches for latent representation of entities and relations via KBs. We then propose some avenues to incorporate KBs in deep neural approaches for document ranking. More specifically, this paper advocates that KBs can be used either to support enhanced latent representations of queries and documents based on both distributional and relational semantics or to serve as a semantic translator between their latent distributional representations. | The potential of semantic representations of words learned through neural approaches was introduced in @cite_9 @cite_2 , opening several perspectives in natural language processing and IR tasks. Beyond words, several works have focused on the representation of sentences @cite_10 , documents @cite_15 , and also knowledge bases (KBs) @cite_11 @cite_7 . In the latter line of work focusing on KBs, the goal is to exploit concepts and their relationships to obtain a latent representation of the KB. 
While some work focuses on representing relations on the basis of triplets belonging to the KB @cite_11 , other work enhances the distributed representations of words and their underlying concepts by taking into consideration the structure of the KB graph (e.g., concepts in the same category or their relationships with other concepts) @cite_4 @cite_7 @cite_6 . | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_9",
"@cite_6",
"@cite_2",
"@cite_15",
"@cite_10",
"@cite_11"
],
"mid": [
"",
"2125076245",
"1614298861",
"2250930514",
"2250539671",
"2949547296",
"2950133940",
"2127795553"
],
"abstract": [
"",
"Representing words into vectors in continuous space can form up a potentially powerful basis to generate high-quality textual features for many text mining and natural language processing tasks. Some recent efforts, such as the skip-gram model, have attempted to learn word representations that can capture both syntactic and semantic information among text corpus. However, they still lack the capability of encoding the properties of words and the complex relationships among words very well, since text itself often contains incomplete and ambiguous information. Fortunately, knowledge graphs provide a golden mine for enhancing the quality of learned word representations. In particular, a knowledge graph, usually composed by entities (words, phrases, etc.), relations between entities, and some corresponding meta information, can supply invaluable relational knowledge that encodes the relationship between entities as well as categorical knowledge that encodes the attributes or properties of entities. Hence, in this paper, we introduce a novel framework called RC-NET to leverage both the relational and categorical knowledge to produce word representations of higher quality. Specifically, we build the relational knowledge and the categorical knowledge into two separate regularization functions, and combine both of them with the original objective function of the skip-gram model. By solving this combined optimization problem using back propagation neural networks, we can obtain word representations enhanced by the knowledge graph. Experiments on popular text mining and natural language processing tasks, including analogical reasoning, word similarity, and topic prediction, have all demonstrated that our model can significantly improve the quality of word representations.",
"",
"Word embeddings learned on unlabeled data are a popular tool in semantics, but may not capture the desired semantics. We propose a new learning objective that incorporates both a neural language model objective (, 2013) and prior knowledge from semantic resources to learn improved lexical semantic embeddings. We demonstrate that our embeddings improve over those learned solely on raw text in three settings: language modeling, measuring semantic similarity, and predicting human judgements.",
"Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75 on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.",
"Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite their popularity, bag-of-words features have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, \"powerful,\" \"strong\" and \"Paris\" are equally distant. In this paper, we propose Paragraph Vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. Our algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that Paragraph Vectors outperform bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks.",
"The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.",
"We consider the problem of embedding entities and relationships of multi-relational data in low-dimensional vector spaces. Our objective is to propose a canonical model which is easy to train, contains a reduced number of parameters and can scale up to very large databases. Hence, we propose TransE, a method which models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities. Despite its simplicity, this assumption proves to be powerful since extensive experiments show that TransE significantly outperforms state-of-the-art methods in link prediction on two knowledge bases. Besides, it can be successfully trained on a large scale data set with 1M entities, 25k relationships and more than 17M training samples."
]
} |
1606.07211 | 2467312824 | This paper tackles the problem of the semantic gap between a document and a query within an ad-hoc information retrieval task. In this context, knowledge bases (KBs) have already been acknowledged as valuable means since they allow the representation of explicit relations between entities. However, they do not necessarily represent implicit relations that could be hidden in a corpus. This latter issue is tackled by recent works dealing with deep representation learning of texts. With this in mind, we argue that embedding KBs within deep neural architectures supporting document-query matching would give rise to fine-grained latent representations of both words and their semantic relations. In this paper, we review the main approaches of neural-based document ranking as well as approaches for latent representation of entities and relations via KBs. We then propose some avenues to incorporate KBs in deep neural approaches for document ranking. More specifically, this paper advocates that KBs can be used either to support enhanced latent representations of queries and documents based on both distributional and relational semantics or to serve as a semantic translator between their latent distributional representations. | In addition to word similarity tasks, the literature shows that KBs are also exploited in question-answering tasks. For instance, @cite_12 exploit a KB to learn latent representations of questions and candidate answers. The latter are modeled as subgraphs built by a sophisticated inference procedure that captures the relationship of the question object with the candidate answer as well as its related entities. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2951622387"
],
"abstract": [
"This paper presents a system which learns to answer questions on a broad range of topics from a knowledge base using few hand-crafted features. Our model learns low-dimensional embeddings of words and knowledge base constituents; these representations are used to score natural language questions against candidate answers. Training our system using pairs of questions and structured representations of their answers, and pairs of question paraphrases, yields competitive results on a competitive benchmark of the literature."
]
} |
1606.07189 | 2022580522 | As one of the leading platforms for creative content, Tumblr offers advertisers a unique way of creating brand identity. Advertisers can tell their story through images, animation, text, music, video, and more, and can promote that content by sponsoring it to appear as an advertisement in the users' live feeds. In this paper, we present a framework that enabled two of the key targeted advertising components for Tumblr, gender and interest targeting. We describe the main challenges encountered during the development of the framework, which include the creation of a ground truth for training gender prediction models, as well as mapping Tumblr content to a predefined interest taxonomy. For purposes of inferring user interests, we propose a novel semi-supervised neural language model for categorization of Tumblr content (i.e., post tags and post keywords). The model was trained on a large-scale data set consisting of @math billion user posts, with a very limited number of categorized keywords, and was shown to have superior performance over the baseline approaches. We successfully deployed gender and interest targeting capability in Yahoo production systems, delivering inference for users that covers more than 90% of daily activities on Tumblr. Online performance results indicate advantages of the proposed approach, where we observed a 20% increase in user engagement with sponsored posts in comparison to untargeted campaigns. | We consider personalization through the domain of ad targeting @cite_16 , where the task is to find the best matching ads to be displayed for each individual user. This improves the user's online experience (as only relevant and interesting ads are shown) and can lead to increased revenue for the advertisers (as users are more likely to click on the ad and make a purchase). 
Due to its large impact and many open research questions, targeted advertising has garnered significant interest from the machine learning community, as witnessed by a large number of recent workshops (e.g., www.targetad-workshop.net) and publications @cite_12 @cite_7 . | {
"cite_N": [
"@cite_16",
"@cite_7",
"@cite_12"
],
"mid": [
"",
"1919633771",
"2096963972"
],
"abstract": [
"",
"Online service platforms (OSPs), such as search engines, news-websites, ad-providers, etc., serve highly personalized content to the user, based on the profile extracted from her history with the OSP. In this paper, we capture OSP's personalization for a user in a new data structure called the personalization vector (η), which is a weighted vector over a set of topics, and present efficient algorithms to learn it. Our approach treats OSPs as black-boxes, and extracts η by mining only their output, specifically, the personalized (for a user) and vanilla (without any user information) contents served, and the differences in these content. We believe that such treatment of OSPs is a unique aspect of our work, not just enabling access to (so far hidden) profiles in OSPs, but also providing a novel and practical approach for retrieving information from OSPs by mining differences in their outputs. We formulate a new model called Latent Topic Personalization (LTP) that captures the personalization vector in a learning framework and present efficient inference algorithms for determining it. We perform extensive experiments targeting search engine personalization, using data from both real Google users and a synthetic setup. Our results indicate that LTP achieves high accuracy (R-pre = 84%) in discovering personalized topics. For Google data, our qualitative results demonstrate that the topics determined by LTP for a user correspond well to his ad-categories determined by Google.",
"Computational advertising is an emerging scientific discipline, at the intersection of large scale search and text analysis, information retrieval, statistical modeling, machine learning, optimization, and microeconomics. The central challenge of computational advertising is to find the \"best match\" between a given user in a given context and a suitable advertisement. The context could be a user entering a query in a search engine (\"sponsored search\"), a user reading a web page (\"content match\" and \"display ads\"), a user conversing on a cell phone (\"mobile advertising\"), and so on. The information about the user can vary from scarily detailed to practically nil. The number of potential advertisements might be in the billions. Thus, depending on the definition of \"best match\" this challenge leads to a variety of massive optimization and search problems, with complicated constraints. The main part of this talk will give an introduction to computational advertising and present some illustrative research. In the second part we will discuss connections to recommender systems and present a couple of open problems of potential interest to both communities."
]
} |
1606.07189 | 2022580522 | As one of the leading platforms for creative content, Tumblr offers advertisers a unique way of creating brand identity. Advertisers can tell their story through images, animation, text, music, video, and more, and can promote that content by sponsoring it to appear as an advertisement in the users' live feeds. In this paper, we present a framework that enabled two of the key targeted advertising components for Tumblr, gender and interest targeting. We describe the main challenges encountered during the development of the framework, which include the creation of a ground truth for training gender prediction models, as well as mapping Tumblr content to a predefined interest taxonomy. For purposes of inferring user interests, we propose a novel semi-supervised neural language model for categorization of Tumblr content (i.e., post tags and post keywords). The model was trained on a large-scale data set consisting of @math billion user posts, with a very limited amount of categorized keywords, and was shown to have superior performance over the baseline approaches. We successfully deployed gender and interest targeting capability in Yahoo production systems, delivering inference for users that covers more than 90% of daily activities on Tumblr. Online performance results indicate advantages of the proposed approach, where we observed a 20% increase in user engagement with sponsored posts in comparison to untargeted campaigns. | One of the basic approaches in ad targeting is to target users with ads based on their demographics, such as age or gender. Historically, this approach has proven to work better than targeting random users. However, while for some products this type of targeting may be sufficient (e.g., women's makeup, women's clothing, men's razors, men's clothing), for others it is not effective enough and more involved profiling of users is required. 
A popular method in today's ad targeting that addresses this issue is known as interest targeting, where ads are assigned categories, such as "sports" or "travel", and machine learning models are trained to predict user interest in each of these categories using historical user behavior @cite_17 @cite_11 @cite_15 . Typically, a taxonomy is used to decide on the targeting categories, and for each ad category a separate predictive model is trained, able to estimate the probability of an ad click. Then, the models are evaluated on the entire user population, with @math users with the highest score selected for ad exposure. In this work we take this approach to develop an ad targeting platform on Tumblr. | {
"cite_N": [
"@cite_15",
"@cite_11",
"@cite_17"
],
"mid": [
"1990590523",
"1997305783",
"2142534468"
],
"abstract": [
"Web applications often rely on user profiles of observed user actions, such as queries issued, page views, etc. In audience selection for display advertising, the audience that is likely to be responsive to a given ad campaign is identified via such profiles. We formalize the audience selection problem as a ranked retrieval task over an index of known users. We focus on the common case of audience selection where a small seed set of users who have previously responded positively to the campaign is used to identify a broader target audience. The actions of the users in the seed set are aggregated to construct a query, the query is then executed against an index of other user profiles to retrieve the highest scoring profiles. We validate our approach on a real-world dataset, demonstrating the trade-offs of different user and query models and that our approach is particularly robust for small campaigns. The proposed user modeling framework is applicable to many other applications requiring user profiles such as content suggestion and personalization.",
"Understanding what interests and delights users is critical to effective behavioral targeting, especially in information-poor contexts. As users interact with content and advertising, their passive behavior can reveal their interests towards advertising. Two issues are critical for building effective targeting methods: what metric to optimize for and how to optimize. More specifically, we first attempt to understand what the learning objective should be for behavioral targeting so as to maximize advertiser's performance. While most popular advertising methods optimize for user clicks, as we will show, maximizing clicks does not necessarily imply maximizing purchase activities or transactions, called conversions, which directly translate to advertiser's revenue. In this work we focus on conversions which makes a more relevant metric but also the more challenging one. Second is the issue of how to represent and combine the plethora of user activities such as search queries, page views, ad clicks to perform the targeting. We investigate several sources of user activities as well as methods for inferring conversion likelihood given the activities. We also explore the role played by the temporal aspect of user activities for targeting, e.g., how recent activities compare to the old ones. Based on a rigorous offline empirical evaluation over 200 individual advertising campaigns, we arrive at what we believe are best practices for behavioral targeting. We deploy our approach over live user traffic to demonstrate its superiority over existing state-of-the-art targeting methods.",
"Historical user activity is key for building user profiles to predict the user behavior and affinities in many web applications such as targeting of online advertising, content personalization and social recommendations. User profiles are temporal, and changes in a user's activity patterns are particularly useful for improved prediction and recommendation. For instance, an increased interest in car-related web pages may well suggest that the user might be shopping for a new vehicle.In this paper we present a comprehensive statistical framework for user profiling based on topic models which is able to capture such effects in a fully fashion. Our method models topical interests of a user dynamically where both the user association with the topics and the topics themselves are allowed to vary over time, thus ensuring that the profiles remain current. We describe a streaming, distributed inference algorithm which is able to handle tens of millions of users. Our results show that our model contributes towards improved behavioral targeting of display advertising relative to baseline models that do not incorporate topical and or temporal dependencies. As a side-effect our model yields human-understandable results which can be used in an intuitive fashion by advertisers."
]
} |
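The interest-targeting recipe described in the related-work field above (one predictive model per ad category, scored over the entire user population, with the top-scoring users selected for exposure) can be sketched as follows. This is a minimal illustration only: the linear click model, the feature layout, and the function name are assumptions for the example, not Tumblr's production system.

```python
import numpy as np

def top_k_users(features, weights, k):
    """Score every user with a per-category linear click model; return top-k indices."""
    # Sigmoid turns the linear score into an estimated click probability.
    scores = 1.0 / (1.0 + np.exp(-features @ weights))
    # Select the k users with the highest estimated probability of an ad click.
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(0)
features = rng.normal(size=(100, 5))           # 100 users, 5 behavioral features
weights = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # toy model for one category
chosen = top_k_users(features, weights, 10)

assert len(chosen) == 10
# With this toy model the selection reduces to ranking by the first feature.
assert set(chosen) == set(np.argsort(features[:, 0])[::-1][:10])
```

In a real system one such scoring pass would run per taxonomy category, each with its own trained weights.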
1606.07189 | 2022580522 | As one of the leading platforms for creative content, Tumblr offers advertisers a unique way of creating brand identity. Advertisers can tell their story through images, animation, text, music, video, and more, and can promote that content by sponsoring it to appear as an advertisement in the users' live feeds. In this paper, we present a framework that enabled two of the key targeted advertising components for Tumblr, gender and interest targeting. We describe the main challenges encountered during the development of the framework, which include the creation of a ground truth for training gender prediction models, as well as mapping Tumblr content to a predefined interest taxonomy. For purposes of inferring user interests, we propose a novel semi-supervised neural language model for categorization of Tumblr content (i.e., post tags and post keywords). The model was trained on a large-scale data set consisting of @math billion user posts, with a very limited amount of categorized keywords, and was shown to have superior performance over the baseline approaches. We successfully deployed gender and interest targeting capability in Yahoo production systems, delivering inference for users that covers more than 90% of daily activities on Tumblr. Online performance results indicate advantages of the proposed approach, where we observed a 20% increase in user engagement with sponsored posts in comparison to untargeted campaigns. | To the best of our knowledge, the Tumblr social network has been considered by only a few scientific studies. In @cite_13 @cite_4 the authors discuss the problem of blog recommendation, while the authors of @cite_14 explore Tumblr social norms. However, our work is the first paper that addresses ad targeting at Tumblr. | {
"cite_N": [
"@cite_14",
"@cite_13",
"@cite_4"
],
"mid": [
"",
"2067825627",
"2398028996"
],
"abstract": [
"",
"User recommender systems are a key component in any on-line social networking platform: they help the users growing their network faster, thus driving engagement and loyalty. In this paper we study link prediction with explanations for user recommendation in social networks. For this problem we propose WTFW (\"Who to Follow and Why\"), a stochastic topic model for link prediction over directed and nodes-attributed graphs. Our model not only predicts links, but for each predicted link it decides whether it is a \"topical\" or a \"social\" link, and depending on this decision it produces a different type of explanation. A topical link is recommended between a user interested in a topic and a user authoritative in that topic: the explanation in this case is a set of binary features describing the topic responsible of the link creation. A social link is recommended between users which share a large social neighborhood: in this case the explanation is the set of neighbors which are more likely to be responsible for the link creation. Our experimental assessment on real-world data confirms the accuracy of WTFW in the link prediction and the quality of the associated explanations.",
"In microblogging sites, recommending blogs (users) to follow is one of the core tasks for enhancing user experience. In this paper, we propose a novel inductive matrix completion based blog recommendation method to effectively utilize multiple rich sources of evidence such as the social network and the content as well as the activity data from users and blogs. Experiments on a large-scale real-world dataset from Tumblr show the effectiveness of the proposed blog recommendation method."
]
} |
1606.07395 | 2470823798 | Tropical geometry and its applications indicate a "theory of syzygies" over polytope semirings. Taking cue from this indication, we study a notion of syzygies over the polytope semiring. We begin our exploration with the concept of Newton basis, an analogue of Gröbner basis that captures the image of an ideal under the Newton polytope map. The image @math of a graded ideal @math under the Newton polytope is a graded sub-semimodule of the polytope semiring. Analogous to the Hilbert series, we define the notion of Newton-Hilbert series that encodes the rank of each graded piece of @math . We prove the rationality of the Newton-Hilbert series for sub-semimodules that satisfy a property analogous to Cohen-Macaulayness. We define notions of regular sequence of polytopes and syzygies of polytopes. We show an analogue of the Koszul property characterizing the syzygies of a regular sequence of polytopes. | Polytope semirings (under the name polytope algebra) appeared in the book of Pachter and Sturmfels [Chapter 2] PacStu05 in the context of computational biology. Semiring theory, in particular idempotent semiring theory, has been treated in several books and articles, see for example @cite_12 . Linear algebra over semirings has been the focus of these works. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2143658255"
],
"abstract": [
"Preface. 1. Hemirings and semirings: definitions and examples. 2. Sets and relations with values in a semiring. 3. Building new semirings from old. 4. Some conditions on semirings. 5. Complemented elements in semirings. 6. Ideals in semirings. 7. Prime and semiprime ideals in semirings. 8. Factor semirings. 9. Morphisms of semirings. 10. Kernels of morphisms. 11. Semirings of fractions. 12. Euclidean semirings. 13. Additively-regular semirings. 14. Semimodules over semirings. 15. Factor semimodules. 16. Some constructions for semimodules. 17. Free, projective, and injective semimodules. 18. Localization of semimodules. 19. Linear algebra over a semiring. 20. Partially-ordered semirings. 21. Lattice-ordered semirings. 22. Complete semirings. 23. Complete semimodules. 24. CLO-semirings. 25. Fixed points of affine maps. References. Index of applications. Index of terminology."
]
} |
1606.07395 | 2470823798 | Tropical geometry and its applications indicate a "theory of syzygies" over polytope semirings. Taking cue from this indication, we study a notion of syzygies over the polytope semiring. We begin our exploration with the concept of Newton basis, an analogue of Gröbner basis that captures the image of an ideal under the Newton polytope map. The image @math of a graded ideal @math under the Newton polytope is a graded sub-semimodule of the polytope semiring. Analogous to the Hilbert series, we define the notion of Newton-Hilbert series that encodes the rank of each graded piece of @math . We prove the rationality of the Newton-Hilbert series for sub-semimodules that satisfy a property analogous to Cohen-Macaulayness. We define notions of regular sequence of polytopes and syzygies of polytopes. We show an analogue of the Koszul property characterizing the syzygies of a regular sequence of polytopes. | Recent works of MacPherson @cite_4 , @cite_7 introduce an analogue of integral closure over idempotent semirings and a notion of projective modules over polyhedral semirings. Our work is another step towards commutative algebra over semirings. Our results are over polytope semirings and do not seem to directly extend to arbitrary semirings. This is primarily because of our use of the Lebesgue measure on the set of polytopes. Other contexts where polytope semirings appear are Litvinov @cite_14 , Connes [Page 19] Con15 . Litvinov @cite_14 emphasizes a correspondence principle between classical analysis and idempotent analysis. According to this principle, every result in classical analysis has an idempotent analogue. Theorem and Theorem confirm a similar interplay between commutative rings and polytope semirings. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_7"
],
"mid": [
"1496147208",
"2241210168",
"1496667393"
],
"abstract": [
"A brief introduction to tropical and idempotent mathematics (with an emphasis on idempotent functional analysis) is presented. Applications to classical mechanics and geometry are especially examined.",
"I classify projective modules over idempotent semirings that are free on a monoid. The analysis extends to the case of the semiring of convex, piecewise-affine functions on a polyhedron, for which projective modules correspond to convex families of weight polyhedra for the general linear group.",
"I introduce the concept of integral closure for elements and ideals in idempotent semirings, and establish how it corresponds to its namesake in commutative algebra. In the case of free semirings, integral closure can be understood in terms of a certain monoid of convex bodies under Minkowski sum. I argue that integral closure therefore accounts for the difference between 'tropical functions' and real functions.",
]
} |
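The polytope semiring discussed in the rows above can be made concrete in a minimal sketch. Restricting to 1-D polytopes (closed intervals) for illustration, semiring "addition" is the convex hull of the union (which is idempotent) and "multiplication" is the Minkowski sum; the `Interval` class and its operator names are assumptions for this example, not notation from the paper.

```python
class Interval:
    """A 1-D polytope [lo, hi], as a toy element of the polytope semiring."""

    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Semiring addition: convex hull of the union of the two polytopes.
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))

    def __mul__(self, other):
        # Semiring multiplication: Minkowski sum of the two polytopes.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __eq__(self, other):
        return (self.lo, self.hi) == (other.lo, other.hi)

p, q = Interval(0, 1), Interval(2, 5)
assert p + p == p                    # addition is idempotent
assert p + q == Interval(0, 5)       # convex hull of the union
assert p * q == Interval(2, 6)       # Minkowski sum
assert (p + q) * p == p * p + q * p  # distributivity holds
```

The idempotence of addition is exactly what distinguishes such semirings from rings and motivates the separate "correspondence principle" treatment cited above.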
1606.06854 | 2466381304 | Previous learning based hand pose estimation methods do not fully exploit the prior information in hand model geometry. Instead, they usually rely on a separate model fitting step to generate valid hand poses. Such a post processing is inconvenient and sub-optimal. In this work, we propose a model based deep learning approach that adopts a forward kinematics based layer to ensure the geometric validity of estimated poses. For the first time, we show that embedding such a non-linear generative process in deep learning is feasible for hand pose estimation. Our approach is verified on challenging public datasets and achieves state-of-the-art performance. | A good review of earlier hand pose estimation work is in @cite_5 . @cite_23 provides an extensive analysis of recent depth based methods and datasets. Here we focus on the hybrid discriminative and generative approaches that are more related to our work. We also discuss other approaches that formulate handcraft operations into differentiable components. | {
"cite_N": [
"@cite_5",
"@cite_23"
],
"mid": [
"2137940226",
"2951798378"
],
"abstract": [
"Direct use of the hand as an input device is an attractive method for providing natural human-computer interaction (HCI). Currently, the only technology that satisfies the advanced requirements of hand-based input for HCI is glove-based sensing. This technology, however, has several drawbacks including that it hinders the ease and naturalness with which the user can interact with the computer-controlled environment, and it requires long calibration and setup procedures. Computer vision (CV) has the potential to provide more natural, non-contact solutions. As a result, there have been considerable research efforts to use the hand as an input device for HCI. In particular, two types of research directions have emerged. One is based on gesture classification and aims to extract high-level abstract information corresponding to motion patterns or postures of the hand. The second is based on pose estimation systems and aims to capture the real 3D motion of the hand. This paper presents a literature review on the latter research direction, which is a very challenging problem in the context of HCI.",
"Hand pose estimation has matured rapidly in recent years. The introduction of commodity depth sensors and a multitude of practical applications have spurred new advances. We provide an extensive analysis of the state-of-the-art, focusing on hand pose estimation from a single depth frame. To do so, we have implemented a considerable number of systems, and will release all software and evaluation code. We summarize important conclusions here: (1) Pose estimation appears roughly solved for scenes with isolated hands. However, methods still struggle to analyze cluttered scenes where hands may be interacting with nearby objects and surfaces. To spur further progress we introduce a challenging new dataset with diverse, cluttered scenes. (2) Many methods evaluate themselves with disparate criteria, making comparisons difficult. We define a consistent evaluation criteria, rigorously motivated by human experiments. (3) We introduce a simple nearest-neighbor baseline that outperforms most existing systems. This implies that most systems do not generalize beyond their training sets. This also reinforces the under-appreciated point that training data is as important as the model itself. We conclude with directions for future progress."
]
} |
1606.06854 | 2466381304 | Previous learning based hand pose estimation methods do not fully exploit the prior information in hand model geometry. Instead, they usually rely on a separate model fitting step to generate valid hand poses. Such a post processing is inconvenient and sub-optimal. In this work, we propose a model based deep learning approach that adopts a forward kinematics based layer to ensure the geometric validity of estimated poses. For the first time, we show that embedding such a non-linear generative process in deep learning is feasible for hand pose estimation. Our approach is verified on challenging public datasets and achieves state-of-the-art performance. | Many works use discriminative methods for initialization and generative methods for refinement. @cite_19 predicts joint locations with a convolutional neural network. The joints are converted to a hand skeleton using an Inverse Kinematics (IK) process. @cite_6 uses a pixel classification random forest to provide a coarse prediction of joints. Thus a more detailed similarity function can be applied to the following model fitting step by directly comparing the generated joint locations to the predicted joint locations. Similarly, @cite_22 first uses a random regression forest to estimate the joint distribution, and then builds a more reliable quality measurement scheme based on the consistency between generated joint locations and the predicted distribution. All these approaches separate the joint estimation and model fitting in two stages. Recently, @cite_15 trains a feedback loop for hand pose estimation using three neural networks. It combines a generative network, a discriminative pose estimation network and a pose update network. The training is complex. Our method differs from the above methods in that it uses a single network and seamlessly integrates the model generation process with a new layer. The training is simple and the results are good. | {
"cite_N": [
"@cite_19",
"@cite_15",
"@cite_22",
"@cite_6"
],
"mid": [
"2075156252",
"2210697964",
"2272518632",
""
],
"abstract": [
"We present a novel method for real-time continuous pose recovery of markerless complex articulable objects from a single depth image. Our method consists of the following stages: a randomized decision forest classifier for image segmentation, a robust method for labeled dataset generation, a convolutional network for dense feature extraction, and finally an inverse kinematics stage for stable real-time pose recovery. As one possible application of this pipeline, we show state-of-the-art results for real-time puppeteering of a skinned hand-model.",
"We propose an entirely data-driven approach to estimating the 3D pose of a hand given a depth image. We show that we can correct the mistakes made by a Convolutional Neural Network trained to predict an estimate of the 3D pose by using a feedback loop. The components of this feedback loop are also Deep Networks, optimized using training data. They remove the need for fitting a 3D model to the input data, which requires both a carefully designed fitting function and algorithm. We show that our approach outperforms state-of-the-art methods, and is efficient as our implementation runs at over 400 fps on a single GPU.",
"Model-based approaches to 3D hand tracking have been shown to perform well in a wide range of scenarios. However, they require initialisation and cannot recover easily from tracking failures that occur due to fast hand motions. Data-driven approaches, on the other hand, can quickly deliver a solution, but the results often suffer from lower accuracy or missing anatomical validity compared to those obtained from model-based approaches. In this work we propose a hybrid approach for hand pose estimation from a single depth image. First, a learned regressor is employed to deliver multiple initial hypotheses for the 3D position of each hand joint. Subsequently, the kinematic parameters of a 3D hand model are found by deliberately exploiting the inherent uncertainty of the inferred joint proposals. This way, the method provides anatomically valid and accurate solutions without requiring manual initialisation or suffering from track losses. Quantitative results on several standard datasets demonstrate that the proposed method outperforms state-of-the-art representatives of the model-based, data-driven and hybrid paradigms.",
""
]
} |
1606.06854 | 2466381304 | Previous learning based hand pose estimation methods do not fully exploit the prior information in hand model geometry. Instead, they usually rely on a separate model fitting step to generate valid hand poses. Such a post processing is inconvenient and sub-optimal. In this work, we propose a model based deep learning approach that adopts a forward kinematics based layer to ensure the geometric validity of estimated poses. For the first time, we show that embedding such a non-linear generative process in deep learning is feasible for hand pose estimation. Our approach is verified on challenging public datasets and achieves state-of-the-art performance. | In principle, a network can adopt any differentiable functions and be optimized end-to-end using gradient descent. @cite_2 proposed a differentiable renderer to generate an RGB image given appearance, geometry and camera parameters. This generative process can be used in a neural network. @cite_1 leverages the fact that the associated feature computation is piecewise differentiable, and therefore Histogram of Oriented Gradient (HOG) features can be extracted in a differentiable way. @cite_25 reformulates the split function in decision trees as a Bernoulli routing probability. The decision trees are plugged at the end of a neural network and trained together. To the best of our knowledge, we are the first to adopt a generative hand model in deep learning. | {
"cite_N": [
"@cite_1",
"@cite_25",
"@cite_2"
],
"mid": [
"2949212545",
"",
"183071939"
],
"abstract": [
"The Histogram of Oriented Gradient (HOG) descriptor has led to many advances in computer vision over the last decade and is still part of many state of the art approaches. We realize that the associated feature computation is piecewise differentiable and therefore many pipelines which build on HOG can be made differentiable. This lends to advanced introspection as well as opportunities for end-to-end optimization. We present our implementation of @math HOG based on the auto-differentiation toolbox Chumpy and show applications to pre-image visualization and pose estimation which extends the existing differentiable renderer OpenDR pipeline. Both applications improve on the respective state-of-the-art HOG approaches.",
"",
"Inverse graphics attempts to take sensor data and infer 3D geometry, illumination, materials, and motions such that a graphics renderer could realistically reproduce the observed scene. Renderers, however, are designed to solve the forward process of image synthesis. To go in the other direction, we propose an approximate differentiable renderer (DR) that explicitly models the relationship between changes in model parameters and image observations. We describe a publicly available OpenDR framework that makes it easy to express a forward graphics model and then automatically obtain derivatives with respect to the model parameters and to optimize over them. Built on a new auto-differentiation package and OpenGL, OpenDR provides a local optimization method that can be incorporated into probabilistic programming frameworks. We demonstrate the power and simplicity of programming with OpenDR by using it to solve the problem of estimating human body shape from Kinect depth and RGB data."
]
} |
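The forward-kinematics layer at the heart of this paper relies on the fact that forward kinematics is a smooth map from joint angles to joint positions, so gradients can flow through it during training. A minimal sketch for a planar two-bone chain follows; the bone lengths, the two-joint chain, and the function name are illustrative assumptions, not the hand model used in the paper.

```python
import numpy as np

def fk(angles, lengths):
    """Forward kinematics of a planar chain: joint angles -> joint positions."""
    pts, pos, theta = [np.zeros(2)], np.zeros(2), 0.0
    for a, l in zip(angles, lengths):
        theta += a  # angles accumulate down the kinematic chain
        pos = pos + l * np.array([np.cos(theta), np.sin(theta)])
        pts.append(pos)
    return np.array(pts)

angles, lengths = np.array([np.pi / 2, -np.pi / 2]), [1.0, 1.0]
pts = fk(angles, lengths)
# First bone points straight up; the second bends back to horizontal.
assert np.allclose(pts[1], [0.0, 1.0])
assert np.allclose(pts[2], [1.0, 1.0])

# fk is smooth in the angles, so a deep-learning framework can backpropagate
# through it; here we only confirm this with a finite-difference derivative.
eps = 1e-6
d = (fk(angles + [eps, 0], lengths) - fk(angles - [eps, 0], lengths)) / (2 * eps)
assert np.isfinite(d).all()
```

Embedding such a layer after a pose-parameter regressor constrains the network output to geometrically valid poses by construction, which is the property the abstract emphasizes.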
1606.06692 | 2468857815 | The uplink data arriving at the Machine-to-Machine (M2M) Application Server (AS) via M2M Aggregators (MAs) is fairly heterogeneous along several dimensions such as maximum tolerable packet delay, payload size and arrival rate, thus necessitating the design of Quality-of-Service (QoS) aware packet scheduler. In this paper, we classify the M2M uplink data into multiple QoS classes and use sigmoidal function to map the delay requirements of each class onto utility functions. We propose a proportionally fair delay-optimal multiclass packet scheduler at AS that maximizes a system utility metric. We note that the average class delay under any work-conserving scheduling policy can be realized by appropriately time-sharing between all possible preemptive priority policies. Therefore the optimal scheduler is determined using an iterative process to determine the optimal time-sharing between all priority scheduling policies, such that it results in maximum system utility. The proposed scheduler can be implemented online with reduced complexity due to the iterative optimization process. We then extend this work to determine jointly optimal MA-AS channel allocation and packet scheduling scheme at the MAs and AS. We first formulate a joint optimization problem that is solved centrally at the AS and then propose a low complexity distributed optimization problem solved independently at MAs and AS. We show that the distributed optimization solution converges quickly to the centralized optimization result with minimal information exchange overhead between MAs and AS. Using Monte-Carlo simulations, we verify the optimality of the proposed scheduler and show that it outperforms other state-of-the-art packet schedulers such as weighted round robin, max-weight scheduler etc. Another desirable feature of proposed scheduler is low delay jitter for delay-sensitive traffic. 
| Another line of work considers the design of packet schedulers for real-time control applications (see @cite_23 and references therein) and assumes hybrid task sets comprising periodic tasks with hard deadlines and aperiodic tasks with hard, soft or non-real-time requirements. The goal of these schedulers is to guarantee schedulability for critical tasks (with hard deadlines) while providing a best-effort service for other tasks. The periodic tasks from different flows are processed by assuming a fixed-priority assignment and with service deadline equal to the arrival period of that flow, while the aperiodic tasks from various flows are aggregated and served using a First-Come-First-Serve policy. Some of the popular algorithms include Rate Monotonic scheduling @cite_27 , Polling Server, Deferrable Server @cite_22 , Priority Exchange @cite_24 , Sporadic Server @cite_5 and Slack-Stealing @cite_1 . However, for most practical M2M systems, the arrival rate and delay requirements are potentially different for each aperiodic flow and thus aggregating the aperiodic flows will degrade the overall delay performance. Furthermore, these schemes define a server, a virtual periodic task with certain computation time and priority level to process outstanding aperiodic tasks. But the determination of the delay-optimal period and capacity for given traffic characteristics is not clear. | {
"cite_N": [
"@cite_22",
"@cite_1",
"@cite_24",
"@cite_27",
"@cite_23",
"@cite_5"
],
"mid": [
"2140936050",
"2160498682",
"",
"2109488193",
"2137784941",
"2570030068"
],
"abstract": [
"Most existing scheduling algorithms for hard real-time systems apply either to periodic tasks or aperiodic tasks but not to both. In practice, real-time systems require an integrated, consistent approach to scheduling that is able to simultaneously meet the timing requirements of hard deadline periodic tasks, hard deadline aperiodic (alert-class) tasks, and soft deadline aperiodic tasks. This paper introduces the Deferrable Server (DS) algorithm which will be shown to provide improved aperiodic response time performance over traditional background and polling approaches. Taking advantage of the fact that, typically, there is no benefit in early completion of the periodic tasks, the Deferrable Server (DS) algorithm assigns higher priority to the aperiodic tasks up until the point where the periodic tasks would start to miss their deadlines. Guaranteed alert-class aperiodic service and greatly reduced response times for soft deadline aperiodic tasks are important features of the DS algorithm, and both are obtained with the hard deadlines of the periodic tasks still being guaranteed. The results of a simulation study performed to evaluate the response time performance of the new algorithm against traditional background and polling approaches are presented. In all cases, the response times of aperiodic tasks are significantly reduced (often by an order of magnitude) while still maintaining guaranteed periodic task deadlines.",
"A novel algorithm for servicing soft deadline aperiodic tasks in a real-time system in which hard deadline periodic tasks are scheduled using a fixed priority algorithm is presented. This algorithm is proved to be optimal in the sense that it provides the shortest aperiodic response time among all possible aperiodic service methods. Simulation studies show that it offers substantial performance improvements over current approaches, including the sporadic server algorithm. Moreover, standard queuing formulas can be used to predict aperiodic response times over a wide range of conditions. The algorithm can be extended to schedule hard deadline aperiodics and to efficiently reclaim unused periodic service time when periodic tasks have stochastic execution times.",
"",
"The problem of multiprogram scheduling on a single processor is studied from the viewpoint of the characteristics peculiar to the program functions that need guaranteed service. It is shown that an optimum fixed priority scheduler possesses an upper bound to processor utilization which may be as low as 70 percent for large task sets. It is also shown that full processor utilization can be achieved by dynamically assigning priorities on the basis of their current deadlines. A combination of these two scheduling techniques is also discussed.",
"Hard Real-Time Computing Systems: Predictable Scheduling Algorithms and Applications is a basic treatise on real-time computing, with particular emphasis on predictable scheduling algorithms. It introduces the fundamental concepts of real-time computing, illustrates the most significant results in the field, and provides the essential methodologies for designing predictable computing systems which can be used to support critical control applications. This volume serves as a textbook for advanced level courses on the topic. Each chapter provides basic concepts, which are followed by algorithms that are illustrated with concrete examples, figures and tables. Exercises are included with each chapter and solutions are given at the end of the book. The book also provides an excellent reference for those interested in real-time computing for designing and or developing predictable control applications.",
"This thesis develops the Sporadic Server (SS) algorithm for scheduling aperiodic tasks in real-time systems. The SS algorithm is an extension of the rate monotonic algorithm which was designed to schedule periodic tasks. This thesis demonstrates that the SS algorithm is able to guarantee deadlines for hard-deadline aperiodic tasks and provide good responsiveness for soft-deadline aperiodic tasks while avoiding the schedulability penalty and implementation complexity of previous aperiodic service algorithms. It is also proven that the aperiodic servers created by the SS algorithm can be treated as equivalently-sized periodic tasks when assessing schedulability. This allows all the scheduling theories developed for the rate monotonic algorithm to be used to schedule aperiodic tasks. For scheduling aperiodic and periodic tasks that share data, this thesis defines the interactions and schedulability impact of using the SS algorithm with the priority inheritance protocols. For scheduling hard-deadline tasks with short deadlines, an extension of the rate monotonic algorithm and analysis is developed. To predict performance of the SS algorithm, this thesis develops models and equations that allow the use of standard queueing theory models to predict the average response time of soft-deadline aperiodic tasks serviced with a high-priority sporadic server. Implementation methods are also developed to support the SS algorithm in Ada and on the Futurebus+."
]
} |
1606.06873 | 2471825207 | Whilst affective responses to various forms and genres of multimedia content have been well researched, precious few studies have investigated the combined impact that multimedia system parameters and human factors have on affect. Consequently, in this paper we explore the role that two primordial dimensions of human factors - personality and culture - in conjunction with system factors - frame rate, resolution, and bit rate - have on user affect and enjoyment of multimedia presentations. To this end, a two-site, cross-cultural study was undertaken, the results of which produced three predictive models. Personality and Culture traits were shown statistically to represent 5.6% of the variance in positive affect, 13.6% in negative affect and 9.3% in enjoyment. The correlation between affect and enjoyment was significant. Predictive modeling incorporating human factors showed about 8%, 7% and 9% improvement in predicting positive affect, negative affect and enjoyment respectively when compared to models trained only on system factors. Results and analysis indicate the significant role played by human factors in influencing affect that users experience while watching multimedia. | There are several studies which aim to predict affective responses to multimedia (see @cite_47 @cite_10 @cite_15 for a thorough review). Some focus on distilling the influence of specific cinematographic theories @cite_30 , types of segment and shot @cite_1 , the use of colour @cite_23 and connotative space @cite_36 . Apart from the works mentioned above, there has been research focused on modeling the different audio-visual features to predict emotions @cite_41 @cite_52 @cite_72 @cite_69 @cite_82 @cite_53 . The features used in this work are inspired by those used in the literature, along with certain content-based descriptors which have been shown to perform well in several content understanding tasks @cite_29 @cite_5 . | {
"cite_N": [
"@cite_30",
"@cite_69",
"@cite_15",
"@cite_36",
"@cite_41",
"@cite_53",
"@cite_29",
"@cite_1",
"@cite_52",
"@cite_72",
"@cite_23",
"@cite_5",
"@cite_47",
"@cite_10",
"@cite_82"
],
"mid": [
"2114948194",
"",
"",
"2124801089",
"",
"",
"2048783874",
"2171525877",
"",
"",
"1984038731",
"2134670479",
"2117645142",
"",
""
],
"abstract": [
"The apparent difficulty in assessing emotions elicited by movies and the undeniable high variability in subjects' emotional responses to film content have been recently tackled by exploring film connotative properties: the set of shooting and editing conventions that help in transmitting meaning to the audience. Connotation provides an intermediate representation that exploits the objectivity of audiovisual descriptors to predict the subjective emotional reaction of single users. This is done without the need of registering users' physiological signals. It is not done by employing other people's highly variable emotional rates, but by relying on the intersubjectivity of connotative concepts and on the knowledge of user's reactions to similar stimuli. This paper extends previous work by extracting audiovisual and film grammar descriptors and, driven by users' rates on connotative properties, creates a shared framework where movie scenes are placed, compared, and recommended according to connotation. We evaluate the potential of the proposed system by asking users to assess the ability of connotation in suggesting film content able to target their affective requests.",
"",
"",
"The problem of relating media content to users' affective responses is here addressed. Previous work suggests that a direct mapping of audio-visual properties into emotion categories elicited by films is rather difficult, due to the high variability of individual reactions. To reduce the gap between the objective level of video features and the subjective sphere of emotions, we propose to shift the representation towards the connotative properties of movies, in a space inter-subjectively shared among users. Consequently, the connotative space allows to define, relate, and compare affective descriptions of film videos on equal footing. An extensive test involving a significant number of users watching famous movie scenes suggests that the connotative space can be related to affective categories of a single user. We apply this finding to reach high performance in meeting user's emotional preferences.",
"",
"",
"A picture is worth one thousand words, but what words should be used to describe the sentiment and emotions conveyed in the increasingly popular social multimedia? We demonstrate a novel system which combines sound structures from psychology and the folksonomy extracted from social multimedia to develop a large visual sentiment ontology consisting of 1,200 concepts and associated classifiers called SentiBank. Each concept, defined as an Adjective Noun Pair (ANP), is made of an adjective strongly indicating emotions and a noun corresponding to objects or scenes that have a reasonable prospect of automatic detection. We believe such large-scale visual classifiers offer a powerful mid-level semantic representation enabling high-level sentiment analysis of social multimedia. We demonstrate novel applications made possible by SentiBank including live sentiment prediction of social media and visualization of visual content in a rich intuitive semantic space.",
"In this paper we propose a novel aesthetic model emphasizing psycho-visual statistics extracted from multiple levels in contrast to earlier approaches that rely only on descriptors suited for image recognition or based on photographic principles. At the lowest level, we determine dark-channel, sharpness and eye-sensitivity statistics over rectangular cells within a frame. At the next level, we extract Sentibank features (1,200 pre-trained visual classifiers) on a given frame, that invoke specific sentiments such as \"colorful clouds\", \"smiling face\" etc. and collect the classifier responses as frame-level statistics. At the topmost level, we extract trajectories from video shots. Using viewer's fixation priors, the trajectories are labeled as foreground, and background camera on which statistics are computed. Additionally, spatio-temporal local binary patterns are computed that capture texture variations in a given shot. Classifiers are trained on individual feature representations independently. On thorough evaluation of 9 different types of features, we select the best features from each level -- dark channel, affect and camera motion statistics. Next, corresponding classifier scores are integrated in a sophisticated low-rank fusion framework to improve the final prediction scores. Our approach demonstrates strong correlation with human prediction on 1,000 broadcast quality videos released by NHK as an aesthetic evaluation dataset.",
"",
"",
"Conversational social video is becoming a worldwide trend. Video communication allows a more natural interaction, when aiming to share personal news, ideas, and opinions, by transmitting both verbal content and nonverbal behavior. However, the automatic analysis of natural mood is challenging, since it is displayed in parallel via voice, face, and body. This paper presents an automatic approach to infer 11 natural mood categories in conversational social video using single and multimodal nonverbal cues extracted from video blogs (vlogs) from YouTube. The mood labels used in our work were collected via crowdsourcing. Our approach is promising for several of the studied mood categories. Our study demonstrates that although multimodal features perform better than single channel features, not always all the available channels are needed to accurately discriminate mood in videos.",
"Scene recognition is one of the hallmark tasks of computer vision, allowing definition of a context for object recognition. Whereas the tremendous recent progress in object recognition tasks is due to the availability of large datasets like ImageNet and the rise of Convolutional Neural Networks (CNNs) for learning high-level features, performance at scene recognition has not attained the same level of success. This may be because current deep features trained from ImageNet are not competitive enough for such tasks. Here, we introduce a new scene-centric database called Places with over 7 million labeled pictures of scenes. We propose new methods to compare the density and diversity of image datasets and show that Places is as dense as other scene datasets and has more diversity. Using CNN, we learn deep features for scene recognition tasks, and establish new state-of-the-art results on several scene-centric datasets. A visualization of the CNN layers' responses allows us to show differences in the internal representations of object-centric and scene-centric networks.",
"This survey describes recent progress in the field of Affective Computing (AC), with a focus on affect detection. Although many AC researchers have traditionally attempted to remain agnostic to the different emotion theories proposed by psychologists, the affective technologies being developed are rife with theoretical assumptions that impact their effectiveness. Hence, an informed and integrated examination of emotion theories from multiple areas will need to become part of computing practice if truly effective real-world systems are to be achieved. This survey discusses theoretical perspectives that view emotions as expressions, embodiments, outcomes of cognitive appraisal, social constructs, products of neural circuitry, and psychological interpretations of basic feelings. It provides meta-analyses on existing reviews of affect detection systems that focus on traditional affect detection modalities like physiology, face, and voice, and also reviews emerging research on more novel channels such as text, body language, and complex multimodal systems. This survey explicitly explores the multidisciplinary foundation that underlies all AC applications by describing how AC researchers have incorporated psychological theories of emotion and how these theories affect research questions, methods, results, and their interpretations. In this way, models and methods can be compared, and emerging insights from various disciplines can be more expertly integrated.",
"",
""
]
} |
1606.06873 | 2471825207 | Whilst affective responses to various forms and genres of multimedia content have been well researched, precious few studies have investigated the combined impact that multimedia system parameters and human factors have on affect. Consequently, in this paper we explore the role that two primordial dimensions of human factors - personality and culture - in conjunction with system factors - frame rate, resolution, and bit rate - have on user affect and enjoyment of multimedia presentations. To this end, a two-site, cross-cultural study was undertaken, the results of which produced three predictive models. Personality and Culture traits were shown statistically to represent 5.6% of the variance in positive affect, 13.6% in negative affect and 9.3% in enjoyment. The correlation between affect and enjoyment was significant. Predictive modeling incorporating human factors showed about 8%, 7% and 9% improvement in predicting positive affect, negative affect and enjoyment respectively when compared to models trained only on system factors. Results and analysis indicate the significant role played by human factors in influencing affect that users experience while watching multimedia. | Certain traits are considerably influenced by the cultural background to which an individual belongs. Shared conceptions and collective norms characterize a local environment, and thereby shape the perception and cognition of those who associate with it. Differences in culture have been studied by @cite_20 . Six cultural traits constitute the model -- masculinity, individualism, uncertainty avoidance index, pragmatism, power distance, and indulgence. | {
"cite_N": [
"@cite_20"
],
"mid": [
"2144446652"
],
"abstract": [
"This book reveals: * The unexamined rules behind the thoughts and emotions of people of different cultures * Ways in which cultures differ in the areas of collectivism individualism, assertiveness modesty, tolerance for ambiguity, and deferment of gratification * How organizational cultures differ from national cultures, and how they can be managed"
]
} |
1606.06724 | 2463993963 | We present a framework for efficient perceptual inference that explicitly reasons about the segmentation of its inputs and features. Rather than being trained for any specific segmentation, our framework learns the grouping process in an unsupervised manner or alongside any supervised task. By enriching the representations of a neural network, we enable it to group the representations of different objects in an iterative manner. By allowing the system to amortize the iterative inference of the groupings, we achieve very fast convergence. In contrast to many other recently proposed methods for addressing multi-object scenes, our system does not assume the inputs to be images and can therefore directly handle other modalities. For multi-digit classification of very cluttered images that require texture segmentation, our method offers improved classification performance over convolutional networks despite being fully connected. Furthermore, we observe that our system greatly improves on the semi-supervised result of a baseline Ladder network on our dataset, indicating that segmentation can also improve sample efficiency. | Attention models have recently become very popular, and similar to perceptual grouping they help in dealing with complex structured inputs. These approaches are not, however, mutually exclusive and can benefit from each other. Overt attention models @cite_23 @cite_24 control a window (fovea) to focus on relevant parts of the inputs. Two of their limitations are that they are mostly tailored to the visual domain and are usually only suited to objects that are roughly the same shape as the window. But their ability to limit the field of view can help to reduce the complexity of the target problem and thus also help segmentation. Soft attention mechanisms @cite_2 @cite_16 @cite_29 on the other hand use some form of top-down feedback to suppress inputs that are irrelevant for a given task. 
These mechanisms have recently gained popularity, first in machine translation @cite_0 and then for many other problems such as image caption generation @cite_35 . Because they re-weight all the inputs based on their relevance, they could benefit from a perceptual grouping process that can refine the precise boundaries of attention. | {
"cite_N": [
"@cite_35",
"@cite_29",
"@cite_24",
"@cite_0",
"@cite_23",
"@cite_2",
"@cite_16"
],
"mid": [
"2950178297",
"1886431150",
"2327562811",
"2133564696",
"2059054542",
"185620388",
"1805632810"
],
"abstract": [
"Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.",
"We demonstrate that selective attention can improve learning. Considerably fewer samples are needed to learn a source separation problem when the inputs are pre-segmented by the proposed model. The model combines biased-competition model for attention with a habituation mechanism which allows the focus of attention to switch from one object to another. The criteria for segmenting objects are estimated from data and are shown to generalise to new objects.",
"We present a framework for efficient inference in structured image models that explicitly reason about objects. We achieve this by performing probabilistic inference using a recurrent neural network that attends to scene elements and processes them one at a time. Crucially, the model itself learns to choose the appropriate number of inference steps. We use this scheme to learn to perform inference in partially specified 2D models (variable-sized variational auto-encoders) and fully specified 3D models (probabilistic renderers). We show that such models learn to identify multiple objects - counting, locating and classifying the elements of a scene - without any supervision, e.g., decomposing 3D images with various numbers of objects in a single forward pass of a neural network. We further show that the networks produce accurate inferences when compared to supervised counterparts, and that their structure leads to improved generalization.",
"Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.",
"We propose a method that takes observations of a random vector as input, and learns to segment each observation into two disjoint parts. We show how to use the internal coherence of segments to learn to segment almost any random variable. Coherence is formalized using the principle of autoprediction, i.e. two elements are similar if the observed values are similar to the predictions given by the elements for each other. To obtain a principled model and method, we formulate a generative model and show how it can be estimated in the limit of zero noise. The ensuing method is an abstract, adaptive (learning) generalization of well-known methods for image segmentation. It enables segmentation of random vectors in cases where intuitive prior information necessary for conventional segmentation methods is not available.",
"Let m be the number of time-varying variables for storing temporal events in a fully recurrent sequence processing network. Let Rtime be the ratio between the number of operations per time step (for an exact gradient based supervised sequence learning algorithm), and m. Let Rspace be the ratio between the maximum number of storage cells necessary for learning arbitrary sequences, and m. With conventional recurrent nets, m equals the number of units. With the popular ‘real time recurrent learning algorithm’ (RTRL), Rtime = O(m^3) and Rspace = O(m^2). With ‘back-propagation through time’ (BPTT), Rtime = O(m) (much better than with RTRL) and Rspace is infinite (much worse than with RTRL). The contribution of this paper is a novel fully recurrent network and a corresponding exact gradient based learning algorithm with Rtime = O(m) (as good as with BPTT) and Rspace = O(m^2) (as good as with RTRL)",
"Visual attention poses a mechanism for the selection of behaviourally relevant information from natural scenes which usually contain multiple objects. The aim of the present work is to formulate a neurodynamical model of selective visual attention based on the \"biased competition hypothesis\" and structured in several network modules which can be related with the different areas of the dorsal and ventral path of the visual cortex. Spatial and object attention are accomplished by a multiplicative gain control that emerges dynamically through intercortical mutual biased coupling. We also include in our computational model the \"resolution hypothesis\" in order to explain the role of the neurodynamics control of spatial resolution evidenced in psychophysical experiments. We propose that V1 neurons have different latencies depending on the spatial frequency to which they respond more sensitively. In concrete, we pose that V1 neurons sensitive to low spatial frequency are faster than V1 neurons sensitive to high spatial frequency. In this sense, a scene is first predominantly analysed at a coarse resolution level and the dynamics enhances subsequently the resolution at the location of an object until the object is identified."
]
} |
1606.06724 | 2463993963 | We present a framework for efficient perceptual inference that explicitly reasons about the segmentation of its inputs and features. Rather than being trained for any specific segmentation, our framework learns the grouping process in an unsupervised manner or alongside any supervised task. By enriching the representations of a neural network, we enable it to group the representations of different objects in an iterative manner. By allowing the system to amortize the iterative inference of the groupings, we achieve very fast convergence. In contrast to many other recently proposed methods for addressing multi-object scenes, our system does not assume the inputs to be images and can therefore directly handle other modalities. For multi-digit classification of very cluttered images that require texture segmentation, our method offers improved classification performance over convolutional networks despite being fully connected. Furthermore, we observe that our system greatly improves on the semi-supervised result of a baseline Ladder network on our dataset, indicating that segmentation can also improve sample efficiency. | Our work is primarily built upon a line of research based on the concept that the brain uses synchronization of neuronal firing to bind object representations together. This view was introduced by and has inspired many early works on oscillations in neural networks (see the survey @cite_36 for a summary). Simulating the oscillations explicitly is costly and does not mesh well with modern neural network architectures (but see @cite_3 ). Rather, complex values have been used to model oscillating activations using the phase as soft tags for synchronization @cite_20 @cite_17 . In our model, we further abstract them by using discretized synchronization slots (our groups). It is most similar to the models of , and . However, our work is the first to combine this with denoising autoencoders in an end-to-end trainable fashion. | {
"cite_N": [
"@cite_36",
"@cite_20",
"@cite_3",
"@cite_17"
],
"mid": [
"1526844937",
"2126142792",
"2067788024",
"1526708997"
],
"abstract": [
"A summary of brain theory is given so far as it is contained within the framework of Localization Theory. Difficulties of this \"conventional theory\" are traced back to a specific deficiency: there is no way to express relations between active cells (as for instance their representing parts of the same object). A new theory is proposed to cure this deficiency. It introduces a new kind of dynamical control, termed synaptic modulation, according to which synapses switch between a conducting and a non-conducting state. The dynamics of this variable is controlled on a fast time scale by correlations in the temporal fine structure of cellular signals. Furthermore, conventional synaptic plasticity is replaced by a refined version. Synaptic modulation and plasticity form the basis for short-term and long-term memory, respectively. Signal correlations, shaped by the variable network, express structure and relationships within objects. In particular, the figure-ground problem may be solved in this way. Synaptic modulation introduces flexibility into cerebral networks which is necessary to solve the invariance problem. Since momentarily useless connections are deactivated, interference between different memory traces can be reduced, and memory capacity increased, in comparison with conventional associative memory.",
"In this paper, we present a novel network to separate mixtures of inputs that have been previously learned. A significant capability of the network is that it segments the components of each input object that most contribute to its classification. The network consists of amplitude-phase units that can synchronize their dynamics, so that separation is determined by the amplitude of units in an output layer, and segmentation by phase similarity between input and output layer units. Learning is unsupervised and based on a Hebbian update, and the architecture is very simple. Moreover, efficient segmentation can be achieved even when there is considerable superposition of the inputs. The network dynamics are derived from an objective function that rewards sparse coding in the generalized amplitude-phase variables. We argue that this objective function can provide a possible formal interpretation of the binding problem and that the implementation of the network architecture and dynamics is biologically plausible.",
"In this paper we present a novel approach to model perceptual grouping based on phase and frequency synchronization in a network of coupled Kuramoto oscillators. Transferring the grouping concept from the Competitive Layer Model (CLM) to a network of Kuramoto oscillators, we preserve the excellent grouping capabilities of the CLM, while dramatically improving the convergence rate, robustness to noise, and computational performance, which is verified in a series of artificial grouping experiments and with real-world data.",
"Deep learning has recently led to great successes in tasks such as image recognition (e.g., 2012). However, deep networks are still outmatched by the power and versatility of the brain, perhaps in part due to the richer neuronal computations available to cortical circuits. The challenge is to identify which neuronal mechanisms are relevant, and to find suitable abstractions to model them. Here, we show how aspects of spike timing, long hypothesized to play a crucial role in cortical information processing, could be incorporated into deep networks to build richer, versatile representations. We introduce a neural network formulation based on complex-valued neuronal units that is not only biologically meaningful but also amenable to a variety of deep learning frameworks. Here, units are attributed both a firing rate and a phase, the latter indicating properties of spike timing. We show how this formulation qualitatively captures several aspects thought to be related to neuronal synchrony, including gating of information processing and dynamic binding of distributed object representations. Focusing on the latter, we demonstrate the potential of the approach in several simple experiments. Thus, neuronal synchrony could be a flexible mechanism that fulfills multiple functional roles in deep networks."
]
} |
1606.06724 | 2463993963 | We present a framework for efficient perceptual inference that explicitly reasons about the segmentation of its inputs and features. Rather than being trained for any specific segmentation, our framework learns the grouping process in an unsupervised manner or alongside any supervised task. By enriching the representations of a neural network, we enable it to group the representations of different objects in an iterative manner. By allowing the system to amortize the iterative inference of the groupings, we achieve very fast convergence. In contrast to many other recently proposed methods for addressing multi-object scenes, our system does not assume the inputs to be images and can therefore directly handle other modalities. For multi-digit classification of very cluttered images that require texture segmentation, our method offers improved classification performance over convolutional networks despite being fully connected. Furthermore, we observe that our system greatly improves on the semi-supervised result of a baseline Ladder network on our dataset, indicating that segmentation can also improve sample efficiency. | Another closely related line of research @cite_12 @cite_22 has focused on multi-causal modeling of the inputs. Many of the works in that area @cite_5 @cite_4 @cite_14 @cite_30 build upon Restricted Boltzmann Machines. Each input is modeled as a mixture model with a separate latent variable for each object. Because exact inference is intractable, these models approximate the posterior with some form of expectation maximization @cite_32 or sampling procedure. Our assumptions are very similar to these approaches, but we allow the model to learn the amortized inference directly (more in line with ). | {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_4",
"@cite_22",
"@cite_32",
"@cite_5",
"@cite_12"
],
"mid": [
"2180332918",
"2141779515",
"2054814877",
"2138137770",
"2049633694",
"2171145682",
"2042912927"
],
"abstract": [
"We present a generative model of images based on layering, in which image layers are individually generated, then composited from front to back. We are thus able to factor the appearance of an image into the appearance of individual objects within the image --- and additionally for each individual object, we can factor content from pose. Unlike prior work on layered models, we learn a shape prior for each object layer, allowing the model to tease out which object is in front by looking for a consistent shape, without needing access to motion cues or any labeled data. We show that ordinary stochastic gradient variational bayes (SGVB), which optimizes our fully differentiable lower-bound on the log-likelihood, is sufficient to learn an interpretable representation of images. Finally we present experiments demonstrating the effectiveness of the model for inferring foreground and background objects in images.",
"Unsupervised feature learning has emerged as a promising tool in learning representations from unlabeled data. However, it is still challenging to learn useful high-level features when the data contains a significant amount of irrelevant patterns. Although feature selection can be used for such complex data, it may fail when we have to build a learning system from scratch (i.e., starting from the lack of useful raw features). To address this problem, we propose a point-wise gated Boltzmann machine, a unified generative model that combines feature learning and feature selection. Our model performs not only feature selection on learned high-level features (i.e., hidden units), but also dynamic feature selection on raw features (i.e., visible units) through a gating mechanism. For each example, the model can adaptively focus on a variable subset of visible nodes corresponding to the task-relevant patterns, while ignoring the visible units corresponding to the task-irrelevant patterns. In experiments, our method achieves improved performance over state-of-the-art in several visual recognition benchmarks.",
"While Boltzmann Machines have been successful at unsupervised learning and density modeling of images and speech data, they can be very sensitive to noise in the data. In this paper, we introduce a novel model, the Robust Boltzmann Machine (RoBM), which allows Boltzmann Machines to be robust to corruptions. In the domain of visual recognition, the RoBM is able to accurately deal with occlusions and noise by using multiplicative gating to induce a scale mixture of Gaussians over pixels. Image denoising and in-painting correspond to posterior inference in the RoBM. Our model is trained in an unsupervised fashion with unlabeled noisy data and can learn the spatial structure of the occluders. Compared to standard algorithms, the RoBM is significantly better at recognition and denoising on several face databases.",
"Many perceptual models and theories hinge on treating objects as a collection of constituent parts. When applying these approaches to data, a fundamental problem arises: how can we determine what are the parts? We attack this problem using learning, proposing a form of generative latent factor model, in which each data dimension is allowed to select a different factor or part as its explanation. This approach permits a range of variations that posit different models for the appearance of a part. Here we provide the details for two such models: a discrete and a continuous one. Further, we show that this latent factor model can be extended hierarchically to account for correlations between the appearances of different parts. This permits modelling of data consisting of multiple categories, and learning these categories simultaneously with the parts when they are unobserved. Experiments demonstrate the ability to learn parts-based representations, and categories, of facial images and user-preference data.",
"",
"Computer vision has grown tremendously in the past two decades. Despite all efforts, existing attempts at matching parts of the human visual system's extraordinary ability to understand visual scenes lack either scope or power. By combining the advantages of general low-level generative models and powerful layer-based and hierarchical models, this work aims at being a first step toward richer, more flexible models of images. After comparing various types of restricted Boltzmann machines (RBMs) able to model continuous-valued data, we introduce our basic model, the masked RBM, which explicitly models occlusion boundaries in image patches by factoring the appearance of any patch region from its shape. We then propose a generative model of larger images using a field of such RBMs. Finally, we discuss how masked RBMs could be stacked to form a deep model able to generate more complicated structures and suitable for various tasks such as segmentation or object recognition.",
"This paper presents a formulation for unsupervised learning of clusters reflecting multiple causal structure in binary data. Unlike the \"hard\" k-means clustering algorithm and the \"soft\" mixture model, each of which assumes that a single hidden event generates each data point, a multiple cause model accounts for observed data by combining assertions from many hidden causes, each of which can pertain to varying degree to any subset of the observable dimensions. We employ an objective function and iterative gradient descent learning algorithm resembling the conventional mixture model. A crucial issue is the mixing function for combining beliefs from different cluster centers in order to generate data predictions whose errors are minimized both during recognition and learning. The mixing function constitutes a prior assumption about underlying structural regularities of the data domain; we demonstrate a weakness inherent to the popular weighted sum followed by sigmoid squashing, and offer alternative forms of the nonlinearity for two types of data domain. Results are presented demonstrating the algorithm's ability successfully to discover coherent multiple causal representations in several experimental data sets."
]
} |
1606.06724 | 2463993963 | We present a framework for efficient perceptual inference that explicitly reasons about the segmentation of its inputs and features. Rather than being trained for any specific segmentation, our framework learns the grouping process in an unsupervised manner or alongside any supervised task. By enriching the representations of a neural network, we enable it to group the representations of different objects in an iterative manner. By allowing the system to amortize the iterative inference of the groupings, we achieve very fast convergence. In contrast to many other recently proposed methods for addressing multi-object scenes, our system does not assume the inputs to be images and can therefore directly handle other modalities. For multi-digit classification of very cluttered images that require texture segmentation, our method offers improved classification performance over convolutional networks despite being fully connected. Furthermore, we observe that our system greatly improves on the semi-supervised result of a baseline Ladder network on our dataset, indicating that segmentation can also improve sample efficiency. | Since recurrent neural networks (RNNs) are general-purpose computers, they can in principle implement arbitrary computable types of temporary variable binding @cite_6 @cite_27 , unsupervised segmentation @cite_15 , and internal @cite_2 and external attention @cite_23 . For example, an RNN with fast weights @cite_2 can rapidly associate or bind the patterns to which the RNN currently attends. Similar approaches even allow for metalearning @cite_8 , that is, learning a learning algorithm. One such approach, for example, learned fast online learning algorithms for the class of all quadratic functions of two variables. Unsupervised segmentation could therefore in principle be learned by any RNN as a by-product of data compression or any other given task. | {
"cite_N": [
"@cite_8",
"@cite_6",
"@cite_27",
"@cite_23",
"@cite_2",
"@cite_15"
],
"mid": [
"1891300059",
"2089217417",
"",
"2059054542",
"185620388",
"2036317923"
],
"abstract": [
"Weight modifications in traditional neural nets are computed by hard-wired algorithms. Without exception, all previous weight change algorithms have many specific limitations. Is it (in principle) possible to overcome limitations of hard-wired algorithms by allowing neural nets to run and improve their own weight change algorithms? This paper constructively demonstrates that the answer (in principle) is ‘yes’. I derive an initial gradient-based sequence learning algorithm for a ‘self-referential’ recurrent network that can ‘speak’ about its own weight matrix in terms of activations. It uses some of its input and output units for observing its own errors and for explicitly analyzing and modifying its own weight matrix, including those parts of the weight matrix responsible for analyzing and modifying the weight matrix. The result is the first ‘introspective’ neural net with explicit potential control over all of its own adaptive parameters. A disadvantage of the algorithm is its high computational complexity per time step which is independent of the sequence length and equals O(n_conn log n_conn), where n_conn is the number of connections. Another disadvantage is the high number of local minima of the unusually complex error surface. The purpose of this paper, however, is not to come up with the most efficient ‘introspective’ or ‘self-referential’ weight change algorithm, but to show that such algorithms are possible at all.",
"Previous algorithms for supervised sequence learning are based on dynamic recurrent networks. This paper describes an alternative class of gradient-based systems consisting of two feedforward nets that learn to deal with temporal sequences using fast weights: The first net learns to produce context-dependent weight changes for the second net whose weights may vary very quickly. The method offers the potential for STM storage efficiency: A single weight (instead of a full-fledged unit) may be sufficient for storing temporal information. Various learning methods are derived. Two experiments with unknown time delays illustrate the approach. One experiment shows how the system can be used for adaptive temporary variable binding.",
"",
"We propose a method that takes observations of a random vector as input, and learns to segment each observation into two disjoint parts. We show how to use the internal coherence of segments to learn to segment almost any random variable. Coherence is formalized using the principle of autoprediction, i.e. two elements are similar if the observed values are similar to the predictions given by the elements for each other. To obtain a principled model and method, we formulate a generative model and show how it can be estimated in the limit of zero noise. The ensuing method is an abstract, adaptive (learning) generalization of well-known methods for image segmentation. It enables segmentation of random vectors in cases where intuitive prior information necessary for conventional segmentation methods is not available.",
"Let m be the number of time-varying variables for storing temporal events in a fully recurrent sequence processing network. Let Rtime be the ratio between the number of operations per time step (for an exact gradient based supervised sequence learning algorithm), and m. Let Rspace be the ratio between the maximum number of storage cells necessary for learning arbitrary sequences, and m. With conventional recurrent nets, m equals the number of units. With the popular ‘real time recurrent learning algorithm’ (RTRL), Rtime = O(m^3) and Rspace = O(m^2). With ‘back-propagation through time’ (BPTT), Rtime = O(m) (much better than with RTRL) and Rspace is infinite (much worse than with RTRL). The contribution of this paper is a novel fully recurrent network and a corresponding exact gradient based learning algorithm with Rtime = O(m) (as good as with BPTT) and Rspace = O(m^2) (as good as with RTRL)",
"Previous neural network learning algorithms for sequence processing are computationally expensive and perform poorly when it comes to long time lags. This paper first introduces a simple principle for reducing the descriptions of event sequences without loss of information. A consequence of this principle is that only unexpected inputs can be relevant. This insight leads to the construction of neural architectures that learn to “divide and conquer” by recursively decomposing sequences. I describe two architectures. The first functions as a self-organizing multilevel hierarchy of recurrent networks. The second, involving only two recurrent networks, tries to collapse a multilevel predictor hierarchy into a single recurrent net. Experiments show that the system can require less computation per time step and many fewer training sequences than conventional training algorithms for recurrent nets."
]
} |
1606.06724 | 2463993963 | We present a framework for efficient perceptual inference that explicitly reasons about the segmentation of its inputs and features. Rather than being trained for any specific segmentation, our framework learns the grouping process in an unsupervised manner or alongside any supervised task. By enriching the representations of a neural network, we enable it to group the representations of different objects in an iterative manner. By allowing the system to amortize the iterative inference of the groupings, we achieve very fast convergence. In contrast to many other recently proposed methods for addressing multi-object scenes, our system does not assume the inputs to be images and can therefore directly handle other modalities. For multi-digit classification of very cluttered images that require texture segmentation, our method offers improved classification performance over convolutional networks despite being fully connected. Furthermore, we observe that our system greatly improves on the semi-supervised result of a baseline Ladder network on our dataset, indicating that segmentation can also improve sample efficiency. | The recurrent architecture most similar to the Tagger is the Neural Abstraction Pyramid (NAP; ) -- a convolutional neural network augmented with lateral connections which help resolve local ambiguities and feedback connections that allow incorporation of high-level information. In early pioneering work, the NAP was trained for iterative image binarization @cite_9 and iterative image denoising @cite_13 , much akin to the setup we use. Being recurrent, the NAP layers, too, could in principle learn a perceptual grouping as a byproduct. That does not, however, imply that every RNN will, through learning, easily discover and implement this tool. The main improvement that our framework adds is an explicit mechanism for the network to split the input into multiple representations and thus quickly and efficiently learn a grouping mechanism. 
We believe this special case of computation to be important enough for many real-world tasks to justify this added complexity. | {
"cite_N": [
"@cite_9",
"@cite_13"
],
"mid": [
"2115098305",
"2163715362"
],
"abstract": [
"In this paper the binarization of matrix codes is investigated as an application of supervised learning of image processing tasks using a recurrent version of the Neural Abstraction Pyramid. The desired network output is computed using an adaptive thresholding method for undegraded images. The network is trained to iteratively produce the same output even when the contrast is lowered and typical noise is added to the input. The network discovers the structure of the codes and uses it for binarization. This makes the recognition of degraded matrix codes possible for which adaptive thresholding fails.",
"Successful image reconstruction requires the recognition of a scene and the generation of a clean image of that scene. We propose to use recurrent neural networks for both analysis and synthesis. The networks have a hierarchical architecture that represents images in multiple scales with different degrees of abstraction. The mapping between these representations is mediated by a local connection structure. We supply the networks with degraded images and train them to reconstruct the originals iteratively. This iterative reconstruction makes it possible to use partial results as context information to resolve ambiguities. We demonstrate the power of the approach using three examples: superresolution, fill-in of occluded parts, and noise removal contrast enhancement. We also reconstruct images from sequences of degraded images."
]
} |
1606.06816 | 2468698467 | Mobile search has recently been shown to be the major contributor to the growing search market. The key difference between mobile search and desktop search is that information presentation is limited to the screen space of the mobile device. Thus, major search engines have adopted a new type of search result presentation, known as , in which each card presents summarized results from one domain vertical, for a given query, to augment the standard blue-links search results. While it has been widely acknowledged that information cards are particularly suited to mobile user experience, it is also challenging to optimize such result sets. Typically, user engagement metrics like query reformulation are based on whole ranked list of cards for each query and most traditional learning to rank algorithms require per-item relevance labels. In this paper, we investigate the possibility of interpreting query reformulation into effective relevance labels for query-card pairs. We inherit the concept of conventional learning-to-rank, and propose pointwise, pairwise and listwise interpretations for query reformulation. In addition, we propose a learning-to-label strategy that learns the contribution of each card, with respect to a query, where such contributions can be used as labels for training card ranking models. We utilize a state-of-the-art ranking model and demonstrate the effectiveness of proposed mechanisms on a large-scale mobile data from a major search engine, showing that models trained from labels derived from user engagement can significantly outperform ones trained from human judgment labels. | Information Cards : To our knowledge, there is no prior work on the problem of card ranking in mobile search. One of the latest works, by Shokouhi and Guo @cite_16 , was among the first to present the problem of serving cards to mobile users. 
However, their work and follow-up papers like @cite_4 addressed the problem under the proactive search setting, i.e., generating card recommendations without queries from the user. Our work, on the other hand, addresses the typical search problem in which the cards are ranked in response to the user's explicit queries. | {
"cite_N": [
"@cite_16",
"@cite_4"
],
"mid": [
"2055158566",
"2428526439"
],
"abstract": [
"The growing accessibility of mobile devices has substantially reformed the way users access information. While the reactive search by query remains as common as before, recent years have witnessed the emergence of various proactive systems such as Google Now and Microsoft Cortana. In these systems, relevant content is presented to users based on their context without a query. Interestingly, despite the increasing popularity of such services, there is very little known about how users interact with them. In this paper, we present the first study on user interactions with information cards. We demonstrate that the usage patterns of these cards vary depending on time and location. We also show that while overall different topics are clicked by users on proactive and reactive platforms, the topics of the clicked documents by the same user tend to be consistent cross-platform. Furthermore, we propose a supervised framework for re-ranking proactive cards based on the user's context and past history. To train our models, we use the viewport duration and clicks to infer pseudo-relevance labels for the cards. Our results suggest that the quality of card ranking can be significantly improved particularly when the user's reactive search history is leveraged and matched against the proactive data about the cards.",
"Proactive search systems like Google Now and Microsoft Cortana have gained increasing popularity with the growth of mobile Internet. Unlike traditional reactive search systems where search engines return results in response to queries issued by the users, proactive systems actively push information cards to the users on mobile devices based on the context around time, location, environment (e.g., weather), and user interests. A proactive system is a zero-query information retrieval system, which makes user modeling critical for understanding user information needs. In this paper, we study user modeling in proactive search systems and propose a learning to rank method for proactive ranking. We explore a variety of ways of modeling user interests, ranging from direct modeling of historical interaction with content types to finer-grained entity-level modeling, and user demographical information. To reduce the feature sparsity problem in entity modeling, we propose semantic similarity features using word embedding and an entity taxonomy in knowledge base. Experiments performed with data from a large commercial proactive search system show that our method significantly outperforms a strong baseline method deployed in the production system."
]
} |
1606.06816 | 2468698467 | Mobile search has recently been shown to be the major contributor to the growing search market. The key difference between mobile search and desktop search is that information presentation is limited to the screen space of the mobile device. Thus, major search engines have adopted a new type of search result presentation, known as , in which each card presents summarized results from one domain vertical, for a given query, to augment the standard blue-links search results. While it has been widely acknowledged that information cards are particularly suited to mobile user experience, it is also challenging to optimize such result sets. Typically, user engagement metrics like query reformulation are based on whole ranked list of cards for each query and most traditional learning to rank algorithms require per-item relevance labels. In this paper, we investigate the possibility of interpreting query reformulation into effective relevance labels for query-card pairs. We inherit the concept of conventional learning-to-rank, and propose pointwise, pairwise and listwise interpretations for query reformulation. In addition, we propose a learning-to-label strategy that learns the contribution of each card, with respect to a query, where such contributions can be used as labels for training card ranking models. We utilize a state-of-the-art ranking model and demonstrate the effectiveness of proposed mechanisms on a large-scale mobile data from a major search engine, showing that models trained from labels derived from user engagement can significantly outperform ones trained from human judgment labels. | Learning to Rank : This work is related to the field of learning-to-rank ( LtR ) but with a significant difference. The main focus of LtR literature is to investigate ranking models for settings where relevance labels are available. 
Even in the so-called listwise case (e.g., @cite_33 @cite_9 ), where ranking models are explicitly trained against a list of results for a particular query, the availability of relevance labels for each query-result pair is a prerequisite. Indeed, as mentioned above, the classic setting of LtR @cite_13 requires human relevance judgments, and these labels are provided on the query-result level, which is different from our setting. For a more thorough discussion of LtR , please refer to @cite_19 . | {
"cite_N": [
"@cite_19",
"@cite_9",
"@cite_13",
"@cite_33"
],
"mid": [
"2149427297",
"2091158010",
"2784672094",
"2108862644"
],
"abstract": [
"Learning to rank for Information Retrieval (IR) is a task to automatically construct a ranking model using training data, such that the model can sort new objects according to their degrees of relevance, preference, or importance. Many IR problems are by nature ranking problems, and many IR technologies can be potentially enhanced by using learning-to-rank techniques. The objective of this tutorial is to give an introduction to this research direction. Specifically, the existing learning-to-rank algorithms are reviewed and categorized into three approaches: the pointwise, pairwise, and listwise approaches. The advantages and disadvantages with each approach are analyzed, and the relationships between the loss functions used in these approaches and IR evaluation measures are discussed. Then the empirical evaluations on typical learning-to-rank methods are shown, with the LETOR collection as a benchmark dataset, which seems to suggest that the listwise approach be the most effective one among all the approaches. After that, a statistical ranking theory is introduced, which can describe different learning-to-rank algorithms, and be used to analyze their query-level generalization abilities. At the end of the tutorial, we provide a summary and discuss potential future work on learning to rank.",
"This paper aims to conduct a study on the listwise approach to learning to rank. The listwise approach learns a ranking function by taking individual lists as instances and minimizing a loss function defined on the predicted list and the ground-truth list. Existing work on the approach mainly focused on the development of new algorithms; methods such as RankCosine and ListNet have been proposed and good performances by them have been observed. Unfortunately, the underlying theory was not sufficiently studied so far. To amend the problem, this paper proposes conducting theoretical analysis of learning to rank algorithms through investigations on the properties of the loss functions, including consistency, soundness, continuity, differentiability, convexity, and efficiency. A sufficient condition on consistency for ranking is given, which seems to be the first such result obtained in related research. The paper then conducts analysis on three loss functions: likelihood loss, cosine loss, and cross entropy loss. The latter two were used in RankCosine and ListNet. The use of the likelihood loss leads to the development of a new listwise method called ListMLE, whose loss function offers better properties, and also leads to better experimental results.",
"Learning to rank for information retrieval has gained a lot of interest in the recent years but there is a lack for large real-world datasets to benchmark algorithms. That led us to publicly release two datasets used internally at Yahoo! for learning the web search ranking function. To promote these datasets and foster the development of state-of-the-art learning to rank algorithms, we organized the Yahoo! Learning to Rank Challenge in spring 2010. This paper provides an overview and an analysis of this challenge, along with a detailed description of the released datasets.",
"The paper is concerned with learning to rank, which is to construct a model or a function for ranking objects. Learning to rank is useful for document retrieval, collaborative filtering, and many other applications. Several methods for learning to rank have been proposed, which take object pairs as 'instances' in learning. We refer to them as the pairwise approach in this paper. Although the pairwise approach offers advantages, it ignores the fact that ranking is a prediction task on list of objects. The paper postulates that learning to rank should adopt the listwise approach in which lists of objects are used as 'instances' in learning. The paper proposes a new probabilistic method for the approach. Specifically it introduces two probability models, respectively referred to as permutation probability and top k probability, to define a listwise loss function for learning. Neural Network and Gradient Descent are then employed as model and algorithm in the learning method. Experimental results on information retrieval show that the proposed listwise approach performs better than the pairwise approach."
]
} |
1606.06816 | 2468698467 | Mobile search has recently been shown to be the major contributor to the growing search market. The key difference between mobile search and desktop search is that information presentation is limited to the screen space of the mobile device. Thus, major search engines have adopted a new type of search result presentation, known as , in which each card presents summarized results from one domain vertical, for a given query, to augment the standard blue-links search results. While it has been widely acknowledged that information cards are particularly suited to mobile user experience, it is also challenging to optimize such result sets. Typically, user engagement metrics like query reformulation are based on whole ranked list of cards for each query and most traditional learning to rank algorithms require per-item relevance labels. In this paper, we investigate the possibility of interpreting query reformulation into effective relevance labels for query-card pairs. We inherit the concept of conventional learning-to-rank, and propose pointwise, pairwise and listwise interpretations for query reformulation. In addition, we propose a learning-to-label strategy that learns the contribution of each card, with respect to a query, where such contributions can be used as labels for training card ranking models. We utilize a state-of-the-art ranking model and demonstrate the effectiveness of proposed mechanisms on a large-scale mobile data from a major search engine, showing that models trained from labels derived from user engagement can significantly outperform ones trained from human judgment labels. | Note that we are aware of research efforts to define more fine-grained, or to some extent, better user engagement metrics, other than query reformulations for search results, such as dwell time @cite_32 or more complex task-level satisfaction metrics @cite_30 . 
We use query reformulations in this paper as a reasonable starting point and leave extensions on deriving labels from other more advanced metrics for future work. | {
"cite_N": [
"@cite_30",
"@cite_32"
],
"mid": [
"2189217339",
"2106576032"
],
"abstract": [
"Voice-activated intelligent assistants, such as Siri, Google Now, and Cortana, are prevalent on mobile devices. However, it is challenging to evaluate them due to the varied and evolving number of tasks supported, e.g., voice command, web search, and chat. Since each task may have its own procedure and a unique form of correct answers, it is expensive to evaluate each task individually. This paper is the first attempt to solve this challenge. We develop consistent and automatic approaches that can evaluate different tasks in voice-activated intelligent assistants. We use implicit feedback from users to predict whether users are satisfied with the intelligent assistant as well as its components, i.e., speech recognition and intent classification. Using this approach, we can potentially evaluate and compare different tasks within and across intelligent assistants ac-cording to the predicted user satisfaction rates. Our approach is characterized by an automatic scheme of categorizing user-system interaction into task-independent dialog actions, e.g., the user is commanding, selecting, or confirming an action. We use the action sequence in a session to predict user satisfaction and the quality of speech recognition and intent classification. We also incorporate other features to further improve our approach, including features derived from previous work on web search satisfaction prediction, and those utilizing acoustic characteristics of voice requests. We evaluate our approach using data collected from a user study. Results show our approach can accurately identify satisfactory and unsatisfactory sessions.",
"Clicks on search results are the most widely used behavioral signals for predicting search satisfaction. Even though clicks are correlated with satisfaction, they can also be noisy. Previous work has shown that clicks are affected by position bias, caption bias, and other factors. A popular heuristic for reducing this noise is to only consider clicks with long dwell time, usually equaling or exceeding 30 seconds. The rationale is that the more time a searcher spends on a page, the more likely they are to be satisfied with its contents. However, having a single threshold value assumes that users need a fixed amount of time to be satisfied with any result click, irrespective of the page chosen. In reality, clicked pages can differ significantly. Pages have different topics, readability levels, content lengths, etc. All of these factors may affect the amount of time spent by the user on the page. In this paper, we study the effect of different page characteristics on the time needed to achieve search satisfaction. We show that the topic of the page, its length and its readability level are critical in determining the amount of dwell time needed to predict whether any click is associated with satisfaction. We propose a method to model and provide a better understanding of click dwell time. We estimate click dwell time distributions for SAT (satisfied) or DSAT (dissatisfied) clicks for different click segments and use them to derive features to train a click-level satisfaction model. We compare the proposed model to baseline methods that use dwell time and other search performance predictors as features, and demonstrate that the proposed model achieves significant improvements."
]
} |
1606.07025 | 2471554257 | Attack graphs provide compact representations of the attack paths that an attacker can follow to compromise network resources by analysing network vulnerabilities and topology. These representations are a powerful tool for security risk assessment. Bayesian inference on attack graphs enables the estimation of the risk of compromise to the system's components given their vulnerabilities and interconnections, and accounts for multi-step attacks spreading through the system. Whilst static analysis considers the risk posture at rest, dynamic analysis also accounts for evidence of compromise, e.g. from SIEM software or forensic investigation. However, in this context, exact Bayesian inference techniques do not scale well. In this paper we show how Loopy Belief Propagation - an approximate inference technique - can be applied to attack graphs, and that it scales linearly in the number of nodes for both static and dynamic analysis, making such analyses viable for larger networks. We experiment with different topologies and network clustering on synthetic Bayesian attack graphs with thousands of nodes to show that the algorithm's accuracy is acceptable and that it converges to a stable solution. We compare sequential and parallel versions of Loopy Belief Propagation with exact inference techniques for both static and dynamic analysis, showing the advantages of approximate inference techniques to scale to larger attack graphs. | AG representations are built by analysing the interdependencies between the vulnerabilities and the security conditions identified in a network. Two representations are commonly encountered in the literature: state-based representations @cite_24 @cite_39 @cite_10 @cite_44 @cite_41 depict the whole state of the network in each node in the graph whilst logical AGs @cite_43 @cite_2 @cite_1 are bipartite graphs representing the dependencies between vulnerabilities and security conditions.
Although state-based AGs contain all the possible attack paths that can allow an attacker to reach a target security condition, they scale exponentially with the number of vulnerabilities and nodes in the network, which limits their application to very small networks. Relying on the monotonicity principle, logical AGs eliminate duplicate paths and provide a more compact representation that scales polynomially with the number of vulnerabilities @cite_43 @cite_2 . | {
"cite_N": [
"@cite_41",
"@cite_1",
"@cite_39",
"@cite_24",
"@cite_44",
"@cite_43",
"@cite_2",
"@cite_10"
],
"mid": [
"2117694832",
"2110908300",
"2083658929",
"2157554212",
"2150127671",
"2054127354",
"1971114459",
"2121805588"
],
"abstract": [
"This paper presents a tool for assessment of security attributes and vulnerabilities in computer networks. The tool generates attack graphs (Phillips and Swiler, 1998). Each node in the attack graph represents a possible attack state. Edges represent a change of state caused by a single action taken by the attacker or unwitting assistant, and are weighted by some metric (such as attacker effort or time to succeed). Generation of the attack graph requires algorithms that match information about attack requirements (specified in attack templates) to information about the network configuration and assumed attacker capabilities (attacker profile). The set of near-optimal shortest paths indicates the most exploitable components of the system configuration. This paper presents the status of the tool and discusses implementation issues, especially focusing on the data input needs and methods for eliminating redundant paths and nodes in the graph.",
"Attack graphs are important tools for analyzing security vulnerabilities in enterprise networks. Previous work on attack graphs has not provided an account of the scalability of the graph generating process, and there is often a lack of logical formalism in the representation of attack graphs, which results in the attack graph being difficult to use and understand by human beings. Pioneer work by Sheyner, et al is the first attack-graph tool based on formal logical techniques, namely model-checking. However, when applied to moderate-sized networks, Sheyner's tool encountered a significant exponential explosion problem. This paper describes a new approach to represent and generate attack graphs. We propose logical attack graphs, which directly illustrate logical dependencies among attack goals and configuration information. A logical attack graph always has size polynomial to the network being analyzed. Our attack graph generation tool builds upon MulVAL, a network security analyzer based on logical programming. We demonstrate how to produce a derivation trace in the MulVAL logic-programming engine, and how to use the trace to generate a logical attack graph in quadratic time. We show experimental evidence that our logical attack graph generation algorithm is very efficient. We have generated logical attack graphs for fully connected networks of 1000 machines using a Pentium 4 CPU with 1GB of RAM.",
"This paper presents a graph-based approach to network vulnerability analysis. The method is flexible, allowing analysis of attacks from both outside and inside the network. It can analyze risks to a specific network asset, or examine the universe of possible consequences following a successful attack. The graph-based tool can identify the set of attack paths that have a high probability of success (or a low effort cost) for the attacker. The system could be used to test the effectiveness of making configuration changes, implementing an intrusion detection system, etc. The analysis system requires as input a database of common attacks, broken into atomic steps, specific network configuration and topology information, and an attacker profile. The attack information is matched with the network configuration information and an attacker profile to create a superset attack graph. Nodes identify a stage of attack, for example the class of machines the attacker has accessed and the user privilege level he or she has compromised. The arcs in the attack graph represent attacks or stages of attacks. By assigning probabilities of success on the arcs or costs representing level-of-effort for the attacker, various graph algorithms such as shortest-path algorithms can identify the attack paths with the highest probability of success.",
"An attack graph is a succinct representation of all paths through a system that end in a state where an intruder has successfully achieved his goal. Today Red Teams determine the vulnerability of networked systems by drawing gigantic attack graphs by hand. Constructing attack graphs by hand is tedious, error-prone, and impractical for large systems. By viewing an attack as a violation of a safety property, we can use off-the-shelf model checking technology to produce attack graphs automatically: a successful path from the intruder's viewpoint is a counterexample produced by the model checker In this paper we present an algorithm for generating attack graphs using model checking as a subroutine. Security analysts use attack graphs for detection, defense and forensics. In this paper we present a minimization analysis technique that allows analysts to decide which minimal set of security measures would guarantee the safety of the system. We provide a formal characterization of this problem: we prove that it is polynomially equivalent to the minimum hitting set problem and we present a greedy algorithm with provable bounds. We also present a reliability analysis technique that allows analysts to perform a simple cost-benefit trade-off depending on the likelihoods of attacks. By interpreting attack graphs as Markov Decision Processes we can use the value iteration algorithm to compute the probabilities of intruder success for each attack the graph.",
"Attack graphs depict ways in which an adversary exploits system vulnerabilities to achieve a desired state. System administrators use attack graphs to determine how vulnerable their systems are and to determine what security measures to deploy to defend their systems. In this paper, we present details of an example to illustrate how we specify and analyze network attack models. We take these models as input to our attack graph tools to generate attack graphs automatically and to analyze system vulnerabilities. While we have published our generation and analysis algorithms in earlier work, the presentation of our example and toolkit is novel to this paper.",
"Even well administered networks are vulnerable to attack. Recent work in network security has focused on the fact that combinations of exploits are the typical means by which an attacker breaks into a network. Researchers have proposed a variety of graph-based algorithms to generate attack trees (or graphs). Either structure represents all possible sequences of exploits, where any given exploit can take advantage of the penetration achieved by prior exploits in its chain, and the final exploit in the chain achieves the attacker's goal. The most recent approach in this line of work uses a modified version of the model checker NuSMV as a powerful inference engine for chaining together network exploits, compactly representing attack graphs, and identifying minimal sets of exploits. However, it is also well known that model checkers suffer from scalability problems, and there is good reason to doubt whether a model checker can handle directly a realistic set of exploits for even a modest-sized network. In this paper, we revisit the idea of attack graphs themselves, and argue that they represent more information explicitly than is necessary for the analyst. Instead, we propose a more compact and scalable representation. Although we show that it is possible to produce attack trees from our representation, we argue that more useful information can be produced, for larger networks, while bypassing the attack tree step. Our approach relies on an explicit assumption of monotonicity, which, in essence, states that the precondition of a given exploit is never invalidated by the successful application of another exploit. In other words, the attacker never needs to backtrack. The assumption reduces the complexity of the analysis problem from exponential to polynomial, thereby bringing even very large networks within reach of analysis",
"This talk will discuss issues and methods for survivability of systems under malicious attacks. To protect from such attacks, it is necessary to take steps to prevent attacks from succeeding. At the same time, it is important to recognize that not all attacks can be averted at the outset; attacks that are successful to some degree must be recognized as unavoidable and comprehensive support for identifying and responding to attacks is required.In my talk, I will describe the recent research on attack graphs that represent known attack sequences attackers can use to penetrate computer networks. I will show how attack graphs can be used to compute actual sets of hardening measures that guarantee the safety of given critical resources. Attack graphs can also be used to correlate received alerts, hypothesize missing alerts, and predict future alerts, all at the same time. Thus, they offer a promising solution for administrators to monitor and predict the progress of an intrusion, and take appropriate countermeasures in a timely manner.",
"An integral part of modeling the global view of network security is constructing attack graphs. Manual attack graph construction is tedious, error-prone, and impractical for attack graphs larger than a hundred nodes. In this paper we present an automated technique for generating and analyzing attack graphs. We base our technique on symbolic model checking algorithms, letting us construct attack graphs automatically and efficiently. We also describe two analyses to help decide which attacks would be most cost-effective to guard against. We implemented our technique in a tool suite and tested it on a small network example, which includes models of a firewall and an intrusion detection system."
]
} |
1606.07043 | 2463552202 | Many predictive tasks, such as diagnosing a patient based on their medical chart, are ultimately defined by the decisions of human experts. Unfortunately, encoding experts' knowledge is often time consuming and expensive. We propose a simple way to use fuzzy and informal knowledge from experts to guide discovery of interpretable latent topics in text. The underlying intuition of our approach is that latent factors should be informative about both correlations in the data and a set of relevance variables specified by an expert. Mathematically, this approach is a combination of the information bottleneck and Total Correlation Explanation (CorEx). We give a preliminary evaluation of Anchored CorEx, showing that it produces more coherent and interpretable topics on two distinct corpora. | Anchors were first proposed in the context of topic modeling: words that are high-precision indicators of underlying topics. In contrast to our approach, anchors are typically selected automatically, constrained to appear in only one topic, and used primarily to aid optimization @cite_9 . In our information theoretic framework, anchors are specified manually and more loosely defined as words having high mutual information with one or more latent factors. The effects of anchors on the interpretability of traditional topic models are often mixed @cite_4 , but our experiments suggest that our approach yields more coherent topics.
"cite_N": [
"@cite_9",
"@cite_4"
],
"mid": [
"2250317372",
"2954372847"
],
"abstract": [
"Spectral methods offer scalable alternatives to Markov chain Monte Carlo and expectation maximization. However, these new methods lack the rich priors associated with probabilistic models. We examine ’s anchor words algorithm for topic modeling and develop new, regularized algorithms that not only mathematically resemble Gaussian and Dirichlet priors but also improve the interpretability of topic models. Our new regularization approaches make these efficient algorithms more flexible; we also show that these methods can be combined with informed priors.",
"The anchor words algorithm performs provably efficient topic model inference by finding an approximate convex hull in a high-dimensional word co-occurrence space. However, the existing greedy algorithm often selects poor anchor words, reducing topic quality and interpretability. Rather than finding an approximate convex hull in a high-dimensional space, we propose to find an exact convex hull in a visualizable 2- or 3-dimensional space. Such low-dimensional embeddings both improve topics and clearly show users why the algorithm selects certain words."
]
} |
1606.07043 | 2463552202 | Many predictive tasks, such as diagnosing a patient based on their medical chart, are ultimately defined by the decisions of human experts. Unfortunately, encoding experts' knowledge is often time consuming and expensive. We propose a simple way to use fuzzy and informal knowledge from experts to guide discovery of interpretable latent topics in text. The underlying intuition of our approach is that latent factors should be informative about both correlations in the data and a set of relevance variables specified by an expert. Mathematically, this approach is a combination of the information bottleneck and Total Correlation Explanation (CorEx). We give a preliminary evaluation of Anchored CorEx, showing that it produces more coherent and interpretable topics on two distinct corpora. | In health informatics, "anchor" features chosen based on domain knowledge have been used to guide statistical learning. In , anchors are used as a source of distant supervision @cite_17 @cite_5 for classifiers in the absence of ground truth labels. While Anchored CorEx can be used for discriminative tasks, it is essentially unsupervised. Recent work by is perhaps most similar in spirit to ours: they exploit predefined anchors to help learn and impose semantics on a discrete latent factor model with a directed acyclic graph structure. We utilize an information theoretic approach that makes no generative modeling assumptions.
"cite_N": [
"@cite_5",
"@cite_17"
],
"mid": [
"2107598941",
"1954715867"
],
"abstract": [
"Modern models of relation extraction for tasks like ACE are based on supervised learning of relations from small hand-labeled corpora. We investigate an alternative paradigm that does not require labeled corpora, avoiding the domain dependence of ACE-style algorithms, and allowing the use of corpora of any size. Our experiments use Freebase, a large semantic database of several thousand relations, to provide distant supervision. For each pair of entities that appears in some Freebase relation, we find all sentences containing those entities in a large unlabeled corpus and extract textual features to train a relation classifier. Our algorithm combines the advantages of supervised IE (combining 400,000 noisy pattern features in a probabilistic classifier) and unsupervised IE (extracting large numbers of relations from large corpora of any domain). Our model is able to extract 10,000 instances of 102 relations at a precision of 67.6 . We also analyze feature performance, showing that syntactic parse features are particularly helpful for relations that are ambiguous or lexically distant in their expression.",
"Recently, there has been much effort in making databases for Inolecular biology more accessible osld interoperable. However, information in text. form, such as MEDLINE records, remains a greatly underutilized source of biological information. We have begun a research effort aimed at automatically mapping information from text. sources into structured representations, such as knowledge bases. Our approach to this task is to use machine-learning methods to induce routines for extracting facts from text. We describe two learning methods that we have applied to this task -a statistical text classification method, and a relational learning method -and our initial experiments in learning such information-extraction routines. We also present an approach to decreasing the cost of learning information-extraction routines by learning from \"weakly\" labeled training data."
]
} |
1606.06140 | 2466860536 | At a time when information seekers first turn to digital sources for news and opinion, it is critical that we understand the role that social media plays in human behavior. This is especially true when information consumers also act as information producers and editors through their online activity. In order to better understand the effects that editorial ratings have on online human behavior, we report the results of two large-scale in-vivo experiments in social media. We find that small, random rating manipulations on social media posts and comments created significant changes in downstream ratings resulting in significantly different final outcomes. We found positive herding effects for positive treatments on posts, increasing the final rating by 11.02% on average, but not for positive treatments on comments. Contrary to the results of related work, we found negative herding effects for negative treatments on posts and comments, decreasing the final ratings on average, of posts by 5.15% and of comments by 37.4%. Compared to the control group, the probability of reaching a high rating (>=2000) for posts is increased by 24.6% when posts receive the positive treatment and for comments is decreased by 46.6% when comments receive the negative treatment. | In a separate line of work, Sorensen used mistaken omissions of books from the bestsellers list to identify the boost in sales that accompanies the perceived popularity of a book's appearance on the list @cite_2 . Similarly, when the download counters for different software labels were randomly increased, Hanson and Putler found that users are significantly more likely to download software that had the largest counter increase @cite_57 .
Salganik and Watts performed a study to determine the extent to which perception of quality becomes a "self-fulfilling prophecy." In their experiment they inverted the true popularity of songs in an online music marketplace, and found that the perceived but false popularity became real over time @cite_19 . | {
"cite_N": [
"@cite_57",
"@cite_19",
"@cite_2"
],
"mid": [
"1984497613",
"2111910266",
"1745158165"
],
"abstract": [
"This article presents the results of a market experiment in which the perceived popularity of programs on a large commercial online system were manipulated. The download counts of software, which indicate how many previous users had obtained a software program, were artificially increased by repeated downloading. This manipulation was done to one member of a matched program pair, with the manipulation levels set randomly to achieve a balanced experimental design over a set of matched pairs. Subsequent downloads of the programs made by the service's subscribers were then recorded. Substantial additional downloads were induced by the manipulation. Some consumers apparently are drawn to market leaders, further increasing their lead. This raises an important issue regarding the need for verifiability of online popularity measures.",
"Individuals influence each others' decisions about cultural products such as songs, books, and movies; but to what extent can the perception of success become a “self-fulfilling prophecy”? We have explored this question experimentally by artificially inverting the true popularity of songs in an online “music market,” in which 12,207 participants listened to and downloaded songs by unknown bands. We found that most songs experienced self-ful- filling prophecies, in which perceived—but initially false—popularity became real over time. We also found, however, that the inversion was not self-fulfilling for the market as a whole, in part because the very best songs recovered their popularity in the long run. Moreover, the distortion of market information reduced the correlation between appeal and popularity, and led to fewer overall downloads. These results, although partial and speculative, suggest a new approach to the study of cultural markets, and indicate the potential of web-based experiments to explore ...",
"This paper uses detailed weekly data on sales of hardcover fiction books to evaluate the impact of the New York Times bestseller list on sales and product variety. In order to circumvent the obvious problem of simultaneity of sales and bestseller status, the analysis exploits time lags and accidental omissions in the construction of the list. The empirical results indicate that appearing on the list leads to a modest increase in sales for the average book, and that the effect is more dramatic for bestsellers by debut authors. The paper discusses how the additional concentration of demand on top-selling books could lead to a reduction in the privately optimal number of books to publish. However, the data suggest the opposite is true: the market expansion effect of bestseller lists appears to dominate any business stealing from non-bestselling titles."
]
} |
1606.06140 | 2466860536 | At a time when information seekers first turn to digital sources for news and opinion, it is critical that we understand the role that social media plays in human behavior. This is especially true when information consumers also act as information producers and editors through their online activity. In order to better understand the effects that editorial ratings have on online human behavior, we report the results of two large-scale in-vivo experiments in social media. We find that small, random rating manipulations on social media posts and comments created significant changes in downstream ratings resulting in significantly different final outcomes. We found positive herding effects for positive treatments on posts, increasing the final rating by 11.02% on average, but not for positive treatments on comments. Contrary to the results of related work, we found negative herding effects for negative treatments on posts and comments, decreasing the final ratings on average, of posts by 5.15% and of comments by 37.4%. Compared to the control group, the probability of reaching a high rating (>=2000) for posts is increased by 24.6% when posts receive the positive treatment and for comments is decreased by 46.6% when comments receive the negative treatment. | On the practical side, one obvious case of online manipulation is spam, particularly a new type of spam called social spam. Social spam is on the rise, with Nexgate Research reporting a tripling of social spam activity every six months @cite_29 and BusinessWeek magazine reporting that only 40 | {
"cite_N": [
"@cite_29"
],
"mid": [
"2128509431"
],
"abstract": [
"The popularity of social bookmarking sites has made them prime targets for spammers. Many of these systems require an administrator's time and energy to manually filter or remove spam. Here we discuss the motivations of social spam, and present a study of automatic detection of spammers in a social tagging system. We identify and analyze six distinct features that address various properties of social spam, finding that each of these features provides for a helpful signal to discriminate spammers from legitimate users. These features are then used in various machine learning algorithms for classification, achieving over 98 accuracy in detecting social spammers with 2 false positives. These promising results provide a new baseline for future efforts on social spam. We make our dataset publicly available to the research community."
]
} |
1606.06237 | 2472385082 | In this paper, we resolve many of the key algorithmic questions regarding robustness, memory efficiency, and differential privacy of tensor decomposition. We propose simple variants of the tensor power method which enjoy these strong properties. We present the first guarantees for online tensor power method which has a linear memory requirement. Moreover, we present a noise calibrated tensor power method with efficient privacy guarantees. At the heart of all these guarantees lies a careful perturbation analysis derived in this paper which improves upon the existing results significantly. | Stochastic gradient descent (SGD) is an intuitive approach for online tensor decomposition and has been successful in practical large-scale tensor decomposition problems @cite_12 . Despite its simplicity, theoretical properties are particularly hard to establish. @cite_26 considered a variant of the SGD objective and proved its correctness. However, the approach in @cite_26 only works for even-order tensors and its sample complexity dependency upon tensor dimension @math is poor.
"cite_N": [
"@cite_26",
"@cite_12"
],
"mid": [
"2964106499",
"2963396025"
],
"abstract": [
"We analyze stochastic gradient descent for optimizing non-convex functions. In many cases for non-convex functions the goal is to find a reasonable local minimum, and the main concern is that gradient updates are trapped in saddle points. In this paper we identify strict saddle property for non-convex problem that allows for efficient optimization. Using this property we show that from an arbitrary starting point, stochastic gradient descent converges to a local minimum in a polynomial number of iterations. To the best of our knowledge this is the first work that gives global convergence guarantees for stochastic gradient descent on non-convex functions with exponentially many local minima and saddle points. Our analysis can be applied to orthogonal tensor decomposition, which is widely used in learning a rich class of latent variable models. We propose a new optimization formulation for the tensor decomposition problem that has strict saddle property. As a result we get the first online algorithm for orthogonal tensor decomposition with global convergence guarantee.",
"We introduce an online tensor decomposition based approach for two latent variable modeling problems namely, (1) community detection, in which we learn the latent communities that the social actors in social networks belong to, and (2) topic modeling, in which we infer hidden topics of text articles. We consider decomposition of moment tensors using stochastic gradient descent. We conduct optimization of multilinear operations in SGD and avoid directly forming the tensors, to save computational and storage costs. We present optimized algorithm in two platforms. Our GPU-based implementation exploits the parallelism of SIMD architectures to allow for maximum speed-up by a careful optimization of storage and data transfer, whereas our CPU-based implementation uses efficient sparse matrix computations and is suitable for large sparse data sets. For the community detection problem, we demonstrate accuracy and computational efficiency on Facebook, Yelp and DBLP data sets, and for the topic modeling problem, we also demonstrate good performance on the New York Times data set. We compare our results to the state-of-the-art algorithms such as the variational method, and report a gain of accuracy and a gain of several orders of magnitude in the execution time."
]
} |
1606.06237 | 2472385082 | In this paper, we resolve many of the key algorithmic questions regarding robustness, memory efficiency, and differential privacy of tensor decomposition. We propose simple variants of the tensor power method which enjoy these strong properties. We present the first guarantees for online tensor power method which has a linear memory requirement. Moreover, we present a noise calibrated tensor power method with efficient privacy guarantees. At the heart of all these guarantees lies a careful perturbation analysis derived in this paper which improves upon the existing results significantly. | In the @cite_10 model a @math tensor @math is observed and one wishes to recover component @math under the presence of Gaussian random noise @math . @cite_10 shows that @math is sufficient to guarantee approximate recovery of @math and @cite_16 further improves the noise condition to @math via a 4th-order sum-of-squares relaxation. Techniques in both @cite_10 @cite_16 are rather complicated and could be difficult to adapt to memory or privacy constraints. Furthermore, in @cite_10 @cite_16 only one component is considered. On the other hand, @cite_4 shows that @math is sufficient for recovering multiple components from noisy tensors. However, @cite_4 assumes exact computation of rank-1 tensor approximation, which is NP-hard in general. | {
"cite_N": [
"@cite_16",
"@cite_10",
"@cite_4"
],
"mid": [
"1521197246",
"2164071215",
"2242384245"
],
"abstract": [
"We study a statistical model for the tensor principal component analysis problem introduced by Montanari and Richard: Given a order- @math tensor @math of the form @math , where @math is a signal-to-noise ratio, @math is a unit vector, and @math is a random noise tensor, the goal is to recover the planted vector @math . For the case that @math has iid standard Gaussian entries, we give an efficient algorithm to recover @math whenever @math , and certify that the recovered vector is close to a maximum likelihood estimator, all with high probability over the random choice of @math . The previous best algorithms with provable guarantees required @math . In the regime @math , natural tensor-unfolding-based spectral relaxations for the underlying optimization problem break down (in the sense that their integrality gap is large). To go beyond this barrier, we use convex relaxations based on the sum-of-squares method. Our recovery algorithm proceeds by rounding a degree- @math sum-of-squares relaxations of the maximum-likelihood-estimation problem for the statistical model. To complement our algorithmic results, we show that degree- @math sum-of-squares relaxations break down for @math , which demonstrates that improving our current guarantees (by more than logarithmic factors) would require new techniques or might even be intractable. Finally, we show how to exploit additional problem structure in order to solve our sum-of-squares relaxations, up to some approximation, very efficiently. Our fastest algorithm runs in nearly-linear time using shifted (matrix) power iteration and has similar guarantees as above. The analysis of this algorithm also confirms a variant of a conjecture of Montanari and Richard about singular vectors of tensor unfoldings.",
"We consider the Principal Component Analysis problem for large tensors of arbitrary order @math under a single-spike (or rank-one plus noise) model. On the one hand, we use information theory, and recent results in probability theory, to establish necessary and sufficient conditions under which the principal component can be estimated using unbounded computational resources. It turns out that this is possible as soon as the signal-to-noise ratio @math becomes larger than @math (and in particular @math can remain bounded as the problem dimensions increase). On the other hand, we analyze several polynomial-time estimation algorithms, based on tensor unfolding, power iteration and message passing ideas from graphical models. We show that, unless the signal-to-noise ratio diverges in the system dimensions, none of these approaches succeeds. This is possibly related to a fundamental limitation of computationally tractable estimators for this problem. We discuss various initializations for tensor power iteration, and show that a tractable initialization based on the spectrum of the matricized tensor outperforms significantly baseline methods, statistically and computationally. Finally, we consider the case in which additional side information is available about the unknown signal. We characterize the amount of side information that allows the iterative algorithms to converge to a good estimate.",
"Many idealized problems in signal processing, machine learning, and statistics can be reduced to the problem of finding the symmetric canonical decomposition of an underlying symmetric and orthogonally decomposable (SOD) tensor. Drawing inspiration from the matrix case, the successive rank-one approximation (SROA) scheme has been proposed and shown to yield this tensor decomposition exactly, and a plethora of numerical methods have thus been developed for the tensor rank-one approximation problem. In practice, however, the inevitable errors (say) from estimation, computation, and modeling necessitate that the input tensor can only be assumed to be a nearly SOD tensor---i.e., a symmetric tensor slightly perturbed from the underlying SOD tensor. This paper shows that even in the presence of perturbation, SROA can still robustly recover the symmetric canonical decomposition of the underlying tensor. It is shown that when the perturbation error is small enough, the approximation errors do not accumulate with ..."
]
} |
1606.06237 | 2472385082 | In this paper, we resolve many of the key algorithmic questions regarding robustness, memory efficiency, and differential privacy of tensor decomposition. We propose simple variants of the tensor power method which enjoy these strong properties. We present the first guarantees for online tensor power method which has a linear memory requirement. Moreover, we present a noise calibrated tensor power method with efficient privacy guarantees. At the heart of all these guarantees lies a careful perturbation analysis derived in this paper which improves upon the existing results significantly. | Our relaxed noise condition analysis for tensor power method is inspired by recent analysis of noisy matrix power methods @cite_8 @cite_18 . Unlike the matrix case, tensor decomposition no longer requires an eigen-gap among eigenvalues, and eigenvectors are usually recovered one at a time @cite_23 @cite_13 . This poses new challenges and requires non-trivial extensions of matrix power method analysis to the tensor case. | {
"cite_N": [
"@cite_18",
"@cite_13",
"@cite_23",
"@cite_8"
],
"mid": [
"2277079409",
"1856913691",
"2105724942",
"2952199998"
],
"abstract": [
"We consider the noisy power method algorithm, which has wide applications in machine learning and statistics, especially those related to principal component analysis (PCA) under resource (communication, memory or privacy) constraints. Existing analysis of the noisy power method shows an unsatisfactory dependency over the \"consecutive\" spectral gap @math of an input data matrix, which could be very small and hence limits the algorithm's applicability. In this paper, we present a new analysis of the noisy power method that achieves improved gap dependency for both sample complexity and noise tolerance bounds. More specifically, we improve the dependency over @math to dependency over @math , where @math is an intermediate algorithm parameter and could be much larger than the target rank @math . Our proofs are built upon a novel characterization of proximity between two subspaces that differ from canonical angle characterizations analyzed in previous works. Finally, we apply our improved bounds to distributed private PCA and memory-efficient streaming PCA and obtain bounds that are superior to existing results in the literature.",
"We provide guarantees for learning latent variable models emphasizing on the overcomplete regime, where the dimensionality of the latent space exceeds the observed dimensionality. In particular, we consider multiview mixtures, ICA, and sparse coding models. Our main tool is a new algorithm for tensor decomposition that works in the overcomplete regime. In the semi-supervised setting, we exploit label information to get a rough estimate of the model parameters, and then refine it using the tensor method on unlabeled samples. We establish learning guarantees when the number of components scales as k = o(d), where d is the observed dimension, and p is the order of the observed moment employed in the tensor method (usually p = 3, 4). In the unsupervised setting, a simple initialization algorithm based on SVD of the tensor slices is proposed, and the guarantees are provided under the stricter condition that k ≤ βd (where constant β can be larger than 1). For the learning applications, we provide tight sample complexity bounds through novel covering arguments.",
"This work considers a computationally and statistically efficient parameter estimation method for a wide class of latent variable models--including Gaussian mixture models, hidden Markov models, and latent Dirichlet allocation--which exploits a certain tensor structure in their low-order observable moments (typically, of second- and third-order). Specifically, parameter estimation is reduced to the problem of extracting a certain (orthogonal) decomposition of a symmetric tensor derived from the moments; this decomposition can be viewed as a natural generalization of the singular value decomposition for matrices. Although tensor decompositions are generally intractable to compute, the decomposition of these specially structured tensors can be efficiently obtained by a variety of approaches, including power iterations and maximization approaches (similar to the case of matrices). A detailed analysis of a robust tensor power method is provided, establishing an analogue of Wedin's perturbation theorem for the singular vectors of matrices. This implies a robust and computationally tractable estimation approach for several popular latent variable models.",
"We provide a new robust convergence analysis of the well-known power method for computing the dominant singular vectors of a matrix that we call the noisy power method. Our result characterizes the convergence behavior of the algorithm when a significant amount noise is introduced after each matrix-vector multiplication. The noisy power method can be seen as a meta-algorithm that has recently found a number of important applications in a broad range of machine learning problems including alternating minimization for matrix completion, streaming principal component analysis (PCA), and privacy-preserving spectral analysis. Our general analysis subsumes several existing ad-hoc convergence bounds and resolves a number of open problems in multiple applications including streaming PCA and privacy-preserving singular vector computation."
]
} |
1606.06266 | 2464375869 | Recent advances in AI and robotics have claimed many incredible results with deep learning, yet no work to date has applied deep learning to the problem of liquid perception and reasoning. In this paper, we apply fully-convolutional deep neural networks to the tasks of detecting and tracking liquids. We evaluate three models: a single-frame network, multi-frame network, and a LSTM recurrent network. Our results show that the best liquid detection results are achieved when aggregating data over multiple frames, in contrast to standard image segmentation. They also show that the LSTM network outperforms the other two in both tasks. This suggests that LSTM-based neural networks have the potential to be a key component for enabling robots to handle liquids using robust, closed-loop controllers. | In order to perceive liquids at the pixel level, we make use of fully-convolutional neural networks (FCN). FCNs have been successfully applied to the task of image segmentation in the past @cite_7 @cite_4 @cite_1 and are a natural fit for pixel-wise classification. In addition to FCNs, we utilize long short-term memory (LSTM) @cite_11 recurrent cells to reason about the temporal evolution of liquids. LSTMs are preferable over more standard recurrent networks for long-term memory as they overcome many of the numerical issues during training such as exploding gradients @cite_5 . LSTM-based CNNs have been successfully applied to many temporal memory tasks by previous work @cite_10 @cite_1 , and in fact @cite_1 even combine LSTMs and FCNs by replacing the standard fully-connected layers of their LSTMs with @math convolution layers. We use a similar method in this paper. | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_1",
"@cite_5",
"@cite_10",
"@cite_11"
],
"mid": [
"1884191083",
"2952632681",
"",
"1689711448",
"2118688707",
""
],
"abstract": [
"Abstract In this paper, we present a fully automatic brain tumor segmentation method based on Deep Neural Networks (DNNs). The proposed networks are tailored to glioblastomas (both low and high grade) pictured in MR images. By their very nature, these tumors can appear anywhere in the brain and have almost any kind of shape, size, and contrast. These reasons motivate our exploration of a machine learning solution that exploits a flexible, high capacity DNN while being extremely efficient. Here, we give a description of different model choices that we’ve found to be necessary for obtaining competitive performance. We explore in particular different architectures based on Convolutional Neural Networks (CNN), i.e. DNNs specifically adapted to image data. We present a novel CNN architecture which differs from those traditionally used in computer vision. Our CNN exploits both local features as well as more global contextual features simultaneously. Also, different from most traditional uses of CNNs, our networks use a final layer that is a convolutional implementation of a fully connected layer which allows a 40 fold speed up. We also describe a 2-phase training procedure that allows us to tackle difficulties related to the imbalance of tumor labels. Finally, we explore a cascade architecture in which the output of a basic CNN is treated as an additional source of information for a subsequent CNN. Results reported on the 2013 BRATS test data-set reveal that our architecture improves over the currently published state-of-the-art while being over 30 times faster.",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.",
"",
"Several variants of the long short-term memory (LSTM) architecture for recurrent neural networks have been proposed since its inception in 1995. In recent years, these networks have become the state-of-the-art models for a variety of machine learning problems. This has led to a renewed interest in understanding the role and utility of various computational components of typical LSTM variants. In this paper, we present the first large-scale analysis of eight LSTM variants on three representative tasks: speech recognition, handwriting recognition, and polyphonic music modeling. The hyperparameters of all LSTM variants for each task were optimized separately using random search, and their importance was assessed using the powerful functional ANalysis Of VAriance framework. In total, we summarize the results of 5400 experimental runs ( @math years of CPU time), which makes our study the largest of its kind on LSTM networks. Our results show that none of the variants can improve upon the standard LSTM architecture significantly, and demonstrate the forget gate and the output activation function to be its most critical components. We further observe that the studied hyperparameters are virtually independent and derive guidelines for their efficient adjustment.",
"Motivated by vision-based reinforcement learning (RL) problems, in particular Atari games from the recent benchmark Aracade Learning Environment (ALE), we consider spatio-temporal prediction problems where future (image-)frames are dependent on control variables or actions as well as previous frames. While not composed of natural scenes, frames in Atari games are high-dimensional in size, can involve tens of objects with one or more objects being controlled by the actions directly and many other objects being influenced indirectly, can involve entry and departure of objects, and can involve deep partial observability. We propose and evaluate two deep neural network architectures that consist of encoding, action-conditional transformation, and decoding layers based on convolutional neural networks and recurrent neural networks. Experimental results show that the proposed architectures are able to generate visually-realistic frames that are also useful for control over approximately 100-step action-conditional futures in some games. To the best of our knowledge, this paper is the first to make and evaluate long-term predictions on high-dimensional video conditioned by control inputs.",
""
]
} |
1606.06357 | 2432356473 | In statistical relational learning, the link prediction problem is key to automatically understand the structure of large knowledge bases. As in previous studies, we propose to solve this problem through latent factorization. However, here we make use of complex valued embeddings. The composition of complex embeddings can handle a large variety of binary relations, among them symmetric and antisymmetric relations. Compared to state-of-the-art models such as Neural Tensor Network and Holographic Embeddings, our approach based on complex embeddings is arguably simpler, as it only uses the Hermitian dot product, the complex counterpart of the standard dot product between real vectors. Our approach is scalable to large datasets as it remains linear in both space and time, while consistently outperforming alternative approaches on standard link prediction benchmarks. | Pairwise interaction models were also considered to improve prediction performances. For example, the Universal Schema approach @cite_15 factorizes a 2D unfolding of the tensor (a matrix of entity pairs vs. relations), while subsequent work extends this also to other pairs. | {
"cite_N": [
"@cite_15"
],
"mid": [
"1852412531"
],
"abstract": [
"© 2013 Association for Computational Linguistics. Traditional relation extraction predicts relations within some fixed and finite target schema. Machine learning approaches to this task require either manual annotation or, in the case of distant supervision, existing structured sources of the same schema. The need for existing datasets can be avoided by using a universal schema: the union of all involved schemas (surface form predicates as in OpenIE, and relations in the schemas of preexisting databases). This schema has an almost unlimited set of relations (due to surface forms), and supports integration with existing structured data (through the relation types of existing databases). To populate a database of such schema we present matrix factorization models that learn latent feature vectors for entity tuples and relations. We show that such latent models achieve substantially higher accuracy than a traditional classification approach. More importantly, by operating simultaneously on relations observed in text and in pre-existing structured DBs such as Freebase, we are able to reason about unstructured and structured data in mutually-supporting ways. By doing so our approach outperforms stateof- the-Art distant supervision."
]
} |
1606.06357 | 2432356473 | In statistical relational learning, the link prediction problem is key to automatically understand the structure of large knowledge bases. As in previous studies, we propose to solve this problem through latent factorization. However, here we make use of complex valued embeddings. The composition of complex embeddings can handle a large variety of binary relations, among them symmetric and antisymmetric relations. Compared to state-of-the-art models such as Neural Tensor Network and Holographic Embeddings, our approach based on complex embeddings is arguably simpler, as it only uses the Hermitian dot product, the complex counterpart of the standard dot product between real vectors. Our approach is scalable to large datasets as it remains linear in both space and time, while consistently outperforming alternative approaches on standard link prediction benchmarks. | The original multi-linear DistMult model is symmetric in subject and object for every relation @cite_1 and achieves good performance, presumably due to its simplicity. The TransE model from Bordes et al. also embeds entities and relations in the same space and imposes a geometrical structural bias into the model: the subject entity vector should be close to the object entity vector once translated by the relation vector. | {
"cite_N": [
"@cite_1"
],
"mid": [
"1533230146"
],
"abstract": [
"Abstract: We consider learning representations of entities and relations in KBs using the neural-embedding approach. We show that most existing models, including NTN (, 2013) and TransE (, 2013b), can be generalized under a unified learning framework, where entities are low-dimensional vectors learned from a neural network and relations are bilinear and or linear mapping functions. Under this framework, we compare a variety of embedding models on the link prediction task. We show that a simple bilinear formulation achieves new state-of-the-art results for the task (achieving a top-10 accuracy of 73.2 vs. 54.7 by TransE on Freebase). Furthermore, we introduce a novel approach that utilizes the learned relation embeddings to mine logical rules such as \"BornInCity(a,b) and CityInCountry(b,c) => Nationality(a,c)\". We find that embeddings learned from the bilinear objective are particularly good at capturing relational semantics and that the composition of relations is characterized by matrix multiplication. More interestingly, we demonstrate that our embedding-based rule extraction approach successfully outperforms a state-of-the-art confidence-based rule mining approach in mining Horn rules that involve compositional reasoning."
]
} |
1606.06357 | 2432356473 | In statistical relational learning, the link prediction problem is key to automatically understand the structure of large knowledge bases. As in previous studies, we propose to solve this problem through latent factorization. However, here we make use of complex valued embeddings. The composition of complex embeddings can handle a large variety of binary relations, among them symmetric and antisymmetric relations. Compared to state-of-the-art models such as Neural Tensor Network and Holographic Embeddings, our approach based on complex embeddings is arguably simpler, as it only uses the Hermitian dot product, the complex counterpart of the standard dot product between real vectors. Our approach is scalable to large datasets as it remains linear in both space and time, while consistently outperforming alternative approaches on standard link prediction benchmarks. | A recent novel way to handle antisymmetry is via the Holographic Embeddings (HolE) model by @cite_3 . In HolE the circular correlation is used for combining entity embeddings, measuring the covariance between embeddings at different dimension shifts. This generally suggests that other composition functions than the classical tensor product can be helpful as they allow for a richer interaction of embeddings. However, the asymmetry in the composition function in HolE stems from the asymmetry of circular correlation, an @math operation, whereas ours is inherited from the complex inner product, in @math . | {
"cite_N": [
"@cite_3"
],
"mid": [
"2145544171"
],
"abstract": [
"Learning embeddings of entities and relations is an efficient and versatile method to perform machine learning on relational data such as knowledge graphs. In this work, we propose holographic embeddings (HOLE) to learn compositional vector space representations of entire knowledge graphs. The proposed method is related to holographic models of associative memory in that it employs circular correlation to create compositional representations. By using correlation as the compositional operator, HOLE can capture rich interactions but simultaneously remains efficient to compute, easy to train, and scalable to very large datasets. Experimentally, we show that holographic embeddings are able to outperform state-of-the-art methods for link prediction on knowledge graphs and relational learning benchmark datasets."
]
} |
1606.06279 | 2469717596 | An intriguing open question is whether measurements derived from Big Data recording human activities can yield high-fidelity proxies of socio-economic development and well-being. Can we monitor and predict the socio-economic development of a territory just by observing the behavior of its inhabitants through the lens of Big Data? In this paper, we design a data-driven analytical framework that uses mobility measures and social measures extracted from mobile phone data to estimate indicators for socio-economic development and well-being. We discover that the diversity of mobility, defined in terms of entropy of the individual users’ trajectories, exhibits (i) significant correlation with two different socio-economic indicators and (ii) the highest importance in predictive models built to predict the socio-economic indicators. Our analytical framework opens an interesting perspective to study human behavior through the lens of Big Data by means of new statistical indicators that quantify and possibly “nowcast” the well-being and the socio-economic development of a territory. | The interest around the analysis of Big Data and the possibility to compile them into a comprehensive picture of human behavior have infected all branches of human knowledge, from sports @cite_25 to economy @cite_6 . However, two aspects in particular attracted the interest of scientists in the last decade, due the striking abundance of data in those contexts: human mobility and social networks. | {
"cite_N": [
"@cite_25",
"@cite_6"
],
"mid": [
"2187648190",
"2137734816"
],
"abstract": [
"Sports analytics in general, and football (soccer in USA) analytics in particular, have evolved in recent years in an amazing way, thanks to automated or semi-automated sensing technologies that provide high-fidelity data streams extracted from every game. In this paper we propose a data-driven approach and show that there is a large potential to boost the understanding of football team performance. From observational data of football games we extract a set of pass-based performance indicators and summarize them in the H indicator. We observe a strong correlation among the proposed indicator and the success of a team, and therefore perform a simulation on the four major European championships (78 teams, almost 1500 games). The outcome of each game in the championship was replaced by a synthetic outcome (win, loss or draw) based on the performance indicators computed for each team. We found that the final rankings in the simulated championships are very close to the actual rankings in the real championships, and show that teams with high ranking error show extreme values of a defense attack efficiency measure, the Pezzali score. Our results are surprising given the simplicity of the proposed indicators, suggesting that a complex systems' view on football data has the potential of revealing hidden patterns and behavior of superior quality.",
"Aim of this paper is to introduce the complex system perspective into retail market analysis. Currently, to understand the retail market means to search for local patterns at the micro level, involving the segmentation, separation and profiling of diverse groups of consumers. In other contexts, however, markets are modelled as complex systems. Such strategy is able to uncover emerging regularities and patterns that make markets more predictable, e.g. enabling to predict how much a country’s GDP will grow. Rather than isolate actors in homogeneous groups, this strategy requires to consider the system as a whole, as the emerging pattern can be detected only as a result of the interaction between its self-organizing parts. This assumption holds also in the retail market: each customer can be seen as an independent unit maximizing its own utility function. As a consequence, the global behaviour of the retail market naturally emerges, enabling a novel description of its properties, complementary to the local pattern approach. Such task demands for a data-driven empirical framework. In this paper, we analyse a unique transaction database, recording the micro-purchases of a million customers observed for several years in the stores of a national supermarket chain. We show the emergence of the fundamental pattern of this complex system, connecting the products’ volumes of sales with the customers’ volumes of purchases. This pattern has a number of applications. We provide three of them. By enabling us to evaluate the sophistication of needs that a customer has and a product satisfies, this pattern has been applied to the task of uncovering the hierarchy of needs of the customers, providing a hint about what is the next product a customer could be interested in buying and predicting in which shop she is likely to go to buy it."
]
} |
1606.06279 | 2469717596 | An intriguing open question is whether measurements derived from Big Data recording human activities can yield high-fidelity proxies of socio-economic development and well-being. Can we monitor and predict the socio-economic development of a territory just by observing the behavior of its inhabitants through the lens of Big Data? In this paper, we design a data-driven analytical framework that uses mobility measures and social measures extracted from mobile phone data to estimate indicators for socio-economic development and well-being. We discover that the diversity of mobility, defined in terms of entropy of the individual users’ trajectories, exhibits (i) significant correlation with two different socio-economic indicators and (ii) the highest importance in predictive models built to predict the socio-economic indicators. Our analytical framework opens an interesting perspective to study human behavior through the lens of Big Data by means of new statistical indicators that quantify and possibly “nowcast” the well-being and the socio-economic development of a territory. | Despite an increasing interest around this field, a view on the state-of-the-art cannot avoid noticing that there is no unified methodology to exploit Big Data for official statistics. It is also surprising that widely accepted measures of human mobility (e.g. radius of gyration @cite_45 and mobility entropy @cite_1 ) have not been used so far. We overcome these issues by providing an analytical framework as support for official statistics, which allows for a systematic evaluation of the relations between relevant aspects of human behavior and the development of a territory. Moreover, our paper shows how standard mobility measures, not exploited so far, are powerful tools for official statistics purposes. | {
"cite_N": [
"@cite_45",
"@cite_1"
],
"mid": [
"1982300822",
"1987228002"
],
"abstract": [
"This study used a sample of 100,000 mobile phone users whose trajectory was tracked for six months to study human mobility patterns. Displacements across all users suggest behaviour close to the Levy-flight-like pattern observed previously based on the motion of marked dollar bills, but with a cutoff in the distribution. The origin of the Levy patterns observed in the aggregate data appears to be population heterogeneity and not Levy patterns at the level of the individual.",
"A range of applications, from predicting the spread of human and electronic viruses to city planning and resource management in mobile communications, depend on our ability to foresee the whereabouts and mobility of individuals, raising a fundamental question: To what degree is human behavior predictable? Here we explore the limits of predictability in human dynamics by studying the mobility patterns of anonymized mobile phone users. By measuring the entropy of each individual’s trajectory, we find a 93 potential predictability in user mobility across the whole user base. Despite the significant differences in the travel patterns, we find a remarkable lack of variability in predictability, which is largely independent of the distance users cover on a regular basis."
]
} |
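The two mobility measures named in this row's related-work text (radius of gyration @cite_45 and mobility entropy @cite_1) are easy to sketch. The snippet below uses toy trajectory data, and for the entropy it uses the visit-frequency (temporal-uncorrelated) variant, which is only one of the variants considered in the cited work:

```python
import math
from collections import Counter

def radius_of_gyration(points):
    """Root-mean-square distance of a user's visited points from their
    centre of mass -- the 'radius of gyration' mobility measure."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    return math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in points) / n)

def mobility_entropy(locations):
    """Shannon entropy of the distribution of visits over distinct
    locations; higher values mean a more diverse trajectory."""
    counts = Counter(locations)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A toy trajectory: (x, y) positions, plus labels of the cells visited.
traj = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (2.0, 2.0)]
print(radius_of_gyration(traj))  # RMS distance from the centre (1, 1)
print(mobility_entropy(["home", "work", "home", "shop"]))  # → 1.5
```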
1606.06437 | 2469686004 | This paper introduces a fast and efficient segmentation technique for 2D images and 3D point clouds of building facades. Facades of buildings are highly structured and consequently most methods that have been proposed for this problem aim to make use of this strong prior information. Contrary to most prior work, we are describing a system that is almost domain independent and consists of standard segmentation methods. We train a sequence of boosted decision trees using auto-context features. This is learned using stacked generalization. We find that this technique performs better, or comparable with all previous published methods and present empirical results on all available 2D and 3D facade benchmark datasets. The proposed method is simple to implement, easy to extend, and very efficient at test-time inference. | Recently, techniques have been introduced for facade understanding and modeling in 3D @cite_13 @cite_11 . The 3D point clouds or meshes that these methods operate on are constructed using 2D images captured from multiple viewpoints. A standard way to label a 3D mesh or a point cloud is to label all the overlapping images used for reconstructing the 3D model and then fuse the 2D predictions to obtain a consistently labeled 3D model @cite_18 @cite_31 . The work of @cite_13 proposed a fast technique to segment 3D facade meshes by exploiting the geometry of the reconstructed 3D model. To label a mesh face, their approach selects a single 2D image (from the set of images used for reconstruction) that best captures the semantics. The speed of this technique comes at the cost of performance. The method of @cite_11 implements a three-stage approach to label point clouds of facades directly in 3D. First, features on 3D points are computed and are classified into various semantic classes. Next, facades belonging to different buildings are separated based on previously obtained semantics. 
Finally, weak architectural rules are applied to enforce structural priors, leading to marginal improvements in performance (0.78%) over the initial predictions. | {
"cite_N": [
"@cite_31",
"@cite_18",
"@cite_13",
"@cite_11"
],
"mid": [
"1542723449",
"1610707153",
"",
"1903208982"
],
"abstract": [
"This paper presents a simple and effective nonparametric approach to the problem of image parsing, or labeling image regions (in our case, superpixels produced by bottom-up segmentation) with their categories. This approach requires no training, and it can easily scale to datasets with tens of thousands of images and hundreds of labels. It works by scene-level matching with global image descriptors, followed by superpixel-level matching with local features and efficient Markov random field (MRF) optimization for incorporating neighborhood context. Our MRF setup can also compute a simultaneous labeling of image regions into semantic classes (e.g., tree, building, car) and geometric classes (sky, vertical, ground). Our system outperforms the state-of-the-art non-parametric method based on SIFT Flow on a dataset of 2,688 images and 33 labels. In addition, we report per-pixel rates on a larger dataset of 15,150 images and 170 labels. To our knowledge, this is the first complete evaluation of image parsing on a dataset of this size, and it establishes a new benchmark for the problem.",
"Computer vision algorithms for individual tasks such as object recognition, detection and segmentation have shown impressive results in the recent past. The next challenge is to integrate all these algorithms and address the problem of scene understanding. This paper is a step towards this goal. We present a probabilistic framework for reasoning about regions, objects, and their attributes such as object class, location, and spatial extent. Our model is a Conditional Random Field defined on pixels, segments and objects. We define a global energy function for the model, which combines results from sliding window detectors, and low-level pixel-based unary and pairwise relations. One of our primary contributions is to show that this energy function can be solved efficiently. Experimental results show that our model achieves significant improvement over the baseline methods on CamVid and PASCAL VOC datasets.",
"",
"We propose a new approach for semantic segmentation of 3D city models. Starting from an SfM reconstruction of a street-side scene, we perform classification and facade splitting purely in 3D, obviating the need for slow image-based semantic segmentation methods. We show that a properly trained pure-3D approach produces high quality labelings, with significant speed benefits (20x faster) allowing us to analyze entire streets in a matter of minutes. Additionally, if speed is not of the essence, the 3D labeling can be combined with the results of a state-of-the-art 2D classifier, further boosting the performance. Further, we propose a novel facade separation based on semantic nuances between facades. Finally, inspired by the use of architectural principles for 2D facade labeling, we propose new 3D-specific principles and an efficient optimization scheme based on an integer quadratic programming formulation."
]
} |
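The abstract in this row describes training "a sequence of boosted decision trees using auto-context features", learned via stacked generalization. A minimal sketch of that training loop, with synthetic data standing in for per-pixel facade features and scikit-learn's boosted trees; the stage count and feature dimensions are illustrative, not the authors' configuration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_predict

# Toy stand-in for per-pixel facade features (colour, texture, location, ...).
X, y = make_classification(n_samples=400, n_features=10, random_state=0)

# Auto-context stacking: each stage sees the raw features plus the previous
# stage's class-probability estimates. Stacked generalization means those
# estimates are produced out-of-fold, so later stages never see overfit scores.
stages, feats = [], X
for _ in range(3):
    clf = GradientBoostingClassifier(n_estimators=50, random_state=0)
    context = cross_val_predict(clf, feats, y, cv=3, method="predict_proba")
    stages.append(clf.fit(feats, y))
    feats = np.hstack([X, context])

def predict(stages, X_new):
    """Test-time inference: run the stages in sequence, feeding each
    stage's probabilities to the next as auto-context features."""
    feats, proba = X_new, None
    for clf in stages:
        proba = clf.predict_proba(feats)
        feats = np.hstack([X_new, proba])
    return proba.argmax(axis=1)
```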
1606.06488 | 2461268924 | To produce cartographic maps, simplification is typically used to reduce complexity of the map to a legible level. With schematic maps, however, this simplification is pushed far beyond the legibility threshold and is instead constrained by functional need and resemblance. Moreover, stylistic geometry is often used to convey the schematic nature of the map. In this paper we explore discretized approaches to computing a schematic shape @math for a simple polygon @math . We do so by overlaying a plane graph @math on @math as the solution space for the schematic shape. Topological constraints imply that @math should describe a simple polygon. We investigate two approaches, simple map matching and connected face selection, based on commonly used similarity metrics. With the former, @math is a simple cycle @math in @math and we quantify resemblance via the Frechet distance. We prove that it is NP-hard to compute a cycle that approximates the minimal Frechet distance over all simple cycles in a plane graph @math . This result holds even if @math is a partial grid graph, if area preservation is required and if we assume a given sequence of turns is specified. With the latter, @math is a connected face set in @math , quantifying resemblance via the symmetric difference. Though the symmetric difference seems a less strict measure, we prove that it is NP-hard to compute the optimal face set. This result holds even if @math is full grid graph or a triangular or hexagonal tiling, and if area preservation is required. Moreover, it is independent of whether we allow the set of faces to have holes or not. | Schematization Line and network schematization (e.g. transit maps) have received significant attention in the algorithmic literature, e.g. @cite_19 @cite_11 @cite_16 @cite_20 . Recently, schematization of geographic regions has gained increasing attention, e.g. @cite_14 @cite_28 @cite_8 . 
Our discretized approach is similar in nature to the octilinear schematization technique of Cicerone and Cermignani @cite_5 , though simplicity is of no concern in their work. As mentioned in the introduction, the discretized approach offers conceptual advantages over the existing nondiscretized methods. | {
"cite_N": [
"@cite_14",
"@cite_8",
"@cite_28",
"@cite_19",
"@cite_5",
"@cite_16",
"@cite_20",
"@cite_11"
],
"mid": [
"2339610053",
"2147143017",
"2784031",
"",
"192961874",
"2103359219",
"2124504009",
"2031367758"
],
"abstract": [
"In this article, we study automated simplification and schematization of territorial outlines. We present a quadratic-time simplification algorithm based on an operation called edge-move. We prove that the number of edges of any nonconvex simple polygon can be reduced with this operation. Moreover, edge-moves preserve area and topology and do not introduce new orientations. The latter property in particular makes the algorithm highly suitable for schematization in which all resulting lines are required to be parallel to one of a given set of lines (orientations). To obtain such a result, we need only to preprocess the input to use only lines that are parallel to one of the given set. We present an algorithm to enforce such orientation restrictions, again without changing area or topology. Experiments show that our algorithms obtain results of high visual quality.",
"Hand-drawn schematized maps traditionally make extensive use of curves. However, there are few automated approaches for curved schematization; most previous work focuses on straight lines. We present a new algorithm for area-preserving curved schematization of territorial outlines. Our algorithm converts a simple polygon into a schematic crossing-free representation using circular arcs. We use two basic operations to iteratively replace consecutive arcs until the desired complexity is reached. Our results are not restricted to arcs ending at input vertices. The method can be steered towards different degrees of “curviness”: we can encourage or discourage the use of arcs with a large central angle via a single parameter. Our method creates visually pleasing results even for very low output complexities. To evaluate the effectiveness of our design choices, we present a geometric evaluation of the resulting schematizations. Besides the geometric qualities of our algorithm, we also investigate the potential of curved schematization as a concept. We conducted an online user study investigating the effectiveness of curved schematizations compared to straight-line schematizations. While the visual complexity of curved shapes was judged higher than that of straight-line shapes, users generally preferred curved schematizations. We observed that curves significantly improved the ability of users to match schematized shapes of moderate complexity to their unschematized equivalents.",
"We present an algorithm to compute schematic maps with circular arcs. Our algorithm iteratively replaces two consecutive arcs with a single arc to reduce the complexity of the output map and thus to increase its level of abstraction. Our main contribution is a method for replacing arcs that meet at high-degree vertices. This allows us to greatly reduce the output complexity, even for dense networks. We experimentally evaluate the effectiveness of our algorithm in three scenarios: territorial outlines, road networks, and metro maps. For the latter, we combine our approach with an algorithm to more evenly distribute stations. Our experiments show that our algorithm produces high-quality results for territorial outlines and metro maps. However, the lack of caricature (exaggeration of typical features) makes it less useful for road networks.",
"",
"The schematization of simple polygons into octilinear polygons is a computational problem arising in the context of electronic CAD systems. In particular, it appears when the generation and propagation of electromagnetic noise into real-world multi-layer PCBs has to be detected. In this paper we provide a fast and simple heuristic for solving such a problem. Experimental evaluations over real-world data show the efficiency and the good performance of the proposed approach.",
"The automatic layout of metro maps has been investigated quite intensely over the last few years. Previous work has focused on the octilinear drawing style where edges are drawn horizontally, vertically, or diagonally at 45°. Inspired by manually created curvy metro maps, we advocate the use of the curvilinear drawing style; we draw edges as Bezier curves. Since we forbid metro lines to bend (even in stations), the user of such a map can trace the metro lines easily. In order to create such drawings, we use the force-directed framework. Our method is the first that directly represents and operates on edges as curves.",
"Metro maps are schematic diagrams of public transport networks that serve as visual aids for route planning and navigation tasks. It is a challenging problem in network visualization to automatically draw appealing metro maps. There are two aspects to this problem that depend on each other: the layout problem of finding station and link coordinates and the labeling problem of placing nonoverlapping station labels. In this paper, we present a new integral approach that solves the combined layout and labeling problem (each of which, independently, is known to be NP-hard) using mixed-integer programming (MIP). We identify seven design rules used in most real-world metro maps. We split these rules into hard and soft constraints and translate them into an MIP model. Our MIP formulation finds a metro map that satisfies all hard constraints (if such a drawing exists) and minimizes a weighted sum of costs that correspond to the soft constraints. We have implemented the MIP model and present a case study and the results of an expert assessment to evaluate the performance of our approach in comparison to both manually designed official maps and results of previous layout methods.",
"Motivated by drawing route sketches, we consider the d-regular path schematization problem. For this problem we are given an embedded path P (e.g., a route in a road network) and a positive integer d. The goal is to find a d-schematized embedding of P in which the orthogonal order of all vertices in the input is preserved and in which every edge has a direction that is an integer multiple of (90 d)^o. We show that deciding whether a path can be d-schematized is NP-complete for any positive integer d. Despite the NP-hardness of the problem we still want to be able to generate route sketches and thus need to solve the d-regular path schematization problem. We explore two different algorithmic approaches, both of which consider two additional quality constraints: We require that every edge is drawn with a user-specified minimum length and we want to maximize the number of edges that are drawn with their preferred direction. The first algorithmic approach restricts the input paths to be axis-monotone. We show that there exists a polynomial-time algorithm that solves the d-regular path schematization problem for this restricted class of embedded paths. We extend this approach by a heuristic such that it can handle arbitrary simple paths. However, for the second step we cannot guarantee that the orthogonal order of the input embedding is maintained. The second approach is a formulation of the d-regular path schematization problem as a mixed integer linear program. Finally, we give an experimental evaluation which shows that both approaches generate reasonable route sketches for real-world data."
]
} |
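The simple-map-matching variant in the row above quantifies resemblance via the Fréchet distance. The discrete Fréchet distance (the classic Eiter–Mannila dynamic program) is the standard computable stand-in for the continuous measure; a short sketch:

```python
import math
from functools import lru_cache

def discrete_frechet(P, Q):
    """Discrete Frechet distance between two polylines: the minimum, over
    all monotone couplings of their vertices, of the maximum pointwise
    distance (Eiter & Mannila's dynamic program)."""
    @lru_cache(maxsize=None)
    def c(i, j):
        d = math.dist(P[i], Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)

    return c(len(P) - 1, len(Q) - 1)

# Two parallel three-vertex paths at vertical offset 1 -> distance 1.0.
print(discrete_frechet([(0, 0), (1, 0), (2, 0)], [(0, 1), (1, 1), (2, 1)]))
```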
1606.06108 | 2475269242 | Visual question answering (VQA) task not only bridges the gap between images and language, but also requires that specific contents within the image are understood as indicated by linguistic context of the question, in order to generate the accurate answers. Thus, it is critical to build an efficient embedding of images and texts. We implement DualNet, which fully takes advantage of discriminative power of both image and textual features by separately performing two operations. Building an ensemble of DualNet further boosts the performance. Contrary to common belief, our method proved effective in both real images and abstract scenes, in spite of significantly different properties of respective domain. Our method was able to outperform previous state-of-the-art methods in real images category even without explicitly employing attention mechanism, and also outperformed our own state-of-the-art method in abstract scenes category, which recently won the first place in VQA Challenge 2016. | The real images category is currently by far the more popular and competitive task in VQA. @cite_7 introduced Ask Your Neurons. Unlike the baseline provided by @cite_1 , in which image features and question features are embedded into a common space at the last stage prior to classification, they built a system where image features are shared at each LSTM unit for processing question features. They also compared different operations for fusing input features, and concluded that summation performs better than multiplication. In our work, however, both summation and multiplication are performed, which demonstrates significant improvements. | {
"cite_N": [
"@cite_1",
"@cite_7"
],
"mid": [
"2950761309",
"2391839782"
],
"abstract": [
"We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing 0.25M images, 0.76M questions, and 10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines and methods for VQA are provided and compared with human performance. Our VQA demo is available on CloudCV (this http URL).",
"We address a question answering task on real-world images that is set up as a Visual Turing Test. By combining latest advances in image representation and natural language processing, we propose Ask Your Neurons, a scalable, jointly trained, end-to-end formulation to this problem. In contrast to previous efforts, we are facing a multi-modal problem where the language output (answer) is conditioned on visual and natural language inputs (image and question). We provide additional insights into the problem by analyzing how much information is contained only in the language part for which we provide a new human baseline. To study human consensus, which is related to the ambiguities inherent in this challenging task, we propose two novel metrics and collect additional answers which extend the original DAQUAR dataset to DAQUAR-Consensus. Moreover, we also extend our analysis to VQA, a large-scale question answering about images dataset, where we investigate some particular design choices and show the importance of stronger visual models. At the same time, we achieve strong performance of our model that still uses a global image representation. Finally, based on such analysis, we refine our Ask Your Neurons on DAQUAR, which also leads to a better performance on this challenging task."
]
} |
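The fusion comparison described in the row above (summation vs. multiplication, with DualNet performing both) comes down to a few array operations. A sketch with random vectors standing in for learned embeddings; the dimensionality is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
img_feat = rng.standard_normal(512)  # image embedding
qst_feat = rng.standard_normal(512)  # question embedding of matching size

# Ask Your Neurons compared fusion operations and kept summation only;
# the DualNet observation is that applying BOTH element-wise operations
# and concatenating the results preserves complementary information
# for the downstream answer classifier.
fused = np.concatenate([img_feat + qst_feat, img_feat * qst_feat])  # shape (1024,)
```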
1606.06108 | 2475269242 | Visual question answering (VQA) task not only bridges the gap between images and language, but also requires that specific contents within the image are understood as indicated by linguistic context of the question, in order to generate the accurate answers. Thus, it is critical to build an efficient embedding of images and texts. We implement DualNet, which fully takes advantage of discriminative power of both image and textual features by separately performing two operations. Building an ensemble of DualNet further boosts the performance. Contrary to common belief, our method proved effective in both real images and abstract scenes, in spite of significantly different properties of respective domain. Our method was able to outperform previous state-of-the-art methods in real images category even without explicitly employing attention mechanism, and also outperformed our own state-of-the-art method in abstract scenes category, which recently won the first place in VQA Challenge 2016. | Many recent papers reporting competitive results have relied heavily on various types of attention mechanisms. @cite_11 introduced stacked attention networks (SANs), which rely on the semantic representation of each question to search for relevant regions in the image. More specifically, they built a multiple-layer attention mechanism, which locates the relevant region multiple times so that a more accurate region of interest can be retrieved. | {
"cite_N": [
"@cite_11"
],
"mid": [
"2171810632"
],
"abstract": [
"This paper presents stacked attention networks (SANs) that learn to answer natural language questions from images. SANs use semantic representation of a question as query to search for the regions in an image that are related to the answer. We argue that image question answering (QA) often requires multiple steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively. Experiments conducted on four image QA data sets demonstrate that the proposed SANs significantly outperform previous state-of-the-art approaches. The visualization of the attention layers illustrates the progress that the SAN locates the relevant visual clues that lead to the answer of the question layer-by-layer."
]
} |
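The stacked (multi-hop) attention idea described in the row above can be sketched with plain dot-product scoring. Note the actual SAN scores regions with a small single-layer tanh network; that detail is simplified away here, and all shapes are illustrative:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
regions = rng.standard_normal((49, 256))  # e.g. a 7x7 grid of region features
query = rng.standard_normal(256)          # question representation

# Two attention "hops": score every region against the current query,
# pool the regions by those weights, then refine the query with the
# pooled visual context so the next hop can attend more precisely.
for _ in range(2):
    weights = softmax(regions @ query)  # one relevance weight per region
    context = weights @ regions         # attention-weighted image summary
    query = query + context             # refined query for the next hop
```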
1606.06108 | 2475269242 | Visual question answering (VQA) task not only bridges the gap between images and language, but also requires that specific contents within the image are understood as indicated by linguistic context of the question, in order to generate the accurate answers. Thus, it is critical to build an efficient embedding of images and texts. We implement DualNet, which fully takes advantage of discriminative power of both image and textual features by separately performing two operations. Building an ensemble of DualNet further boosts the performance. Contrary to common belief, our method proved effective in both real images and abstract scenes, in spite of significantly different properties of respective domain. Our method was able to outperform previous state-of-the-art methods in real images category even without explicitly employing attention mechanism, and also outperformed our own state-of-the-art method in abstract scenes category, which recently won the first place in VQA Challenge 2016. | In a similar manner, @cite_14 attempts to locate relevant regions in the image. They map the textual queries to features from different regions by embedding them to a common space and comparing their relevance via inner product. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2179022885"
],
"abstract": [
"We present a method that learns to answer visual questions by selecting image regions relevant to the text-based query. Our method exhibits significant improvements in answering questions such as \"what color,\" where it is necessary to evaluate a specific location, and \"what room,\" where it selectively identifies informative image regions. Our model is tested on the VQA dataset which is the largest human-annotated visual question answering dataset to our knowledge."
]
} |
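The relevance scoring described in the row above, embedding text queries and region features into a common space and comparing them by inner product, reduces to two projections and a matrix-vector product. Random matrices stand in here for the learned projections, and all dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
regions = rng.standard_normal((14, 512))  # visual features, one row per region
query = rng.standard_normal(300)          # text-query embedding (different size)

# Project both modalities into a shared 64-d space, then rank regions
# by their inner product with the projected query.
W_img = rng.standard_normal((512, 64))
W_txt = rng.standard_normal((300, 64))
scores = (regions @ W_img) @ (W_txt.T @ query)  # one score per region
best = int(np.argmax(scores))                   # most relevant region index
```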