aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1510.08012 | 1836794077 | Structure-from-motion (SfM) largely relies on feature tracking. In image sequences, if disjointed tracks caused by objects moving in and out of the field of view, occasional occlusion, or image noise are not handled well, the corresponding SfM could be affected. This problem becomes more severe for large-scale scenes, which typically require capturing multiple sequences to cover the whole scene. In this paper, we propose an efficient non-consecutive feature tracking framework to match interrupted tracks distributed in different subsequences or even in different videos. Our framework consists of steps for solving the feature “dropout” problem when indistinctive structures, noise, or large image distortion exist, and for rapidly recognizing and joining common features located in different subsequences. In addition, we contribute an effective segment-based coarse-to-fine SfM algorithm for robustly handling large data sets. Experimental results on challenging video data demonstrate the effectiveness of the proposed system. | State-of-the-art large-scale SfM methods can handle millions of images on a single PC in one day @cite_59 . To this end, large image data are separated into a number of independent submaps, each of which is optimized independently. Steedly et al. @cite_10 proposed a partitioning approach to decompose a large-scale optimization into multiple better-conditioned subproblems. Clemente et al. @cite_51 proposed building local maps independently and stitching them together with a hierarchical approach. | {
"cite_N": [
"@cite_51",
"@cite_10",
"@cite_59"
],
"mid": [
"120566300",
"1974260711",
"2099443716"
],
"abstract": [
"This paper presents a method for Simultaneous Localization and Mapping (SLAM) relying on a monocular camera as the only sensor which is able to build outdoor, closed-loop maps much larger than previously achieved with such input. Our system, based on the Hierarchical Map approach [1], builds independent local maps in real-time using the EKF-SLAM technique and the inverse depth representation proposed in [2]. The main novelty in the local mapping process is the use of a data association technique that greatly improves its robustness in dynamic and complex environments. A new visual map matching algorithm stitches these maps together and is able to detect large loops automatically, taking into account the unobservability of scale intrinsic to pure monocular SLAM. The loop closing constraint is applied at the upper level of the Hierarchical Map in near real-time. We present experimental results demonstrating monocular SLAM as a human carries a camera over long walked trajectories in outdoor areas with people and other clutter, even in the more difficult case of forward-looking camera, and show the closing of loops of several hundred meters.",
"We propose a spectral partitioning approach for large-scale optimization problems, specifically structure from motion. In structure from motion, partitioning methods reduce the problem into smaller and better conditioned subproblems which can be efficiently optimized. Our partitioning method uses only the Hessian of the reprojection error and its eigenvector. We show that partitioned systems that preserve the eigenvectors corresponding to small eigenvalues result in lower residual error when optimized. We create partitions by clustering the entries of the eigenvectors of the Hessian corresponding to small eigenvalues. This is a more general technique than relying on domain knowledge and heuristics such as bottom-up structure from motion approaches. Simultaneously, it takes advantage of more information than generic matrix partitioning algorithms.",
"This paper introduces an approach for dense 3D reconstruction from unregistered Internet-scale photo collections with about 3 million images within the span of a day on a single PC (\"cloudless\"). Our method advances image clustering, stereo, stereo fusion and structure from motion to achieve high computational performance. We leverage geometric and appearance constraints to obtain a highly parallel implementation on modern graphics processors and multi-core architectures. This leads to two orders of magnitude higher performance on an order of magnitude larger dataset than competing state-of-the-art approaches."
]
} |
1510.08012 | 1836794077 | Structure-from-motion (SfM) largely relies on feature tracking. In image sequences, if disjointed tracks caused by objects moving in and out of the field of view, occasional occlusion, or image noise are not handled well, the corresponding SfM could be affected. This problem becomes more severe for large-scale scenes, which typically require capturing multiple sequences to cover the whole scene. In this paper, we propose an efficient non-consecutive feature tracking framework to match interrupted tracks distributed in different subsequences or even in different videos. Our framework consists of steps for solving the feature “dropout” problem when indistinctive structures, noise, or large image distortion exist, and for rapidly recognizing and joining common features located in different subsequences. In addition, we contribute an effective segment-based coarse-to-fine SfM algorithm for robustly handling large data sets. Experimental results on challenging video data demonstrate the effectiveness of the proposed system. | @cite_11 proposed an out-of-core bundle adjustment (BA) for large-scale SfM. This method decomposes the data into multiple submaps, each of which has its own local coordinate system for optimization in parallel. For global optimization, an out-of-core implementation is adopted. Snavely et al. @cite_19 proposed speeding up reconstruction by selecting a skeletal image set for SfM and then adding the other images with pose estimation. Similarly, Konolige and Agrawal @cite_43 selected a skeletal frame set and used reduced relative constraints for closing large loops. Each skeleton frame can actually be considered a submap. A similar scheme is applied to iconic views @cite_52 , which are generated by clustering images with similar gist features @cite_5 . In our work, a segment-based scheme is adopted, which first estimates SfM for each sequence independently, and then aligns the recovered submaps. 
Based on the estimation errors, we split each sequence into multiple segments and perform segment-based refinement. This strategy can effectively handle large data and quickly reduce estimation errors during optimization. | {
"cite_N": [
"@cite_52",
"@cite_19",
"@cite_43",
"@cite_5",
"@cite_11"
],
"mid": [
"1597120896",
"",
"2121013842",
"1566135517",
"2143116602"
],
"abstract": [
"This paper presents an approach for modeling landmark sites such as the Statue of Liberty based on large-scale contaminated image collections gathered from the Internet. Our system combines 2D appearance and 3D geometric constraints to efficiently extract scene summaries, build 3D models, and recognize instances of the landmark in new test images. We start by clustering images using low-dimensional global \"gist\" descriptors. Next, we perform geometric verification to retain only the clusters whose images share a common 3D structure. Each valid cluster is then represented by a single iconic view, and geometric relationships between iconic views are captured by an iconic scene graph. In addition to serving as a compact scene summary, this graph is used to guide structure from motion to efficiently produce 3D models of the different aspects of the landmark. The set of iconic images is also used for recognition, i.e., determining whether new test images contain the landmark. Results on three data sets consisting of tens of thousands of images demonstrate the potential of the proposed approach.",
"",
"Many successful indoor mapping techniques employ frame-to-frame matching of laser scans to produce detailed local maps as well as the closing of large loops. In this paper, we propose a framework for applying the same techniques to visual imagery. We match visual frames with large numbers of point features, using classic bundle adjustment techniques from computational vision, but we keep only relative frame pose information (a skeleton). The skeleton is a reduced nonlinear system that is a faithful approximation of the larger system and can be used to solve large loop closures quickly, as well as forming a backbone for data association and local registration. We illustrate the workings of the system with large outdoor datasets (10 km), showing large-scale loop closure and precise localization in real time.",
"In this paper, we propose a computational model of the recognition of real world scenes that bypasses the segmentation and the processing of individual objects or regions. The procedure is based on a very low dimensional representation of the scene, that we term the Spatial Envelope. We propose a set of perceptual dimensions (naturalness, openness, roughness, expansion, ruggedness) that represent the dominant spatial structure of a scene. Then, we show that these dimensions may be reliably estimated using spectral and coarsely localized information. The model generates a multidimensional space in which scenes sharing membership in semantic categories (e.g., streets, highways, coasts) are projected closed together. The performance of the spatial envelope model shows that specific information about object shape or identity is not a requirement for scene categorization and that modeling a holistic representation of the scene informs about its probable semantic category.",
"Large-scale 3D reconstruction has recently received much attention from the computer vision community. Bundle adjustment is a key component of 3D reconstruction problems. However, traditional bundle adjustment algorithms require a considerable amount of memory and computational resources. In this paper, we present an extremely efficient, inherently out-of-core bundle adjustment algorithm. We decouple the original problem into several submaps that have their own local coordinate systems and can be optimized in parallel. A key contribution to our algorithm is making as much progress towards optimizing the global non-linear cost function as possible using the fragments of the reconstruction that are currently in core memory. This allows us to converge with very few global sweeps (often only two) through the entire reconstruction. We present experimental results on large-scale 3D reconstruction datasets, both synthetic and real."
]
} |
1510.08012 | 1836794077 | Structure-from-motion (SfM) largely relies on feature tracking. In image sequences, if disjointed tracks caused by objects moving in and out of the field of view, occasional occlusion, or image noise are not handled well, the corresponding SfM could be affected. This problem becomes more severe for large-scale scenes, which typically require capturing multiple sequences to cover the whole scene. In this paper, we propose an efficient non-consecutive feature tracking framework to match interrupted tracks distributed in different subsequences or even in different videos. Our framework consists of steps for solving the feature “dropout” problem when indistinctive structures, noise, or large image distortion exist, and for rapidly recognizing and joining common features located in different subsequences. In addition, we contribute an effective segment-based coarse-to-fine SfM algorithm for robustly handling large data sets. Experimental results on challenging video data demonstrate the effectiveness of the proposed system. | Another line of research is to improve large-scale BA, which is a core component of SfM. Agarwal et al. @cite_1 pointed out that connectivity graphs of Internet image collections are generally much less structured and accordingly presented an inexact Newton-type BA algorithm. To speed up large-scale BA, Wu et al. @cite_8 utilized multi-core CPUs or GPUs and presented a parallel inexact Newton BA algorithm. Wu et al. @cite_38 also proposed preemptive feature matching, which reduces the number of image pairs to match, and an incremental SfM that performs full BA only when the model has grown large enough. Pose graph optimization @cite_29 @cite_62 @cite_3 has also been widely used in real-time SfM and SLAM @cite_58 @cite_54 ; it uses the relative-pose constraints between cameras and is more efficient than full BA. | {
"cite_N": [
"@cite_38",
"@cite_62",
"@cite_8",
"@cite_29",
"@cite_54",
"@cite_1",
"@cite_3",
"@cite_58"
],
"mid": [
"2105303354",
"169439271",
"2001790138",
"2107905629",
"1612997784",
"1484371059",
"2153054365",
"612478963"
],
"abstract": [
"The time complexity of incremental structure from motion (SfM) is often known as O(n^4) with respect to the number of cameras. As bundle adjustment (BA) being significantly improved recently by preconditioned conjugate gradient (PCG), it is worth revisiting how fast incremental SfM is. We introduce a novel BA strategy that provides good balance between speed and accuracy. Through algorithm analysis and extensive experiments, we show that incremental SfM requires only O(n) time on many major steps including BA. Our method maintains high accuracy by regularly re-triangulating the feature matches that initially fail to triangulate. We test our algorithm on large photo collections and long video sequences with various settings, and show that our method offers state of the art performance for large-scale reconstructions. The presented algorithm is available as part of VisualSFM at http: homes.cs.washington.edu ccwu vsfm .",
"State of the art visual SLAM systems have recently been presented which are capable of accurate, large-scale and real-time performance, but most of these require stereo vision. Important application areas in robotics and beyond open up if similar performance can be demonstrated using monocular vision, since a single camera will always be cheaper, more compact and easier to calibrate than a multi-camera rig. With high quality estimation, a single camera moving through a static scene of course effectively provides its own stereo geometry via frames distributed over time. However, a classic issue with monocular visual SLAM is that due to the purely projective nature of a single camera, motion estimates and map structure can only be recovered up to scale. Without the known inter-camera distance of a stereo rig to serve as an anchor, the scale of locally constructed map portions and the corresponding motion estimates is therefore liable to drift over time. In this paper we describe a new near real-time visual SLAM system which adopts the continuous keyframe optimisation approach of the best current stereo systems, but accounts for the additional challenges presented by monocular input. In particular, we present a new pose-graph optimisation technique which allows for the efficient correction of rotation, translation and scale drift at loop closures. Especially, we describe the Lie group of similarity transformations and its relation to the corresponding Lie algebra. We also present in detail the system’s new image processing front-end which is able accurately to track hundreds of features per frame, and a filter-based approach for feature initialisation within keyframe-based SLAM. Our approach is proven via large-scale simulation and real-world experiments where a camera completes large looped trajectories.",
"We present the design and implementation of new inexact Newton type Bundle Adjustment algorithms that exploit hardware parallelism for efficiently solving large scale 3D scene reconstruction problems. We explore the use of multicore CPU as well as multicore GPUs for this purpose. We show that overcoming the severe memory and bandwidth limitations of current generation GPUs not only leads to more space efficient algorithms, but also to surprising savings in runtime. Our CPU based system is up to ten times and our GPU based system is up to thirty times faster than the current state of the art methods [1], while maintaining comparable convergence behavior. The code and additional results are available at http: grail.cs. washington.edu projects mcba.",
"A robot exploring an environment can estimate its own motion and the relative positions of features in the environment. Simultaneous localization and mapping (SLAM) algorithms attempt to fuse these estimates to produce a map and a robot trajectory. The constraints are generally non-linear, thus SLAM can be viewed as a non-linear optimization problem. The optimization can be difficult, due to poor initial estimates arising from odometry data, and due to the size of the state space. We present a fast non-linear optimization algorithm that rapidly recovers the robot trajectory, even when given a poor initial estimate. Our approach uses a variant of stochastic gradient descent on an alternative state-space representation that has good stability and computational properties. We compare our algorithm to several others, using both real and synthetic data sets",
"This paper presents ORB-SLAM, a feature-based monocular simultaneous localization and mapping (SLAM) system that operates in real time, in small and large indoor and outdoor environments. The system is robust to severe motion clutter, allows wide baseline loop closing and relocalization, and includes full automatic initialization. Building on excellent algorithms of recent years, we designed from scratch a novel system that uses the same features for all SLAM tasks: tracking, mapping, relocalization, and loop closing. A survival of the fittest strategy that selects the points and keyframes of the reconstruction leads to excellent robustness and generates a compact and trackable map that only grows if the scene content changes, allowing lifelong operation. We present an exhaustive evaluation in 27 sequences from the most popular datasets. ORB-SLAM achieves unprecedented performance with respect to other state-of-the-art monocular SLAM approaches. For the benefit of the community, we make the source code public.",
"We present the design and implementation of a new inexact Newton type algorithm for solving large-scale bundle adjustment problems with tens of thousands of images. We explore the use of Conjugate Gradients for calculating the Newton step and its performance as a function of some simple and computationally efficient preconditioners. We show that the common Schur complement trick is not limited to factorization-based methods and that it can be interpreted as a form of preconditioning. Using photos from a street-side dataset and several community photo collections, we generate a variety of bundle adjustment problems and use them to evaluate the performance of six different bundle adjustment algorithms. Our experiments show that truncated Newton methods, when paired with relatively simple preconditioners, offer state of the art performance for large-scale bundle adjustment. The code, test problems and detailed performance data are available at http: grail.cs.washington.edu projects bal.",
"Many popular problems in robotics and computer vision including various types of simultaneous localization and mapping (SLAM) or bundle adjustment (BA) can be phrased as least squares optimization of an error function that can be represented by a graph. This paper describes the general structure of such problems and presents g2o, an open-source C++ framework for optimizing graph-based nonlinear error functions. Our system has been designed to be easily extensible to a wide range of problems and a new problem typically can be specified in a few lines of code. The current implementation provides solutions to several variants of SLAM and BA. We provide evaluations on a wide range of real-world and simulated datasets. The results demonstrate that while being general g2o offers a performance comparable to implementations of state-of-the-art approaches for the specific problems.",
"We propose a direct (feature-less) monocular SLAM algorithm which, in contrast to current state-of-the-art regarding direct methods, allows to build large-scale, consistent maps of the environment. Along with highly accurate pose estimation based on direct image alignment, the 3D environment is reconstructed in real-time as pose-graph of keyframes with associated semi-dense depth maps. These are obtained by filtering over a large number of pixelwise small-baseline stereo comparisons. The explicitly scale-drift aware formulation allows the approach to operate on challenging sequences including large variations in scene scale. Major enablers are two key novelties: (1) a novel direct tracking method which operates on ( sim (3) ), thereby explicitly detecting scale-drift, and (2) an elegant probabilistic solution to include the effect of noisy depth values into tracking. The resulting direct monocular SLAM system runs in real-time on a CPU."
]
} |
1510.08012 | 1836794077 | Structure-from-motion (SfM) largely relies on feature tracking. In image sequences, if disjointed tracks caused by objects moving in and out of the field of view, occasional occlusion, or image noise are not handled well, the corresponding SfM could be affected. This problem becomes more severe for large-scale scenes, which typically require capturing multiple sequences to cover the whole scene. In this paper, we propose an efficient non-consecutive feature tracking framework to match interrupted tracks distributed in different subsequences or even in different videos. Our framework consists of steps for solving the feature “dropout” problem when indistinctive structures, noise, or large image distortion exist, and for rapidly recognizing and joining common features located in different subsequences. In addition, we contribute an effective segment-based coarse-to-fine SfM algorithm for robustly handling large data sets. Experimental results on challenging video data demonstrate the effectiveness of the proposed system. | Most existing SfM approaches achieve reconstruction in an incremental way, which risks drift or local minima when dealing with large-scale image sets. Crandall et al. @cite_24 proposed combining discrete and continuous optimization to yield a better initialization for BA. By formulating SfM estimation as a labeling problem, belief propagation is employed to estimate camera parameters and 3D points. In the continuous step, Levenberg-Marquardt nonlinear optimization with additional constraints is used. This method is restricted to urban scenes, and assumes that the vertical vanishing point can be detected for rotation estimation, similar to the method proposed by Sinha et al. @cite_55 . It also needs to leverage the geotags contained in the collected images and requires complex discrete optimization. In contrast, our segment-based scheme can run on a common desktop PC with limited memory, even for large video data. | {
"cite_N": [
"@cite_24",
"@cite_55"
],
"mid": [
"2013472030",
"117099968"
],
"abstract": [
"Recent work in structure from motion (SfM) has successfully built 3D models from large unstructured collections of images downloaded from the Internet. Most approaches use incremental algorithms that solve progressively larger bundle adjustment problems. These incremental techniques scale poorly as the number of images grows, and can drift or fall into bad local minima. We present an alternative formulation for SfM based on finding a coarse initial solution using a hybrid discrete-continuous optimization, and then improving that solution using bundle adjustment. The initial optimization step uses a discrete Markov random field (MRF) formulation, coupled with a continuous Levenberg-Marquardt refinement. The formulation naturally incorporates various sources of information about both the cameras and the points, including noisy geotags and vanishing point estimates. We test our method on several large-scale photo collections, including one with measured camera positions, and show that it can produce models that are similar to or better than those produced with incremental bundle adjustment, but more robustly and in a fraction of the time.",
"We present a new structure from motion (Sfm) technique based on point and vanishing point (VP) matches in images. First, all global camera rotations are computed from VP matches as well as relative rotation estimates obtained from pairwise image matches. A new multi-staged linear technique is then used to estimate all camera translations and 3D points simultaneously. The proposed method involves first performing pairwise reconstructions, then robustly aligning these in pairs, and finally aligning all of them globally by simultaneously estimating their unknown relative scales and translations. In doing so, measurements inconsistent in three views are efficiently removed. Unlike sequential Sfm, the proposed method treats all images equally, is easy to parallelize and does not require intermediate bundle adjustments. There is also a reduction of drift and significant speedups up to two order of magnitude over sequential Sfm. We compare our method with a standard Sfm pipeline [1] and demonstrate that our linear estimates are accurate on a variety of datasets, and can serve as good initializations for final bundle adjustment. Because we exploit VPs when available, our approach is particularly well-suited to the reconstruction of man-made scenes."
]
} |
1510.08012 | 1836794077 | Structure-from-motion (SfM) largely relies on feature tracking. In image sequences, if disjointed tracks caused by objects moving in and out of the field of view, occasional occlusion, or image noise are not handled well, the corresponding SfM could be affected. This problem becomes more severe for large-scale scenes, which typically require capturing multiple sequences to cover the whole scene. In this paper, we propose an efficient non-consecutive feature tracking framework to match interrupted tracks distributed in different subsequences or even in different videos. Our framework consists of steps for solving the feature “dropout” problem when indistinctive structures, noise, or large image distortion exist, and for rapidly recognizing and joining common features located in different subsequences. In addition, we contribute an effective segment-based coarse-to-fine SfM algorithm for robustly handling large data sets. Experimental results on challenging video data demonstrate the effectiveness of the proposed system. | Real-time monocular SLAM methods @cite_42 @cite_30 @cite_58 @cite_54 typically perform tracking and mapping in parallel threads. The methods of @cite_58 @cite_54 can close loops efficiently for large-scale scenes. However, they could still have difficulty in directly handling multiple sequences, as demonstrated in Figs. and . | {
"cite_N": [
"@cite_30",
"@cite_54",
"@cite_42",
"@cite_58"
],
"mid": [
"2068763856",
"1612997784",
"2151290401",
"612478963"
],
"abstract": [
"We present a novel real-time monocular SLAM system which can robustly work in dynamic environments. Different to the traditional methods, our system allows parts of the scene to be dynamic or the whole scene to gradually change. The key contribution is that we propose a novel online keyframe representation and updating method to adaptively model the dynamic environments, where the appearance or structure changes can be effectively detected and handled. We reliably detect the changed features by projecting them from the keyframes to current frame for appearance and structure comparison. The appearance change due to occlusions also can be reliably detected and handled. The keyframes with large changed areas will be replaced by newly selected frames. In addition, we propose a novel prior-based adaptive RANSAC algorithm (PARSAC) to efficiently remove outliers even when the inlier ratio is rather low, so that the camera pose can be reliably estimated even in very challenging situations. Experimental results demonstrate that the proposed system can robustly work in dynamic environments and outperforms the state-of-the-art SLAM systems (e.g. PTAM).",
"This paper presents ORB-SLAM, a feature-based monocular simultaneous localization and mapping (SLAM) system that operates in real time, in small and large indoor and outdoor environments. The system is robust to severe motion clutter, allows wide baseline loop closing and relocalization, and includes full automatic initialization. Building on excellent algorithms of recent years, we designed from scratch a novel system that uses the same features for all SLAM tasks: tracking, mapping, relocalization, and loop closing. A survival of the fittest strategy that selects the points and keyframes of the reconstruction leads to excellent robustness and generates a compact and trackable map that only grows if the scene content changes, allowing lifelong operation. We present an exhaustive evaluation in 27 sequences from the most popular datasets. ORB-SLAM achieves unprecedented performance with respect to other state-of-the-art monocular SLAM approaches. For the benefit of the community, we make the source code public.",
"This paper presents a method of estimating camera pose in an unknown scene. While this has previously been attempted by adapting SLAM algorithms developed for robotic exploration, we propose a system specifically designed to track a hand-held camera in a small AR workspace. We propose to split tracking and mapping into two separate tasks, processed in parallel threads on a dual-core computer: one thread deals with the task of robustly tracking erratic hand-held motion, while the other produces a 3D map of point features from previously observed video frames. This allows the use of computationally expensive batch optimisation techniques not usually associated with real-time operation: The result is a system that produces detailed maps with thousands of landmarks which can be tracked at frame-rate, with an accuracy and robustness rivalling that of state-of-the-art model-based systems.",
"We propose a direct (feature-less) monocular SLAM algorithm which, in contrast to current state-of-the-art regarding direct methods, allows to build large-scale, consistent maps of the environment. Along with highly accurate pose estimation based on direct image alignment, the 3D environment is reconstructed in real-time as pose-graph of keyframes with associated semi-dense depth maps. These are obtained by filtering over a large number of pixelwise small-baseline stereo comparisons. The explicitly scale-drift aware formulation allows the approach to operate on challenging sequences including large variations in scene scale. Major enablers are two key novelties: (1) a novel direct tracking method which operates on ( sim (3) ), thereby explicitly detecting scale-drift, and (2) an elegant probabilistic solution to include the effect of noisy depth values into tracking. The resulting direct monocular SLAM system runs in real-time on a CPU."
]
} |
1510.07867 | 2949624713 | For people, first impressions of someone are of determining importance. They are hard to alter through further information. This begs the question of whether a computer can reach the same judgement. Earlier research has already pointed out that age, gender, and average attractiveness can be estimated with reasonable precision. We improve the state of the art, but also predict - based on someone's known preferences - how much that particular person is attracted to a novel face. Our computational pipeline comprises a face detector, convolutional neural networks for the extraction of deep features, standard support vector regression for gender, age and facial beauty, and - as the main novelties - visually regularized collaborative filtering to infer inter-person preferences as well as a novel regression technique for handling visual queries without rating history. We validate the method using a very large dataset from a dating site as well as images from celebrities. Our experiments yield convincing results, i.e. we predict 76% of the ratings correctly solely based on an image, and reveal some sociologically relevant conclusions. We also validate our collaborative filtering solution on the standard MovieLens rating dataset, augmented with movie posters, to predict an individual's movie rating. We demonstrate our algorithms on howhot.io which went viral around the Internet with more than 50 million pictures evaluated in the first month. | Image features. Instead of handcrafted features like SIFT, HoG, or Gabor filters, we use learned features obtained using neural networks @cite_9 @cite_0 @cite_18 . The latter have shown impressive performance in recent years. Such features have already been used for age and gender estimation in @cite_5 @cite_31 . | {
"cite_N": [
"@cite_18",
"@cite_9",
"@cite_0",
"@cite_5",
"@cite_31"
],
"mid": [
"1686810756",
"2102605133",
"",
"2239239723",
"1972960842"
],
"abstract": [
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.",
"",
"In this paper we tackle the estimation of apparent age in still face images with deep learning. Our convolutional neural networks (CNNs) use the VGG-16 architecture [13] and are pretrained on ImageNet for image classification. In addition, due to the limited number of apparent age annotated images, we explore the benefit of finetuning over crawled Internet face images with available age. We crawled 0.5 million images of celebrities from IMDB and Wikipedia that we make public. This is the largest public dataset for age prediction to date. We pose the age regression problem as a deep classification problem followed by a softmax expected value refinement and show improvements over direct regression training of CNNs. Our proposed method, Deep EXpectation (DEX) of apparent age, first detects the face in the test image and then extracts the CNN predictions from an ensemble of 20 networks on the cropped face. The CNNs of DEX were finetuned on the crawled images and then on the provided images with apparent age annotations. DEX does not use explicit facial landmarks. Our DEX is the winner (1st place) of the ChaLearn LAP 2015 challenge on apparent age estimation with 115 registered teams, significantly outperforming the human reference.",
"Human age provides key demographic information. It is also considered as an important soft biometric trait for human identification or search. Compared to other pattern recognition problems (e.g., object classification, scene categorization), age estimation is much more challenging since the difference between facial images with age variations can be more subtle and the process of aging varies greatly among different individuals. In this work, we investigate deep learning techniques for age estimation based on the convolutional neural network (CNN). A new framework for age feature extraction based on the deep learning model is built. Compared to previous models based on CNN, we use feature maps obtained in different layers for our estimation work instead of using the feature obtained at the top layer. Additionally, a manifold learning algorithm is incorporated in the proposed scheme and this improves the performance significantly. Furthermore, we also evaluate different classification and regression schemes in estimating age using the deep learned aging pattern (DLA). To the best of our knowledge, this is the first time that deep learning technique is introduced and applied to solve the age estimation problem. Experimental results on two datasets show that the proposed approach is significantly better than the state-of-the-art."
]
} |
1510.07867 | 2949624713 | For people, first impressions of someone are of determining importance. They are hard to alter through further information. This begs the question if a computer can reach the same judgement. Earlier research has already pointed out that age, gender, and average attractiveness can be estimated with reasonable precision. We improve the state-of-the-art, but also predict - based on someone's known preferences - how much that particular person is attracted to a novel face. Our computational pipeline comprises a face detector, convolutional neural networks for the extraction of deep features, standard support vector regression for gender, age and facial beauty, and - as the main novelties - visual regularized collaborative filtering to infer inter-person preferences as well as a novel regression technique for handling visual queries without rating history. We validate the method using a very large dataset from a dating site as well as images from celebrities. Our experiments yield convincing results, i.e. we predict 76% of the ratings correctly solely based on an image, and reveal some sociologically relevant conclusions. We also validate our collaborative filtering solution on the standard MovieLens rating dataset, augmented with movie posters, to predict an individual's movie rating. We demonstrate our algorithms on howhot.io which went viral around the Internet with more than 50 million pictures evaluated in the first month. | Demographics estimation. Multiple demographic properties such as age, gender, and ethnicity have been extracted from faces. A survey on age prediction is provided by Fu et al. @cite_40 and on gender classification by Ng et al. @cite_28 . Kumar et al. @cite_32 investigate image attribute classifiers in the context of face verification. Some approaches need face shape models or facial landmarks @cite_22 @cite_15 , others are meant to work in the wild @cite_30 @cite_7 @cite_5 @cite_31 but still assume face localization. 
Generally, the former approaches reach better performance as they use additional information. The errors in model fitting or landmark localization are critical. Moreover, they require supervised training, detailed annotated datasets, and higher computation times. On top of the extracted image features a machine learning approach such as SVM @cite_8 is employed to learn a demographics prediction model which is then applied to new queries. | {
"cite_N": [
"@cite_30",
"@cite_31",
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_28",
"@cite_32",
"@cite_40",
"@cite_5",
"@cite_15"
],
"mid": [
"2009088607",
"1972960842",
"2036565334",
"2075875861",
"2148603752",
"1828472411",
"",
"2105026179",
"2239239723",
"2041649634"
],
"abstract": [
"In this paper, we propose an ordinal hyperplane ranking algorithm called OHRank, which estimates human ages via facial images. The design of the algorithm is based on the relative order information among the age labels in a database. Each ordinal hyperplane separates all the facial images into two groups according to the relative order, and a cost-sensitive property is exploited to find better hyperplanes based on the classification costs. Human ages are inferred by aggregating a set of preferences from the ordinal hyperplanes with their cost sensitivities. Our experimental results demonstrate that the proposed approach outperforms conventional multiclass-based and regression-based approaches as well as recently developed ranking-based age estimation approaches.",
"Human age provides key demographic information. It is also considered as an important soft biometric trait for human identification or search. Compared to other pattern recognition problems (e.g., object classification, scene categorization), age estimation is much more challenging since the difference between facial images with age variations can be more subtle and the process of aging varies greatly among different individuals. In this work, we investigate deep learning techniques for age estimation based on the convolutional neural network (CNN). A new framework for age feature extraction based on the deep learning model is built. Compared to previous models based on CNN, we use feature maps obtained in different layers for our estimation work instead of using the feature obtained at the top layer. Additionally, a manifold learning algorithm is incorporated in the proposed scheme and this improves the performance significantly. Furthermore, we also evaluate different classification and regression schemes in estimating age using the deep learned aging pattern (DLA). To the best of our knowledge, this is the first time that deep learning technique is introduced and applied to solve the age estimation problem. Experimental results on two datasets show that the proposed approach is significantly better than the state-of-the-art.",
"There has been a growing interest in automatic age estimation from facial images due to a variety of potential applications in law enforcement, security control, and human-computer interaction. However, despite advances in automatic age estimation, it remains a challenging problem. This is because the face aging process is determined not only by intrinsic factors, e.g. genetic factors, but also by extrinsic factors, e.g. lifestyle, expression, and environment. As a result, different people with the same age can have quite different appearances due to different rates of facial aging. We propose a hierarchical approach for automatic age estimation, and provide an analysis of how aging influences individual facial components. Experimental results on the FG-NET, MORPH Album2, and PCSO databases show that eyes and nose are more informative than the other facial components in automatic age estimation. We also study the ability of humans to estimate age using data collected via crowdsourcing, and show that the cumulative score (CS) within 5-year mean absolute error (MAE) of our method is better than the age estimates provided by humans.",
"A number of computer vision problems such as human age estimation, crowd density estimation and body face pose (view angle) estimation can be formulated as a regression problem by learning a mapping function between a high dimensional vector-formed feature input and a scalar-valued output. Such a learning problem is made difficult due to sparse and imbalanced training data and large feature variations caused by both uncertain viewing conditions and intrinsic ambiguities between observable visual features and the scalar values to be estimated. Encouraged by the recent success in using attributes for solving classification problems with sparse training data, this paper introduces a novel cumulative attribute concept for learning a regression model when only sparse and imbalanced data are available. More precisely, low-level visual features extracted from sparse and imbalanced image samples are mapped onto a cumulative attribute space where each dimension has clearly defined semantic interpretation (a label) that captures how the scalar output value (e.g. age, people count) changes continuously and cumulatively. Extensive experiments show that our cumulative attribute framework gains notable advantage on accuracy for both age estimation and crowd counting when compared against conventional regression models, especially when the labelled training data is sparse with imbalanced sampling.",
"A comprehensive look at learning and generalization theory. The statistical theory of learning and generalization concerns the problem of choosing desired functions on the basis of empirical data. Highly applicable to a variety of computer science and robotics fields, this book offers lucid coverage of the theory as a whole. Presenting a method for determining the necessary and sufficient conditions for consistency of learning process, the author covers function estimates from small data pools, applying these estimations to real-life problems, and much more.",
"Gender is an important demographic attribute of people. This paper provides a survey of human gender recognition in computer vision. A review of approaches exploiting information from face and whole body (either from a still image or gait sequence) is presented. We highlight the challenges faced and survey the representative methods of these approaches. Based on the results, good performance have been achieved for datasets captured under controlled environments, but there is still much work that can be done to improve the robustness of gender recognition under real-life environments.",
"",
"Human age, as an important personal trait, can be directly inferred by distinct patterns emerging from the facial appearance. Derived from rapid advances in computer graphics and machine vision, computer-based age synthesis and estimation via faces have become particularly prevalent topics recently because of their explosively emerging real-world applications, such as forensic art, electronic customer relationship management, security control and surveillance monitoring, biometrics, entertainment, and cosmetology. Age synthesis is defined to rerender a face image aesthetically with natural aging and rejuvenating effects on the individual face. Age estimation is defined to label a face image automatically with the exact age (year) or the age group (year range) of the individual face. Because of their particularity and complexity, both problems are attractive yet challenging to computer-based application system designers. Large efforts from both academia and industry have been devoted in the last a few decades. In this paper, we survey the complete state-of-the-art techniques in the face image-based age synthesis and estimation topics. Existing models, popular algorithms, system performances, technical difficulties, popular face aging databases, evaluation protocols, and promising future directions are also provided with systematic discussions.",
"In this paper we tackle the estimation of apparent age in still face images with deep learning. Our convolutional neural networks (CNNs) use the VGG-16 architecture [13] and are pretrained on ImageNet for image classification. In addition, due to the limited number of apparent age annotated images, we explore the benefit of finetuning over crawled Internet face images with available age. We crawled 0.5 million images of celebrities from IMDB and Wikipedia that we make public. This is the largest public dataset for age prediction to date. We pose the age regression problem as a deep classification problem followed by a softmax expected value refinement and show improvements over direct regression training of CNNs. Our proposed method, Deep EXpectation (DEX) of apparent age, first detects the face in the test image and then extracts the CNN predictions from an ensemble of 20 networks on the cropped face. The CNNs of DEX were finetuned on the crawled images and then on the provided images with apparent age annotations. DEX does not use explicit facial landmarks. Our DEX is the winner (1st place) of the ChaLearn LAP 2015 challenge on apparent age estimation with 115 registered teams, significantly outperforming the human reference.",
"In this paper, we review the major approaches to multimodal human-computer interaction, giving an overview of the field from a computer vision perspective. In particular, we focus on body, gesture, gaze, and affective interaction (facial expression recognition and emotion in audio). We discuss user and task modeling, and multimodal fusion, highlighting challenges, open issues, and emerging applications for multimodal human-computer interaction (MMHCI) research."
]
} |
1510.07880 | 2296125968 | A fundamental component of networking infrastructure is the policy, used in routing tables and firewalls. Accordingly, there has been extensive study of policies. However, the theory of such policies indicates that the size of the decision tree for a policy is very large ( O((2n)^d), where the policy has n rules and examines d features of packets). If this was indeed the case, the existing algorithms to detect anomalies, conflicts, and redundancies would not be tractable for practical policies (say, n = 1000 and d = 10). In this paper, we clear up this apparent paradox. Using the concept of 'rules in play', we calculate the actual upper bound on the size of the decision tree, and demonstrate how three other factors - narrow fields, singletons, and all-matches - make the problem tractable in practice. We also show how this concept may be used to solve an open problem: pruning a policy to the minimum possible number of rules, without changing its meaning. | Our work not only advances the theory of policies by providing an exact bound on the size of the policy decision diagram @cite_14 , but also proposes new metrics for the complexity of a policy, and shows why the algorithms for fast resolution and verification are tractable in practice. Our new metrics (fieldwidth, allprob and oneprob) make it possible to answer if a policy will have a decision diagram of tractable size. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2096244038"
],
"abstract": [
"Firewalls are crucial elements in network security, and have been widely deployed in most businesses and institutions for securing private networks. The function of a firewall is to examine each incoming and outgoing packet and decide whether to accept or to discard the packet based on its policy. Due to the lack of tools for analyzing firewall policies, most firewalls on the Internet have been plagued with policy errors. A firewall policy error either creates security holes that will allow malicious traffic to sneak into a private network or blocks legitimate traffic and disrupts normal business processes, which in turn could lead to irreparable, if not tragic, consequences. Because a firewall may have a large number of rules and the rules often conflict, understanding and analyzing the function of a firewall has been known to be notoriously difficult. An effective way to assist firewall administrators to understand and analyze the function of their firewalls is by issuing queries. An example of a firewall query is \"Which computers in the private network can receive packets from a known malicious host in the outside Internet?rdquo Two problems need to be solved in order to make firewall queries practically useful: how to describe a firewall query and how to process a firewall query. In this paper, we first introduce a simple and effective SQL-like query language, called the Structured Firewall Query Language (SFQL), for describing firewall queries. Second, we give a theorem, called the Firewall Query Theorem, as the foundation for developing firewall query processing algorithms. Third, we present an efficient firewall query processing algorithm, which uses decision diagrams as its core data structure. Fourth, we propose methods for optimizing firewall query results. Finally, we present methods for performing the union, intersect, and minus operations on firewall query results. 
Our experimental results show that our firewall query processing algorithm is very efficient: it takes less than 10 milliseconds to process a query over a firewall that has up to 10,000 rules."
]
} |
1510.07880 | 2296125968 | A fundamental component of networking infrastructure is the policy, used in routing tables and firewalls. Accordingly, there has been extensive study of policies. However, the theory of such policies indicates that the size of the decision tree for a policy is very large ( O((2n)^d), where the policy has n rules and examines d features of packets). If this was indeed the case, the existing algorithms to detect anomalies, conflicts, and redundancies would not be tractable for practical policies (say, n = 1000 and d = 10). In this paper, we clear up this apparent paradox. Using the concept of 'rules in play', we calculate the actual upper bound on the size of the decision tree, and demonstrate how three other factors - narrow fields, singletons, and all-matches - make the problem tractable in practice. We also show how this concept may be used to solve an open problem: pruning a policy to the minimum possible number of rules, without changing its meaning. | Finally, our work impacts the study of high-performance architecture for fast packet resolution. High-throughput systems, such as backbone routers, make use of special hardware - ternary content addressable memory @cite_2 , ordinary RAM, pipelining systems @cite_13 , and so on. But these systems are very expensive, as well as limited in the size of policies they can accommodate. Therefore, there is active research on algorithms to optimize policies, such as Liu's TCAM Razor @cite_2 . This paper shows how, using our concept of rules in play, we can improve upon these algorithms to create truly minimum policies (rather than just locally minimal ones). | {
"cite_N": [
"@cite_13",
"@cite_2"
],
"mid": [
"2102840128",
"2162597606"
],
"abstract": [
"Pipelined ASIC architectures are increasingly being used in forwarding engines for high-speed IP routers. We explore optimization issues in the design of memory-efficient data structures that support fast incremental updates in such forwarding engines. Our solution aims to balance the memory utilization across the multiple pipeline stages. We also propose a series of optimizations that minimize the disruption to the forwarding process caused by route updates. These optimizations reduce the update overheads by over a factor of two for a variety of different core routing tables and update traces.",
"One popular hardware device for performing fast routing lookups and packet classification is a ternary content-addressable memory (TCAM). This paper proposes two algorithms to manage the TCAM such that incremental update times remain small in the worst case."
]
} |
1510.07712 | 1957740064 | We present an approach that exploits hierarchical Recurrent Neural Networks (RNNs) to tackle the video captioning problem, i.e., generating one or multiple sentences to describe a realistic video. Our hierarchical framework contains a sentence generator and a paragraph generator. The sentence generator produces one simple short sentence that describes a specific short video interval. It exploits both temporal- and spatial-attention mechanisms to selectively focus on visual elements during generation. The paragraph generator captures the inter-sentence dependency by taking as input the sentential embedding produced by the sentence generator, combining it with the paragraph history, and outputting the new initial state for the sentence generator. We evaluate our approach on two large-scale benchmark datasets: YouTubeClips and TACoS-MultiLevel. The experiments demonstrate that our approach significantly outperforms the current state-of-the-art methods with BLEU@4 scores 0.499 and 0.305 respectively. | The methods for NMT @cite_44 @cite_56 @cite_1 @cite_14 @cite_26 @cite_7 in computational linguistics generally follow the encoder-decoder paradigm. An encoder maps the source sentence to a fixed-length feature vector in the embedding space. A decoder then conditions on this vector to generate a translated sentence in the target language. On top of this paradigm, several improvements were proposed. Bahdanau et al. @cite_1 proposed a soft attention model to do alignment during translation, so that their approach is able to focus on different parts of the source sentence when generating different translated words. Li et al. @cite_26 and Lin et al. @cite_7 employed hierarchical RNNs to model the hierarchy of a document. Our approach is similar to a neural machine translator with a simplified attention model and a hierarchical architecture. | {
"cite_N": [
"@cite_14",
"@cite_26",
"@cite_7",
"@cite_1",
"@cite_56",
"@cite_44"
],
"mid": [
"2949888546",
"2950752421",
"2251849926",
"2133564696",
"2950635152",
"1753482797"
],
"abstract": [
"Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.",
"Natural language generation of coherent long texts like paragraphs or longer documents is a challenging problem for recurrent networks models. In this paper, we explore an important step toward this generation task: training an LSTM (Long-short term memory) auto-encoder to preserve and reconstruct multi-sentence paragraphs. We introduce an LSTM model that hierarchically builds an embedding for a paragraph from embeddings for sentences and words, then decodes this embedding to reconstruct the original paragraph. We evaluate the reconstructed paragraph using standard metrics like ROUGE and Entity Grid, showing that neural models are able to encode texts in a way that preserve syntactic, semantic, and discourse coherence. While only a first step toward generating coherent text units from neural models, our work has the potential to significantly impact natural language generation and summarization Code for the three models described in this paper can be found at www.stanford.edu jiweil .",
"This paper proposes a novel hierarchical recurrent neural network language model (HRNNLM) for document modeling. After establishing a RNN to capture the coherence between sentences in a document, HRNNLM integrates it as the sentence history information into the word level RNN to predict the word sequence with cross-sentence contextual information. A two-step training approach is designed, in which sentence-level and word-level language models are approximated for the convergence in a pipeline style. Examined by the standard sentence reordering scenario, HRNNLM is proved for its better accuracy in modeling the sentence coherence. And at the word level, experimental results also indicate a significant lower model perplexity, followed by a practical better translation result when applied to a Chinese-English document translation reranking task.",
"Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.",
"In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder-Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases.",
"We introduce a class of probabilistic continuous translation models called Recurrent Continuous Translation Models that are purely based on continuous representations for words, phrases and sentences and do not rely on alignments or phrasal translation units. The models have a generation and a conditioning aspect. The generation of the translation is modelled with a target Recurrent Language Model, whereas the conditioning on the source sentence is modelled with a Convolutional Sentence Model. Through various experiments, we show first that our models obtain a perplexity with respect to gold translations that is > 43 lower than that of stateof-the-art alignment-based translation models. Secondly, we show that they are remarkably sensitive to the word order, syntax, and meaning of the source sentence despite lacking alignments. Finally we show that they match a state-of-the-art system when rescoring n-best lists of translations."
]
} |
1510.07712 | 1957740064 | We present an approach that exploits hierarchical Recurrent Neural Networks (RNNs) to tackle the video captioning problem, i.e., generating one or multiple sentences to describe a realistic video. Our hierarchical framework contains a sentence generator and a paragraph generator. The sentence generator produces one simple short sentence that describes a specific short video interval. It exploits both temporal- and spatial-attention mechanisms to selectively focus on visual elements during generation. The paragraph generator captures the inter-sentence dependency by taking as input the sentential embedding produced by the sentence generator, combining it with the paragraph history, and outputting the new initial state for the sentence generator. We evaluate our approach on two large-scale benchmark datasets: YouTubeClips and TACoS-MultiLevel. The experiments demonstrate that our approach significantly outperforms the current state-of-the-art methods with BLEU@4 scores 0.499 and 0.305 respectively. | The first attempt at visual-to-text translation using RNNs was seen in the work of image captioning @cite_34 @cite_52 @cite_53 @cite_43 @cite_22 , which can be treated as a special case of video captioning when each video has a single frame and no temporal structure. As a result, image captioning only requires computing object appearance features, but not action motion features. The amount of data handled by an image captioning method is much (dozens of times) less than that handled by a video captioning method. The overall structure of an image captioner (instance-to-sequence) is also usually simpler than that of a video captioner (sequence-to-sequence). Some other methods, such as Park and Kim @cite_50 , addressed the problem of retrieving sentences from a training database to describe a sequence of images. They proposed a local coherence model for fluent sentence transitions, which serves a similar purpose to that of our paragraph generator. | {
"cite_N": [
"@cite_22",
"@cite_53",
"@cite_52",
"@cite_43",
"@cite_50",
"@cite_34"
],
"mid": [
"2122180654",
"2308045930",
"1527575280",
"2951912364",
"2183386595",
"2962706528"
],
"abstract": [
"In this paper we explore the bi-directional mapping between images and their sentence-based descriptions. We propose learning this mapping using a recurrent neural network. Unlike previous approaches that map both sentences and images to a common embedding, we enable the generation of novel sentences given an image. Using the same model, we can also reconstruct the visual features associated with an image given its visual description. We use a novel recurrent visual memory that automatically learns to remember long-term visual concepts to aid in both sentence generation and visual feature reconstruction. We evaluate our approach on several tasks. These include sentence generation, sentence retrieval and image retrieval. State-of-the-art results are shown for the task of generating novel image descriptions. When compared to human generated captions, our automatically generated captions are preferred by humans over @math of the time. Results are better than or comparable to state-of-the-art results on the image and sentence retrieval tasks for methods using similar visual features.",
"",
"Inspired by recent advances in multimodal learning and machine translation, we introduce an encoder-decoder pipeline that learns (a): a multimodal joint embedding space with images and text and (b): a novel language model for decoding distributed representations from our space. Our pipeline effectively unifies joint image-text embedding models with multimodal neural language models. We introduce the structure-content neural language model that disentangles the structure of a sentence to its content, conditioned on representations produced by the encoder. The encoder allows one to rank images and sentences while the decoder can generate novel descriptions from scratch. Using LSTM to encode sentences, we match the state-of-the-art performance on Flickr8K and Flickr30K without using object detections. We also set new best results when using the 19-layer Oxford convolutional network. Furthermore we show that with linear encoders, the learned embedding space captures multimodal regularities in terms of vector space arithmetic e.g. *image of a blue car* - \"blue\" + \"red\" is near images of red cars. Sample captions generated for 800 images are made available for comparison.",
"Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art.",
"We propose an approach for retrieving a sequence of natural sentences for an image stream. Since general users often take a series of pictures on their special moments, it would better take into consideration of the whole image stream to produce natural language descriptions. While almost all previous studies have dealt with the relation between a single image and a single natural sentence, our work extends both input and output dimension to a sequence of images and a sequence of sentences. To this end, we design a multimodal architecture called coherence recurrent convolutional network (CRCN), which consists of convolutional neural networks, bidirectional recurrent neural networks, and an entity-based local coherence model. Our approach directly learns from vast user-generated resource of blog posts as text-image parallel training data. We demonstrate that our approach outperforms other state-of-the-art candidate methods, using both quantitative measures (e.g. BLEU and top-K recall) and user studies via Amazon Mechanical Turk.",
"In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel image captions. It directly models the probability distribution of generating a word given previous words and an image. Image captions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on four benchmark datasets (IAPR TC-12, Flickr 8K, Flickr 30K and MS COCO). Our model outperforms the state-of-the-art methods. In addition, we apply the m-RNN model to retrieval tasks for retrieving images or sentences, and achieves significant performance improvement over the state-of-the-art methods which directly optimize the ranking objective function for retrieval. The project page of this work is: www.stat.ucla.edu/~junhua.mao/m-RNN.html ."
]
} |
1510.07712 | 1957740064 | We present an approach that exploits hierarchical Recurrent Neural Networks (RNNs) to tackle the video captioning problem, i.e., generating one or multiple sentences to describe a realistic video. Our hierarchical framework contains a sentence generator and a paragraph generator. The sentence generator produces one simple short sentence that describes a specific short video interval. It exploits both temporal- and spatial-attention mechanisms to selectively focus on visual elements during generation. The paragraph generator captures the inter-sentence dependency by taking as input the sentential embedding produced by the sentence generator, combining it with the paragraph history, and outputting the new initial state for the sentence generator. We evaluate our approach on two large-scale benchmark datasets: YouTubeClips and TACoS-MultiLevel. The experiments demonstrate that our approach significantly outperforms the current state-of-the-art methods with BLEU@4 scores 0.499 and 0.305 respectively. | The very early video captioning method @cite_13 based on RNNs extends the image captioning methods by simply average pooling the video frames. Then the problem becomes exactly the same as image captioning. However, this strategy works only for short video clips where there is only one major event, usually appearing in one video shot from the beginning to the end. To avoid this issue, more sophisticated ways of encoding video features were proposed in later work, using either a recurrent encoder @cite_55 @cite_27 @cite_25 or an attention model @cite_19 . Our sentence generator is closely related to Yao et al. @cite_19 , in that we also use an attention mechanism to selectively focus on video features. One difference between our framework and theirs is that we additionally exploit spatial attention.
The other difference is that after weighing video features with attention weights, we do not condition the hidden state of our recurrent layer on the weighted features. | {
"cite_N": [
"@cite_55",
"@cite_19",
"@cite_27",
"@cite_13",
"@cite_25"
],
"mid": [
"2951183276",
"2950307714",
"2950019618",
"2136036867",
"1714639292"
],
"abstract": [
"Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\"' in that they can be compositional in spatial and temporal \"layers\". Such models may have advantages when target concepts are complex and or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and or optimized.",
"Recent progress in using recurrent neural networks (RNNs) for image description has motivated the exploration of their application for video description. However, while images are static, working with videos requires modeling their dynamic temporal structure and then properly integrating that information into a natural language description. In this context, we propose an approach that successfully takes into account both the local and global temporal structure of videos to produce descriptions. First, our approach incorporates a spatial temporal 3-D convolutional neural network (3-D CNN) representation of the short temporal dynamics. The 3-D CNN representation is trained on video action recognition tasks, so as to produce a representation that is tuned to human motion and behavior. Second we propose a temporal attention mechanism that allows to go beyond local temporal modeling and learns to automatically select the most relevant temporal segments given the text-generating RNN. Our approach exceeds the current state-of-art for both BLEU and METEOR metrics on the Youtube2Text dataset. We also present results on a new, larger and more challenging dataset of paired video and natural language descriptions.",
"Real-world videos often have complex dynamics; and methods for generating open-domain video descriptions should be sensitive to temporal structure and allow both input (sequence of frames) and output (sequence of words) of variable length. To approach this problem, we propose a novel end-to-end sequence-to-sequence model to generate captions for videos. For this we exploit recurrent neural networks, specifically LSTMs, which have demonstrated state-of-the-art performance in image caption generation. Our LSTM model is trained on video-sentence pairs and learns to associate a sequence of video frames to a sequence of words in order to generate a description of the event in the video clip. Our model naturally is able to learn the temporal structure of the sequence of frames as well as the sequence model of the generated sentences, i.e. a language model. We evaluate several variants of our model that exploit different visual features on a standard set of YouTube videos and two movie description datasets (M-VAD and MPII-MD).",
"Solving the visual symbol grounding problem has long been a goal of artificial intelligence. The field appears to be advancing closer to this goal with recent breakthroughs in deep learning for natural language grounding in static images. In this paper, we propose to translate videos directly to sentences using a unified deep neural network with both convolutional and recurrent structure. Described video datasets are scarce, and most existing methods have been applied to toy domains with a small vocabulary of possible words. By transferring knowledge from 1.2M+ images with category labels and 100,000+ images with captions, our method is able to create sentence descriptions of open-domain videos with large vocabularies. We compare our approach with recent work using language generation metrics, subject, verb, and object prediction accuracy, and a human evaluation.",
"Generating natural language descriptions for in-the-wild videos is a challenging task. Most state-of-the-art methods for solving this problem borrow existing deep convolutional neural network (CNN) architectures (AlexNet, GoogLeNet) to extract a visual representation of the input video. However, these deep CNN architectures are designed for single-label centered-positioned object classification. While they generate strong semantic features, they have no inherent structure allowing them to detect multiple objects of different sizes and locations in the frame. Our paper tries to solve this problem by integrating the base CNN into several fully convolutional neural networks (FCNs) to form a multi-scale network that handles multiple receptive field sizes in the original image. FCNs, previously applied to image segmentation, can generate class heat-maps efficiently compared to sliding window mechanisms, and can easily handle multiple scales. To further handle the ambiguity over multiple objects and locations, we incorporate the Multiple Instance Learning mechanism (MIL) to consider objects in different positions and at different scales simultaneously. We integrate our multi-scale multi-instance architecture with a sequence-to-sequence recurrent neural network to generate sentence descriptions based on the visual representation. Ours is the first end-to-end trainable architecture that is capable of multi-scale region processing. Evaluation on a Youtube video dataset shows the advantage of our approach compared to the original single-scale whole frame CNN model. Our flexible and efficient architecture can potentially be extended to support other video processing tasks."
]
} |
1510.07380 | 2802303374 | Simultaneous Localization and Planning (SLAP) is a crucial ability for an autonomous robot operating under uncertainty. In its most general form, SLAP induces a continuous POMDP (partially-observable Markov decision process), which needs to be repeatedly solved online. This paper addresses this problem and proposes a dynamic replanning scheme in belief space. The underlying POMDP, which is continuous in state, action, and observation space, is approximated offline via sampling-based methods, but operates in a replanning loop online to admit local improvements to the coarse offline policy. This construct enables the proposed method to combat changing environments and large localization errors, even when the change alters the homotopy class of the optimal trajectory. It further outperforms the state-of-the-art FIRM (Feedback-based Information RoadMap) method by eliminating unnecessary stabilization steps. Applying belief space planning to physical systems brings with it a plethora of challenges. A key focus of this paper is to implement the proposed planner on a physical robot and show the SLAP solution performance under uncertainty, in changing environments and in the presence of large disturbances, such as a kidnapped robot situation. | Active localization: Solving the planning problem alongside localization and mapping has been the topic of several recent works (e.g., @cite_48 , @cite_40 , @cite_1 , @cite_70 ). The method in @cite_22 presents an approach to uncertainty-constrained simultaneous planning, localization and mapping for unknown environments, and in @cite_18 , the authors propose an approach to actively explore unknown maps while imposing a hard constraint on the localization uncertainty at the end of the planning horizon. Our work assumes the environment map is known, formulates the problem in its most general form (POMDP), and focuses on online replanning in the presence of obstacles (possibly changing). | {
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_70",
"@cite_48",
"@cite_1",
"@cite_40"
],
"mid": [
"2033051917",
"1826354755",
"2151175624",
"1547347985",
"1972624884",
"2161123599"
],
"abstract": [
"",
"We investigate the problem of planning under uncertainty, with application to mobile robotics. We propose a probabilistic framework in which the robot bases its decisions on the generalized belief, which is a probabilistic description of its own state and of external variables of interest. The approach naturally leads to a dual-layer architecture: an inner estimation layer, which performs inference to predict the outcome of possible decisions; and an outer decisional layer which is in charge of deciding the best action to undertake. Decision making is entrusted to a model predictive control (MPC) scheme. The formulation is valid for general cost functions and does not discretize the state or control space, enabling planning in continuous domain. Moreover, it allows to relax the assumption of maximum likelihood observations: predicted measurements are treated as random variables, and binary random variables are used to model the event that a measurement is actually taken by the robot. We successfully apply our approach to the problem of uncertainty-constrained exploration, in which the robot has to perform tasks in an unknown environment, while maintaining localization uncertainty within given bounds. We present an extensive numerical analysis of the proposed approach and compare it against related work. In practice, our planning approach produces smooth and natural trajectories and is able to impose soft upper bounds on the uncertainty. Finally, we exploit the results of this analysis to identify current limitations and show that the proposed framework can accommodate several desirable extensions.",
"This paper reports on an integrated navigation algorithm for the visual simultaneous localization and mapping (SLAM) robotic area coverage problem. In the robotic area coverage problem, the goal is to explore and map a given target area in a reasonable amount of time. This goal necessitates the use of minimally redundant overlap trajectories for coverage efficiency; however, visual SLAM's navigation estimate will inevitably drift over time in the absence of loop-closures. Therefore, efficient area coverage and good SLAM navigation performance represent competing objectives. To solve this decision-making problem, we introduce perception-driven navigation (PDN), an integrated navigation algorithm that automatically balances between exploration and revisitation using a reward framework. This framework accounts for vehicle localization uncertainty, area coverage performance, and the identification of good candidate regions in the environment for loop-closure. Results are shown for a hybrid simulation using synthetic and real imagery from an autonomous underwater ship hull inspection application.",
"In this paper we investigate the monotonicity of various optimality criteria during the exploration phase of an active SLAM algorithm. Optimality criteria such as A-opt, D-opt or E-opt are used in active SLAM to account for uncertainty in the map or the robot's pose, and these criteria are usually part of utility functions which help active SLAM algorithms decide where the robot should move next. The monotonicity of the optimality criteria is of utmost importance. During the exploration phase, i.e. when the robot is traversing new territory or cannot perform a loop closure, the most common way of estimating the pose of the robot is through dead-reckoning. Correctly accounting for the uncertainty is important for an active SLAM algorithm and in particular for a dead-reckoning scenario, where by definition the uncertainty in the robot's pose grows. If monotonicity does not hold in this scenario, active SLAM algorithms can execute actions under the false belief that the uncertainty has reduced. We show analytically and experimentally some conditions in which the A-opt and E-opt criteria lose monotonicity in a dead-reckoning scenario, where the propagation of the robot's pose is done using a linearized framework. We also show analytically and experimentally that under the same conditions the D-opt does not lose monotonicity and, in general for the linearized framework under consideration, D-opt does not break monotonicity.",
"Autonomous exploration under uncertain robot location requires the robot to use active strategies to trade-off between the contrasting tasks of exploring the unknown scenario and satisfying given constraints on the admissible uncertainty in map estimation. The corresponding problem, namely active SLAM (Simultaneous Localization and Mapping) and exploration, has received a large attention from the robotic community for its relevance in mobile robotics applications. In this work we tackle the problem of active SLAM and exploration with Rao-Blackwellized Particle Filters. We propose an application of Kullback-Leibler divergence for the purpose of evaluating the particle-based SLAM posterior approximation. This metric is then applied in the definition of the expected information from a policy, which allows the robot to autonomously decide between exploration and place revisiting actions (i.e., loop closing). Extensive tests are performed in typical indoor and office environments and on well-known benchmarking scenarios belonging to SLAM literature, with the purpose of comparing the proposed approach with the state-of-the-art techniques and to evaluate the maturity of truly autonomous navigation systems based on particle filtering.",
"In this paper, we consider the computation of the D-optimality criterion as a metric for the uncertainty of a SLAM system. Properties regarding the use of this uncertainty criterion in the active SLAM context are highlighted, and comparisons against the A-optimality criterion and entropy are presented. This paper shows that contrary to what has been previously reported, the D-optimality criterion is indeed capable of giving fruitful information as a metric for the uncertainty of a robot performing SLAM. Finally, through various experiments with simulated and real robots, we support our claims and show that the use of D-opt has desirable effects in various SLAM related tasks such as active mapping and exploration."
]
} |
1510.07380 | 2802303374 | Simultaneous Localization and Planning (SLAP) is a crucial ability for an autonomous robot operating under uncertainty. In its most general form, SLAP induces a continuous POMDP (partially-observable Markov decision process), which needs to be repeatedly solved online. This paper addresses this problem and proposes a dynamic replanning scheme in belief space. The underlying POMDP, which is continuous in state, action, and observation space, is approximated offline via sampling-based methods, but operates in a replanning loop online to admit local improvements to the coarse offline policy. This construct enables the proposed method to combat changing environments and large localization errors, even when the change alters the homotopy class of the optimal trajectory. It further outperforms the state-of-the-art FIRM (Feedback-based Information RoadMap) method by eliminating unnecessary stabilization steps. Applying belief space planning to physical systems brings with it a plethora of challenges. A key focus of this paper is to implement the proposed planner on a physical robot and show the SLAP solution performance under uncertainty, in changing environments and in the presence of large disturbances, such as a kidnapped robot situation. | General-purpose POMDP solvers: There is a strong body of literature on general-purpose POMDP solvers (e.g., @cite_41 , @cite_30 , @cite_43 , @cite_67 , @cite_57 ). We divide the literature on general-purpose POMDPs into two main categories: The first category is offline solvers, whose goal is to compute a policy (e.g., @cite_36 @cite_23 @cite_7 @cite_72 ). A survey paper on a class of these methods relying on point-based solvers is @cite_63 . The second category is online search algorithms. In these methods, instead of a policy (action for all possible beliefs), the goal is to find the best action for the current belief using a forward search tree rooted in the current belief.
@cite_3 surveys recent methods in this category. In recent years, general-purpose online solvers have become faster and more efficient. AEMS @cite_35 , DESPOT @cite_55 , ABT @cite_68 , and POMCP @cite_71 are among the most successful methods in this category. Direct application of these methods to SLAP-like problems is a challenge due to expensive simulation steps; continuous, high-dimensional spaces; and difficult tree-pruning steps. We discuss these three challenges in the following paragraphs. | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_67",
"@cite_7",
"@cite_41",
"@cite_36",
"@cite_55",
"@cite_3",
"@cite_57",
"@cite_43",
"@cite_72",
"@cite_23",
"@cite_63",
"@cite_71",
"@cite_68"
],
"mid": [
"2214249",
"1498235675",
"2134802714",
"1636455720",
"",
"1532688806",
"2124595631",
"2144913588",
"2010780110",
"2171212676",
"2099430963",
"2158907787",
"2055921164",
"2171084228",
"2339179100"
],
"abstract": [
"Partially observable Markov decision processes (POMDPs) have been successfully applied to various robot motion planning tasks under uncertainty. However, most existing POMDP algorithms assume a discrete state space, while the natural state space of a robot is often continuous. This paper presents Monte Carlo Value Iteration (MCVI) for continuous-state POMDPs. MCVI samples both a robot’s state space and the corresponding belief space, and avoids inefficient a priori discretization of the state space as a grid. Both theoretical results and preliminary experimental results indicate that MCVI is a promising new approach for robot motion planning under uncertainty.",
"Solving large Partially Observable Markov Decision Processes (POMDPs) is a complex task which is often intractable. A lot of effort has been made to develop approximate offline algorithms to solve ever larger POMDPs. However, even state-of-the-art approaches fail to solve large POMDPs in reasonable time. Recent developments in online POMDP search suggest that combining offline computations with online computations is often more efficient and can also considerably reduce the error made by approximate policies computed offline. In the same vein, we propose a new anytime online search algorithm which seeks to minimize, as efficiently as possible, the error made by an approximate value function computed offline. In addition, we show how previous online computations can be reused in following time steps in order to prevent redundant computations. Our preliminary results indicate that our approach is able to tackle large state space and observation space efficiently and under real-time constraints.",
"We present a novel POMDP planning algorithm called heuristic search value iteration (HSVI). HSVI is an anytime algorithm that returns a policy and a provable bound on its regret with respect to the optimal policy. HSVI gets its power by combining two well-known techniques: attention-focusing search heuristics and piecewise linear convex representations of the value function. HSVI's soundness and convergence have been proven. On some benchmark problems from the literature, HSVI displays speedups of greater than 100 with respect to other state-of-the-art POMDP value iteration algorithms. We also apply HSVI to a new rover exploration problem 10 times larger than most POMDP problems in the literature.",
"Existing complexity bounds for point-based POMDP value iteration algorithms focus either on the curse of dimensionality or the curse of history. We derive a new bound that relies on both and uses the concept of discounted reachability; our conclusions may help guide future algorithm design. We also discuss recent improvements to our (point-based) heuristic search value iteration algorithm. Our new implementation calculates tighter initial bounds, avoids solving linear programs, and makes more effective use of sparsity.",
"",
"This paper introduces the Point-Based Value Iteration (PBVI) algorithm for POMDP planning. PBVI approximates an exact value iteration solution by selecting a small set of representative belief points and then tracking the value and its derivative for those points only. By using stochastic trajectories to choose belief points, and by maintaining only one value hyper-plane per point, PBVI successfully solves large problems: we present results on a robotic laser tag problem as well as three test domains from the literature.",
"POMDPs provide a principled framework for planning under uncertainty, but are computationally intractable, due to the \"curse of dimensionality\" and the \"curse of history\". This paper presents an online POMDP algorithm that alleviates these difficulties by focusing the search on a set of randomly sampled scenarios. A Determinized Sparse Partially Observable Tree (DESPOT) compactly captures the execution of all policies on these scenarios. Our Regularized DESPOT (R-DESPOT) algorithm searches the DESPOT for a policy, while optimally balancing the size of the policy and its estimated value obtained under the sampled scenarios. We give an output-sensitive performance bound for all policies derived from a DESPOT, and show that R-DESPOT works well if a small optimal policy exists. We also give an anytime algorithm that approximates R-DESPOT. Experiments show strong results, compared with two of the fastest online POMDP algorithms. Source code along with experimental settings are available at http://bigbird.comp.nus.edu.sg/pmwiki/farm/appl .",
"Partially Observable Markov Decision Processes (POMDPs) provide a rich framework for sequential decision-making under uncertainty in stochastic domains. However, solving a POMDP is often intractable except for small problems due to their complexity. Here, we focus on online approaches that alleviate the computational complexity by computing good local policies at each decision step during the execution. Online algorithms generally consist of a lookahead search to find the best action to execute at each time step in an environment. Our objectives here are to survey the various existing online POMDP methods, analyze their properties and discuss their advantages and disadvantages; and to thoroughly evaluate these online approaches in different environments under various metrics (return, error bound reduction, lower bound improvement). Our experimental results indicate that state-of-the-art online heuristic search methods can handle large POMDP domains efficiently.",
"Partially observable Markov decision processes (POMDPs) provide a principled, general framework for robot motion planning in uncertain and dynamic environments. They have been applied to various robotic tasks. However, solving POMDPs exactly is computationally intractable. A major challenge is to scale up POMDP algorithms for complex robotic tasks. Robotic systems often have mixed observability: even when a robot's state is not fully observable, some components of the state may still be so. We use a factored model to represent separately the fully and partially observable components of a robot's state and derive a compact lower-dimensional representation of its belief space. This factored representation can be combined with any point-based algorithm to compute approximate POMDP solutions. Experimental results show that on standard test problems, our approach improves the performance of a leading point-based POMDP algorithm by many times.",
"This paper focuses on a continuous-time, continuous-space formulation of the stochastic optimal control problem with nonlinear dynamics and observation noise. We lay the mathematical foundations to construct, via incremental sampling, an approximating sequence of discrete-time finite-state partially observable Markov decision processes (POMDPs), such that the behavior of successive approximations converges to the behavior of the original continuous system in an appropriate sense. We also show that the optimal cost function and control policies for these POMDP approximations converge almost surely to their counterparts for the underlying continuous system in the limit. We demonstrate this approach on two popular continuous-time problems, viz., the Linear-Quadratic-Gaussian (LQG) control problem and the light-dark domain problem.",
"In Proc. Robotics: Science & Systems, 2008 Abstract—Motion planning in uncertain and dynamic environments is an essential capability for autonomous robots. Partially observable Markov decision processes (POMDPs) provide a principled mathematical framework for solving such problems, but they are often avoided in robotics due to high computational complexity. Our goal is to create practical POMDP algorithms and software for common robotic tasks. To this end, we have developed a new point-based POMDP algorithm that exploits the notion of optimally reachable belief spaces to improve computational efficiency. In simulation, we successfully applied the algorithm to a set of common robotic tasks, including instances of coastal navigation, grasping, mobile robot exploration, and target tracking, all modeled as POMDPs with a large number of states. In most of the instances studied, our algorithm substantially outperformed one of the fastest existing point-based algorithms. A software package implementing our algorithm will soon be released at http://motion.comp.nus.edu.sg/projects/pomdp/pomdp.html.",
"Partially observable Markov decision processes (POMDPs) form an attractive and principled framework for agent planning under uncertainty. Point-based approximate techniques for POMDPs compute a policy based on a finite set of points collected in advance from the agent's belief space. We present a randomized point-based value iteration algorithm called PERSEUS. The algorithm performs approximate value backup stages, ensuring that in each backup stage the value of each point in the belief set is improved; the key observation is that a single backup may improve the value of many belief points. Contrary to other point-based methods, PERSEUS backs up only a (randomly selected) subset of points in the belief set, sufficient for improving the value of each belief point in the set. We show how the same idea can be extended to dealing with continuous action spaces. Experimental results show the potential of PERSEUS in large scale POMDP problems.",
"The past decade has seen a significant breakthrough in research on solving partially observable Markov decision processes (POMDPs). Where past solvers could not scale beyond perhaps a dozen states, modern solvers can handle complex domains with many thousands of states. This breakthrough was mainly due to the idea of restricting value function computations to a finite subset of the belief space, permitting only local value updates for this subset. This approach, known as point-based value iteration, avoids the exponential growth of the value function, and is thus applicable for domains with longer horizons, even with relatively large state spaces. Many extensions were suggested to this basic idea, focusing on various aspects of the algorithm--mainly the selection of the belief space subset, and the order of value function updates. In this survey, we walk the reader through the fundamentals of point-based value iteration, explaining the main concepts and ideas. Then, we survey the major extensions to the basic algorithm, discussing their merits. Finally, we include an extensive empirical analysis using well known benchmarks, in order to shed light on the strengths and limitations of the various approaches.",
"This paper introduces a Monte-Carlo algorithm for online planning in large POMDPs. The algorithm combines a Monte-Carlo update of the agent's belief state with a Monte-Carlo tree search from the current belief state. The new algorithm, POMCP, has two important properties. First, Monte-Carlo sampling is used to break the curse of dimensionality both during belief state updates and during planning. Second, only a black box simulator of the POMDP is required, rather than explicit probability distributions. These properties enable POMCP to plan effectively in significantly larger POMDPs than has previously been possible. We demonstrate its effectiveness in three large POMDPs. We scale up a well-known benchmark problem, rocksample, by several orders of magnitude. We also introduce two challenging new POMDPs: 10 x 10 battleship and partially observable PacMan, with approximately 10^18 and 10^56 states respectively. Our Monte-Carlo planning algorithm achieved a high level of performance with no prior knowledge, and was also able to exploit simple domain knowledge to achieve better results with less search. POMCP is the first general purpose planner to achieve high performance in such large and unfactored POMDPs."
"Motion planning under uncertainty is important for reliable robot operations in uncertain and dynamic environments. Partially Observable Markov Decision Process (POMDP) is a general and systematic framework for motion planning under uncertainty. To cope with dynamic environment well, we often need to modify the POMDP model during runtime. However, despite recent tremendous advances in POMDP planning, most solvers are not fast enough to generate a good solution when the POMDP model changes during runtime. Recent progress in online POMDP solvers have shown promising results. However, most online solvers are based on replanning, which recompute a solution from scratch at each step, discarding any solution that has been computed so far, and hence wasting valuable computational resources. In this paper, we propose a new online POMDP solver, called Adaptive Belief Tree (ABT), that can reuse and improve existing solution, and update the solution as needed whenever the POMDP model changes. Given enough time, ABT converges to the optimal solution of the current POMDP model in probability. Preliminary results on three distinct robotics tasks in dynamic environments are promising. In all test scenarios, ABT generates similar or better solutions faster than the fastest online POMDP solver today; using an average of less than 50 ms of computation time per step."
]
} |
1510.07380 | 2802303374 | Simultaneous localization and Planning (SLAP) is a crucial ability for an autonomous robot operating under uncertainty. In its most general form, SLAP induces a continuous POMDP (partially-observable Markov decision process), which needs to be repeatedly solved online. This paper addresses this problem and proposes a dynamic replanning scheme in belief space. The underlying POMDP, which is continuous in state, action, and observation space, is approximated offline via sampling-based methods, but operates in a replanning loop online to admit local improvements to the coarse offline policy. This construct enables the proposed method to combat changing environments and large localization errors, even when the change alters the homotopy class of the optimal trajectory. It further outperforms the state-of-the-art FIRM (Feedback-based Information RoadMap) method by eliminating unnecessary stabilization steps. Applying belief space planning to physical systems brings with it a plethora of challenges. A key focus of this paper is to implement the proposed planner on a physical robot and show the SLAP solution performance under uncertainty, in changing environments and in the presence of large disturbances, such as a kidnapped robot situation. | The majority of the above-mentioned methods rely on simulating the POMDP model forward in time, creating a tree of possible future scenarios. At each simulation step @math , the simulator @math simulates taking action @math at state @math and computes the next state @math , observation @math , and the cost and constraints of this transition. When dealing with games (e.g., Go) or traditional POMDP problems (e.g., RockSample @cite_3 ), the forward simulation step and cost computation are computationally very inexpensive. However, in SLAP-like problems computing the cost is typically much more expensive.
An important example is a boolean cost that indicates whether or not the system has collided with obstacles during this transition. This is a very expensive computation in robotics applications, which restricts the applicability of tree-based methods in problems where collision probabilities must be computed. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2144913588"
],
"abstract": [
"Partially Observable Markov Decision Processes (POMDPs) provide a rich framework for sequential decision-making under uncertainty in stochastic domains. However, solving a POMDP is often intractable except for small problems due to their complexity. Here, we focus on online approaches that alleviate the computational complexity by computing good local policies at each decision step during the execution. Online algorithms generally consist of a lookahead search to find the best action to execute at each time step in an environment. Our objectives here are to survey the various existing online POMDP methods, analyze their properties and discuss their advantages and disadvantages; and to thoroughly evaluate these online approaches in different environments under various metrics (return, error bound reduction, lower bound improvement). Our experimental results indicate that state-of-the-art online heuristic search methods can handle large POMDP domains efficiently."
]
} |
1510.07380 | 2802303374 | Simultaneous localization and Planning (SLAP) is a crucial ability for an autonomous robot operating under uncertainty. In its most general form, SLAP induces a continuous POMDP (partially-observable Markov decision process), which needs to be repeatedly solved online. This paper addresses this problem and proposes a dynamic replanning scheme in belief space. The underlying POMDP, which is continuous in state, action, and observation space, is approximated offline via sampling-based methods, but operates in a replanning loop online to admit local improvements to the coarse offline policy. This construct enables the proposed method to combat changing environments and large localization errors, even when the change alters the homotopy class of the optimal trajectory. It further outperforms the state-of-the-art FIRM (Feedback-based Information RoadMap) method by eliminating unnecessary stabilization steps. Applying belief space planning to physical systems brings with it a plethora of challenges. A key focus of this paper is to implement the proposed planner on a physical robot and show the SLAP solution performance under uncertainty, in changing environments and in the presence of large disturbances, such as a kidnapped robot situation. | Continuous Gaussian POMDP solvers: A different class of methods restricts the form of uncertainty to Gaussian and extends traditional deterministic motion planning methods (e.g., PRM and RRT) to belief space. Examples include @cite_13 @cite_10 @cite_8 . More recently, @cite_26 proposed an efficient approximate value iteration method: starting from an initial solution (a trajectory in belief space), it converges to the local minimum closest to that initial solution. These methods are single-query (the solution is valid only for a given initial belief), so when replanning from a new belief most of the computation needs to be redone.
Replanning becomes more challenging when the planner has to switch the plan from one homotopy class to another. | {
"cite_N": [
"@cite_10",
"@cite_26",
"@cite_13",
"@cite_8"
],
"mid": [
"2002201291",
"2399169217",
"2289207600",
"2103052138"
],
"abstract": [
"In this paper we present LQG-MP (linear-quadratic Gaussian motion planning), a new approach to robot motion planning that takes into account the sensors and the controller that will be used during the execution of the robot's path. LQG-MP is based on the linear-quadratic controller with Gaussian models of uncertainty, and explicitly characterizes in advance (i.e. before execution) the a priori probability distributions of the state of the robot along its path. These distributions can be used to assess the quality of the path, for instance by computing the probability of avoiding collisions. Many methods can be used to generate the required ensemble of candidate paths from which the best path is selected; in this paper we report results using rapidly exploring random trees (RRT). We study the performance of LQG-MP with simulation experiments in three scenarios: (A) a kinodynamic car-like robot, (B) multi-robot planning with differential-drive robots, and (C) a 6-DOF serial manipulator. We also present a method that applies Kalman smoothing to make paths C^k-continuous and apply LQG-MP to precomputed roadmaps using a variant of Dijkstra's algorithm to efficiently find high-quality paths.",
"We introduce a highly efficient method for solving continuous partially-observable Markov decision processes (POMDPs) in which beliefs can be modeled using Gaussian distributions over the state space. Our method enables fast solutions to sequential decision making under uncertainty for a variety of problems involving noisy or incomplete observations and stochastic actions. We present an efficient approach to compute locally-valid approximations to the value function over continuous spaces in time polynomial (O(n^4)) in the dimension n of the state space. To directly tackle the intractability of solving general POMDPs, we leverage the assumption that beliefs are Gaussian distributions over the state space, approximate the belief update using an extended Kalman filter (EKF), and represent the value function by a function that is quadratic in the mean and linear in the variance of the belief. Our approach iterates towards a linear control policy over the state space that is locally-optimal with respect to a user defined cost function, and is approximately valid in the vicinity of a nominal trajectory through belief space. We demonstrate the scalability and potential of our approach on problems inspired by robot navigation under uncertainty for state spaces of up to 128 dimensions.",
"Many methods for planning under uncertainty operate in the belief space, i.e., the set of probability distributions over states. Although the problem is computationally hard, recent advances have shown that belief-space planning is becoming practical for many medium size problems. Some of the most successful methods utilize sampling and often rely on distances between beliefs to partially guide the search process. This paper deals with the question of what is a suitable distance function for belief space planning, which despite its importance remains unanswered. This work indicates that the rarely used Wasserstein distance (also known as Earth Mover’s Distance, EMD) is a more suitable metric than the commonly used L1 and Kullback–Leibler (KL) for belief-space planning. Simulation results on Non-Observable Markov Decision Problems, i.e., the simplest class of belief-space planning, indicate that as the problem becomes more complex, the differences in the effectiveness of different distance functions become quite prominent. In fact, in state spaces with more than 4 dimensions, by just replacing the L1 or KL distance with EMD, the problems go from virtually unsolvable to solvable within a reasonable time frame. Furthermore, preliminary results on Partially Observable Markov Decision Processes indicate that point-based solvers with EMD use a smaller number of samples to generate policies with similar qualities, compared to those with L1 and KL. This paper also shows that EMD carries the Lipschitz continuity of the state’s cost function to Lipschitz continuity of the expected cost of the beliefs. Such a continuity property is often critical for convergence to optimal solutions.",
"In this paper we address the problem of motion planning in the presence of state uncertainty, also known as planning in belief space. The work is motivated by planning domains involving nontrivial dynamics, spatially varying measurement properties, and obstacle constraints. To make the problem tractable, we restrict the motion plan to a nominal trajectory stabilized with a linear estimator and controller. This allows us to predict distributions over future states given a candidate nominal trajectory. Using these distributions to ensure a bounded probability of collision, the algorithm incrementally constructs a graph of trajectories through state space, while efficiently searching over candidate paths through the graph at each iteration. This process results in a search tree in belief space that provably converges to the optimal path. We analyze the algorithm theoretically and also provide simulation results demonstrating its utility for balancing information gathering to reduce uncertainty and finding low cost paths."
]
} |
1510.07471 | 1868214737 | The target of @math -armed bandit problem is to find the global maximum of an unknown stochastic function @math , given a finite budget of @math evaluations. Recently, @math -armed bandits have been widely used in many situations. Many of these applications need to deal with large-scale data sets. To deal with these large-scale data sets, we study a distributed setting of @math -armed bandits, where @math players collaborate to find the maximum of the unknown function. We develop a novel anytime distributed @math -armed bandit algorithm. Compared with prior work on @math -armed bandits, our algorithm uses a quite different searching strategy so as to fit distributed learning scenarios. Our theoretical analysis shows that our distributed algorithm is @math times faster than the classical single-player algorithm. Moreover, the number of communication rounds of our algorithm is only logarithmic in @math . The numerical results show that our method can make effective use of every players to minimize the loss. Thus, our distributed approach is attractive and useful. | The @math -armed bandit problem has recently been intensively studied by many researchers. The Zooming algorithm was proposed for solving @math -armed bandits in which the payoff function satisfies a Lipschitz condition. Subsequently, a tree-based optimization method called HOO was presented. In contrast with the Zooming algorithm, HOO is an anytime algorithm and is able to deal with higher-order smoothness of the objective function. HOO constructs a covering tree to explore the arm space. This structure is also used in many other @math -armed bandit algorithms, such as the StoSOO algorithm @cite_15 and the HCT algorithm @cite_10 . However, none of these methods is applicable in the distributed framework.
"cite_N": [
"@cite_15",
"@cite_10"
],
"mid": [
"57706852",
"2952514467"
],
"abstract": [
"We study the problem of global maximization of a function f given a finite number of evaluations perturbed by noise. We consider a very weak assumption on the function, namely that it is locally smooth (in some precise sense) with respect to some semi-metric, around one of its global maxima. Compared to previous works on bandits in general spaces (, 2008; , 2011a) our algorithm does not require the knowledge of this semi-metric. Our algorithm, StoSOO, follows an optimistic strategy to iteratively construct upper confidence bounds over the hierarchical partitions of the function domain to decide which point to sample next. A finite-time analysis of StoSOO shows that it performs almost as well as the best specifically-tuned algorithms even though the local smoothness of the function is not known.",
"In this paper we consider the problem of online stochastic optimization of a locally smooth function under bandit feedback. We introduce the high-confidence tree (HCT) algorithm, a novel any-time @math -armed bandit algorithm, and derive regret bounds matching the performance of existing state-of-the-art in terms of dependency on number of steps and smoothness factor. The main advantage of HCT is that it handles the challenging case of correlated rewards, whereas existing methods require that the reward-generating process of each arm is an identically and independent distributed (iid) random process. HCT also improves on the state-of-the-art in terms of its memory requirement as well as requiring a weaker smoothness assumption on the mean-reward function in compare to the previous anytime algorithms. Finally, we discuss how HCT can be applied to the problem of policy search in reinforcement learning and we report preliminary empirical results."
]
} |
1510.07471 | 1868214737 | The target of @math -armed bandit problem is to find the global maximum of an unknown stochastic function @math , given a finite budget of @math evaluations. Recently, @math -armed bandits have been widely used in many situations. Many of these applications need to deal with large-scale data sets. To deal with these large-scale data sets, we study a distributed setting of @math -armed bandits, where @math players collaborate to find the maximum of the unknown function. We develop a novel anytime distributed @math -armed bandit algorithm. Compared with prior work on @math -armed bandits, our algorithm uses a quite different searching strategy so as to fit distributed learning scenarios. Our theoretical analysis shows that our distributed algorithm is @math times faster than the classical single-player algorithm. Moreover, the number of communication rounds of our algorithm is only logarithmic in @math . The numerical results show that our method can make effective use of every players to minimize the loss. Thus, our distributed approach is attractive and useful. | On the other hand, distributed stochastic optimization @cite_4 @cite_1 @cite_9 and distributed PAC models @cite_17 @cite_14 @cite_12 have also been intensively studied. Among these works, distributed algorithms for multi-armed bandits were proposed: two kinds of distributed algorithms for multi-armed bandits were studied, together with the tradeoff between learning performance and communication cost. Inspired by this idea, we propose a distributed algorithm for the @math -armed bandit problem. In contrast to that study, in which the arm space is finite and discrete, our method is the first to address the distributed @math -armed bandit problem in which the arm space can be any measurable space.
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_9",
"@cite_1",
"@cite_12",
"@cite_17"
],
"mid": [
"",
"2952033860",
"2949469249",
"2130062883",
"",
"2151390636"
],
"abstract": [
"",
"We analyze the convergence of gradient-based optimization algorithms that base their updates on delayed stochastic gradient information. The main application of our results is to the development of gradient-based distributed optimization algorithms where a master node performs parameter updates while worker nodes compute stochastic gradients based on local information in parallel, which may give rise to delays due to asynchrony. We take motivation from statistical problems where the size of the data is so large that it cannot fit on one computer; with the advent of huge datasets in biology, astronomy, and the internet, such problems are now common. Our main contribution is to show that for smooth stochastic problems, the delays are asymptotically negligible and we can achieve order-optimal convergence results. In application to distributed optimization, we develop procedures that overcome communication bottlenecks and synchronization requirements. We show @math -node architectures whose optimization error in stochastic problems---in spite of asynchronous delays---scales asymptotically as @math after @math iterations. This rate is known to be optimal for a distributed system with @math nodes even in the absence of delays. We additionally complement our theoretical results with numerical experiments on a statistical machine learning task.",
"We study exploration in Multi-Armed Bandits in a setting where @math players collaborate in order to identify an @math -optimal arm. Our motivation comes from recent employment of bandit algorithms in computationally intensive, large-scale applications. Our results demonstrate a non-trivial tradeoff between the number of arm pulls required by each of the players, and the amount of communication between them. In particular, our main result shows that by allowing the @math players to communicate only once, they are able to learn @math times faster than a single player. That is, distributing learning to @math players gives rise to a factor @math parallel speed-up. We complement this result with a lower bound showing this is in general the best possible. On the other extreme, we present an algorithm that achieves the ideal factor @math speed-up in learning performance, with communication only logarithmic in @math .",
"Online prediction methods are typically presented as serial algorithms running on a single processor. However, in the age of web-scale prediction problems, it is increasingly common to encounter situations where a single processor cannot keep up with the high rate at which inputs arrive. In this work, we present the distributed mini-batch algorithm, a method of converting many serial gradient-based online prediction algorithms into distributed algorithms. We prove a regret bound for this method that is asymptotically optimal for smooth convex loss functions and stochastic inputs. Moreover, our analysis explicitly takes into account communication latencies between nodes in the distributed environment. We show how our method can be used to solve the closely-related distributed stochastic optimization problem, achieving an asymptotically linear speed-up over multiple processors. Finally, we demonstrate the merits of our approach on a web-scale online prediction problem.",
"",
"We consider the problem of learning classifiers for labeled data that has been distributed across several nodes. Our goal is to find a single classifier, with small approximation error, across all datasets while minimizing the communication between nodes. This setting models real-world communication bottlenecks in the processing of massive distributed datasets. We present several very general sampling-based solutions as well as some two-way protocols which have a provable exponential speed-up over any one-way protocol. We focus on core problems for noiseless data distributed across two or more nodes. The techniques we introduce are reminiscent of active learning, but rather than actively probing labels, nodes actively communicate with each other, each node simultaneously learning the important data from another node."
]
} |
1510.07408 | 2236528742 | Recent developments have brought the possibility of achieving scalable quantum networks and quantum devices closer. From the computational point of view these emerging technologies become relevant when they are no longer classically simulatable. Hence a pressing challenge is the construction of practical methods to verify the correctness of the outcome produced by universal or non-universal quantum devices. A promising approach that has been extensively explored is the scheme of verification via encryption through blind quantum computation. We present here a new construction that simplifies the required resources for any such verifiable protocol. We obtain an overhead that is linear in the size of the input (computation), while the security parameter remains independent of the size of the computation and can be made exponentially small (with a small extra cost). Furthermore our construction is generic and could be applied to any universal or non-universal scheme with a given underlying graph. | There have been a number of papers on verification addressing different aspects. Without aiming to give a complete list, we briefly describe some related works here. Aharonov, Ben-Or and Eban @cite_17 provided the first verification protocol. It requires only a linear overhead in the size of the computation, but it also requires a verifier with nontrivial quantum abilities; in particular, the verifier must be able to prepare entangled states whose size depends on the security parameter.
"cite_N": [
"@cite_17"
],
"mid": [
"2145860679"
],
"abstract": [
"The widely held belief that BQP strictly contains BPP raises fundamental questions: Upcoming generations of quantum computers might already be too large to be simulated classically. Is it possible to experimentally test that these systems perform as they should, if we cannot efficiently compute predictions for their behavior? Vazirani has asked [Vaz07]: If computing predictions for Quantum Mechanics requires exponential resources, is Quantum Mechanics a falsifiable theory? In cryptographic settings, an untrusted future company wants to sell a quantum computer or perform a delegated quantum computation. Can the customer be convinced of correctness without the ability to compare results to predictions? To provide answers to these questions, we define Quantum Prover Interactive Proofs (QPIP). Whereas in standard Interactive Proofs [GMR85] the prover is computationally unbounded, here our prover is in BQP, representing a quantum computer. The verifier models our current computational capabilities: it is a BPP machine, with access to few qubits. Our main theorem can be roughly stated as: ”Any language in BQP has a QPIP, and moreover, a fault tolerant one”. We provide two proofs. The simpler one uses a new (possibly of independent interest) quantum authentication scheme (QAS) based on random Clifford elements. This QPIP, however, is not fault tolerant."
]
} |
1510.07408 | 2236528742 | Recent developments have brought the possibility of achieving scalable quantum networks and quantum devices closer. From the computational point of view these emerging technologies become relevant when they are no longer classically simulatable. Hence a pressing challenge is the construction of practical methods to verify the correctness of the outcome produced by universal or non-universal quantum devices. A promising approach that has been extensively explored is the scheme of verification via encryption through blind quantum computation. We present here a new construction that simplifies the required resources for any such verifiable protocol. We obtain an overhead that is linear in the size of the input (computation), while the security parameter remains independent of the size of the computation and can be made exponentially small (with a small extra cost). Furthermore our construction is generic and could be applied to any universal or non-universal scheme with a given underlying graph. | Following another approach, based on measurement-based quantum computation, Fitzsimons and Kashefi @cite_23 obtained the scheme that is optimal from the point of view of the verifier's capability. However, the overhead of the full scheme becomes quadratic. Recently, a solution to this issue was proposed in @cite_16 by combining the above two protocols, @cite_23 and @cite_17 , to construct a hybrid scheme. Before our work, this was the only verification protocol that requires a linear number of qubits while at the same time requiring of the verifier only the minimal quantum capability of preparing single quantum systems. However, that protocol requires the preparation of qudits (rather than qubits), whose dimension is dictated by the desired level of security. Moreover, the required resource is still constructed from the dotted-complete graph state, albeit of small constant size.
Hence further investigation is required for establishing the experimental simplicity of the two schemes, ours and the one in @cite_16 . | {
"cite_N": [
"@cite_16",
"@cite_23",
"@cite_17"
],
"mid": [
"1140067046",
"58322429",
"2145860679"
],
"abstract": [
"In the absence of any efficient classical schemes for verifying a universal quantum computer, the importance of limiting the required quantum resources for this task has been highlighted recently. Currently, most of efficient quantum verification protocols are based on cryptographic techniques where an almost classical verifier executes her desired encrypted quantum computation remotely on an untrusted quantum prover. In this work we present a new protocol for quantum verification by incorporating existing techniques in a non-standard composition to reduce the required quantum communications between the verifier and the prover.",
"Blind quantum computing (BQC) allows a client to have a server carry out a quantum computation for them such that the client's input, output, and computation remain private. A desirable property for any BQC protocol is verification, whereby the client can verify with high probability whether the server has followed the instructions of the protocol or if there has been some deviation resulting in a corrupted output state. A verifiable BQC protocol can be viewed as an interactive proof system leading to consequences for complexity theory. We previously proposed [A. Broadbent, J. Fitzsimons, and E. Kashefi, in Proceedings of the 50th Annual Symposium on Foundations of Computer Science, Atlanta, 2009 (IEEE, Piscataway, 2009), p. 517] a universal and unconditionally secure BQC scheme where the client only needs to be able to prepare single qubits in separable states randomly chosen from a finite set and send them to the server, who has the balance of the required quantum computational resources. In this paper we extend that protocol with additional functionality allowing blind computational basis measurements, which we use to construct another verifiable BQC protocol based on a different class of resource states. We rigorously prove that the probability of failing to detect an incorrect output is exponentially small in a security parameter, while resource overhead remains polynomial in this parameter. This resource state allows entangling gates to be performed between arbitrary pairs of logical qubits with only constant overhead. This is a significant improvement on the original scheme, which required that all computations to be performed must first be put into a nearest-neighbor form, incurring linear overhead in the number of qubits. Such an improvement has important consequences for efficiency and fault-tolerance thresholds.",
"The widely held belief that BQP strictly contains BPP raises fundamental questions: Upcoming generations of quantum computers might already be too large to be simulated classically. Is it possible to experimentally test that these systems perform as they should, if we cannot efficiently compute predictions for their behavior? Vazirani has asked [Vaz07]: If computing predictions for Quantum Mechanics requires exponential resources, is Quantum Mechanics a falsifiable theory? In cryptographic settings, an untrusted future company wants to sell a quantum computer or perform a delegated quantum computation. Can the customer be convinced of correctness without the ability to compare results to predictions? To provide answers to these questions, we define Quantum Prover Interactive Proofs (QPIP). Whereas in standard Interactive Proofs [GMR85] the prover is computationally unbounded, here our prover is in BQP, representing a quantum computer. The verifier models our current computational capabilities: it is a BPP machine, with access to few qubits. Our main theorem can be roughly stated as: “Any language in BQP has a QPIP, and moreover, a fault tolerant one”. We provide two proofs. The simpler one uses a new (possibly of independent interest) quantum authentication scheme (QAS) based on random Clifford elements. This QPIP however, is not fault tolerant."
]
} |
1510.07408 | 2236528742 | Recent developments have brought the possibility of achieving scalable quantum networks and quantum devices closer. From the computational point of view these emerging technologies become relevant when they are no longer classically simulatable. Hence a pressing challenge is the construction of practical methods to verify the correctness of the outcome produced by universal or non-universal quantum devices. A promising approach that has been extensively explored is the scheme of verification via encryption through blind quantum computation. We present here a new construction that simplifies the required resources for any such verifiable protocol. We obtain an overhead that is linear in the size of the input (computation), while the security parameter remains independent of the size of the computation and can be made exponentially small (with a small extra cost). Furthermore our construction is generic and could be applied to any universal or non-universal scheme with a given underlying graph. | The first experimental implementation of a simplified verification protocol was presented in @cite_14 where a repetition technique was explored as well. Other experiments on verifiable protocols include @cite_6 and an experiment based on the protocol in @cite_2 . However, none of these works are applicable to a full universal scheme like ours. | {
"cite_N": [
"@cite_14",
"@cite_6",
"@cite_2"
],
"mid": [
"2011783043",
"2157305391",
"2078180782"
],
"abstract": [
"Can Alice verify the result of a quantum computation that she has delegated to Bob without using a quantum computer? Now she can. A protocol for testing a quantum computer using minimum quantum resources has been proposed and demonstrated.",
"Quantum communication schemes rely on cryptographically secure quantum resources to distribute private information. Here, the authors show that graph states—nonlocal states based on networks of qubits—can be exploited to implement quantum secret sharing of quantum and classical information.",
"Future quantum information networks will likely consist of quantum and classical agents, who have the ability to communicate in a variety of ways with trusted and untrusted parties and securely delegate computational tasks to untrusted large-scale quantum computing servers. Multipartite quantum entanglement is a fundamental resource for such a network and hence it is imperative to study the possibility of verifying a multipartite entanglement source in a way that is efficient and provides strong guarantees even in the presence of multiple dishonest parties. In this work, we show how an agent of a quantum network can perform a distributed verification of a multipartite entangled source with minimal resources, which is, nevertheless, resistant against any number of dishonest parties. Moreover, we provide a tight tradeoff between the level of security and the distance between the state produced by the source and the ideal maximally entangled state. Last, by adding the resource of a trusted common random source, we can further provide security guarantees for all honest parties in the quantum network simultaneously."
]
} |
1510.07408 | 2236528742 | Recent developments have brought the possibility of achieving scalable quantum networks and quantum devices closer. From the computational point of view these emerging technologies become relevant when they are no longer classically simulatable. Hence a pressing challenge is the construction of practical methods to verify the correctness of the outcome produced by universal or non-universal quantum devices. A promising approach that has been extensively explored is the scheme of verification via encryption through blind quantum computation. We present here a new construction that simplifies the required resources for any such verifiable protocol. We obtain an overhead that is linear in the size of the input (computation), while the security parameter remains independent of the size of the computation and can be made exponentially small (with a small extra cost). Furthermore our construction is generic and could be applied to any universal or non-universal scheme with a given underlying graph. | On the other hand, to achieve a classical verifier, new techniques are proposed at the cost of increasing the overall overhead of the protocol dramatically @cite_9 or increasing the number of provers @cite_15 . Other device-independent protocols @cite_8 @cite_20 used a single universal quantum prover and an untrusted qubit measuring device, and while the complexity improved, it is still far from experimentally realisable. | {
"cite_N": [
"@cite_8",
"@cite_9",
"@cite_20",
"@cite_15"
],
"mid": [
"2128966718",
"2048118641",
"139186958",
"2962840832"
],
"abstract": [
"Recent advances in theoretical and experimental quantum computing bring us closer to scalable quantum computing devices. This makes the need for protocols that verify the correct functionality of quantum operations timely and has led to the field of quantum verification. In this paper we address key challenges to make quantum verification protocols applicable to experimental implementations. We prove the robustness of the single server verifiable universal blind quantum computing protocol of Fitzsimons and Kashefi (2012 arXiv:1203.5217) in the most general scenario. This includes the case where the purification of the deviated input state is in the hands of an adversarial server. The proved robustness property allows the composition of this protocol with a device-independent state tomography protocol that we give, which is based on the rigidity of CHSH games as proposed by (2013 Nature 496 456–60). The resulting composite protocol has lower round complexity for the verification of entangled quantum servers with a classical verifier and, as we show, can be made fault tolerant.",
"A scheme is described that enables characterization and classical command of large quantum systems; it provides a test of whether a claimed quantum computer is truly quantum, and also advances towards a goal of quantum cryptography, namely the use of untrusted devices to establish a shared random key, with security based on the validity of quantum physics.",
"As progress on experimental quantum processors continues to advance, the problem of verifying the correct operation of such devices is becoming a pressing concern. The recent discovery of protocols for verifying computation performed by entangled but non-communicating quantum processors holds the promise of certifying the correctness of arbitrary quantum computations in a fully device-independent manner. Unfortunately, all known schemes have prohibitive overhead, with resources scaling as extremely high degree polynomials in the number of gates constituting the computation. Here we present a novel approach based on a combination of verified blind quantum computation and Bell state self-testing. This approach has dramatically reduced overhead, with resources scaling as only O(m^4 ln m) in the number of gates.",
"Using the measurement-based quantum computation model, we construct interactive proofs with non-communicating quantum provers and a classical verifier. Our construction gives interactive proofs for all languages in BQP with a polynomial number of quantum provers, each of which, in the honest case, performs only a single measurement. Our techniques use self-tested graph states which allow us to test the provers for honesty, establishing that they hold onto a particular graph state and measure it in specified bases. In this extended abstract we give an overview of the construction and proofs."
]
} |
1510.07408 | 2236528742 | Recent developments have brought the possibility of achieving scalable quantum networks and quantum devices closer. From the computational point of view these emerging technologies become relevant when they are no longer classically simulatable. Hence a pressing challenge is the construction of practical methods to verify the correctness of the outcome produced by universal or non-universal quantum devices. A promising approach that has been extensively explored is the scheme of verification via encryption through blind quantum computation. We present here a new construction that simplifies the required resources for any such verifiable protocol. We obtain an overhead that is linear in the size of the input (computation), while the security parameter remains independent of the size of the computation and can be made exponentially small (with a small extra cost). Furthermore our construction is generic and could be applied to any universal or non-universal scheme with a given underlying graph. | The VBQC protocol could be generally viewed as a prepare-and-send scheme (using the terminology from QKD). Equivalent schemes based on measurement-only could also be obtained @cite_0 @cite_13 . In this scenario the prover prepares a universal resource and sends it qubit-by-qubit to the verifier that performs different measurements in order to complete a quantum computation. These protocols are referred to as online protocols (in contrast to the offline protocols mentioned above) since the quantum operations of the verifier occur when they know what they want to compute. The online scheme can also achieve verification either by creating traps @cite_0 , or by measuring the stabiliser of the resource state @cite_13 . These protocols could be improved using our techniques as we will comment in Section . | {
"cite_N": [
"@cite_0",
"@cite_13"
],
"mid": [
"2317428072",
"365259755"
],
"abstract": [
"Blind quantum computing is a new secure quantum computing protocol where a client who does not have any sophisticated quantum technology can delegate her quantum computing to a server without leaking any privacy. It is known that a client who has only a measurement device can perform blind quantum computing [T. Morimae and K. Fujii, Phys. Rev. A , 050301(R) (2013)]. It has been an open problem whether the protocol can enjoy the verification, i.e., the ability of client to check the correctness of the computing. In this paper, we propose a protocol of verification for the measurement-only blind quantum computing.",
"We introduce a simple protocol for verifiable measurement-only blind quantum computing. Alice, a client, can perform only single-qubit measurements, whereas Bob, a server, can generate and store entangled many-qubit states. Bob generates copies of a graph state, which is a universal resource state for measurement-based quantum computing, and sends Alice each qubit of them one by one. Alice adaptively measures each qubit according to her program. If Bob is honest, he generates the correct graph state, and, therefore, Alice can obtain the correct computation result. Regarding the security, whatever Bob does, Bob cannot get any information about Alice's computation because of the no-signaling principle. Furthermore, malicious Bob does not necessarily send the copies of the correct graph state, but Alice can check the correctness of Bob's state by directly verifying the stabilizers of some copies."
]
} |
1510.07408 | 2236528742 | Recent developments have brought the possibility of achieving scalable quantum networks and quantum devices closer. From the computational point of view these emerging technologies become relevant when they are no longer classically simulatable. Hence a pressing challenge is the construction of practical methods to verify the correctness of the outcome produced by universal or non-universal quantum devices. A promising approach that has been extensively explored is the scheme of verification via encryption through blind quantum computation. We present here a new construction that simplifies the required resources for any such verifiable protocol. We obtain an overhead that is linear in the size of the input (computation), while the security parameter remains independent of the size of the computation and can be made exponentially small (with a small extra cost). Furthermore our construction is generic and could be applied to any universal or non-universal scheme with a given underlying graph. | Finally a composable definition of @cite_23 is given in @cite_5 , while a limited computational model (one-pure-qubit) is examined in @cite_3 . Due to the generic nature of our construction these results would be applicable to our protocol as well. | {
"cite_N": [
"@cite_5",
"@cite_3",
"@cite_23"
],
"mid": [
"2130618624",
"2603347384",
"58322429"
],
"abstract": [
"Delegating difficult computations to remote large computation facilities, with appropriate security guarantees, is a possible solution for the ever growing needs of personal computing power. For delegated computation protocols to be usable in a larger context – or simply to securely run two protocols in parallel – the security definitions need to be composable. Here, we define composable security for delegated quantum computation. We distinguish between protocols which provide only blindness – the computation is hidden from the server – and those that are also verifiable – the client can check that it has received the correct result. We show that the composable security definition capturing both these notions can be reduced to a combination of several distinct “trace distance type” criteria – which are, individually, non composable security definitions.",
"While building a universal quantum computer remains challenging, devices of restricted power such as the so-called one pure qubit model have attracted considerable attention. An important step in the construction of these limited quantum computational devices is the understanding of whether the verification of the computation within these models could be also performed in the restricted scheme. Encoding via blindness (a cryptographic protocol for delegated computing) has proven successful for the verification of universal quantum computation with a restricted verifier. In this paper, we present the adaptation of this approach to the one pure qubit model, and present the first feasible scheme for the verification of delegated one pure qubit model of quantum computing.",
"Blind quantum computing (BQC) allows a client to have a server carry out a quantum computation for them such that the client's input, output, and computation remain private. A desirable property for any BQC protocol is verification, whereby the client can verify with high probability whether the server has followed the instructions of the protocol or if there has been some deviation resulting in a corrupted output state. A verifiable BQC protocol can be viewed as an interactive proof system leading to consequences for complexity theory. We previously proposed [A. Broadbent, J. Fitzsimons, and E. Kashefi, in Proceedings of the 50th Annual Symposium on Foundations of Computer Science, Atlanta, 2009 (IEEE, Piscataway, 2009), p. 517] a universal and unconditionally secure BQC scheme where the client only needs to be able to prepare single qubits in separable states randomly chosen from a finite set and send them to the server, who has the balance of the required quantum computational resources. In this paper we extend that protocol with additional functionality allowing blind computational basis measurements, which we use to construct another verifiable BQC protocol based on a different class of resource states. We rigorously prove that the probability of failing to detect an incorrect output is exponentially small in a security parameter, while resource overhead remains polynomial in this parameter. This resource state allows entangling gates to be performed between arbitrary pairs of logical qubits with only constant overhead. This is a significant improvement on the original scheme, which required that all computations to be performed must first be put into a nearest-neighbor form, incurring linear overhead in the number of qubits. Such an improvement has important consequences for efficiency and fault-tolerance thresholds."
]
} |
1510.07293 | 2952504352 | We consider forkable regular expressions, which enrich regular expressions with a fork operator, to establish a formal basis for static and dynamic analysis of the communication behavior of concurrent programs. We define a novel compositional semantics for forkable expressions, establish their fundamental properties, and define derivatives for them as a basis for the generation of automata, for matching, and for language containment tests. Forkable expressions may give rise to non-regular languages, in general, but we identify sufficient conditions on expressions that guarantee finiteness of the automata construction via derivatives. | The latter connection is made precise by Garg and Ragunath @cite_6 , who study concurrent regular expressions (CRE), which are shuffle expressions extended with synchronous composition. They show that the class of CRE languages is equal to the class of Petri net languages. The proof requires the presence of synchronous composition. Forkable expressions do not support synchronous composition, but they are equivalent to , which are also defined by Garg and Ragunath and shown to be strictly less powerful than CREs. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2067131306"
],
"abstract": [
"Abstract We define algebraic systems called concurrent regular expressions which provide a modular description of languages of Petri nets. Concurrent regular expressions are extension of regular expressions with four operators - interleaving, interleaving closure, synchronous composition and renaming. This alternative characterization of Petri net languages gives us a flexible way of specifying concurrent systems. Concurrent regular expressions are modular and, hence, easier to use for specification. The proof of equivalence also provides a natural decomposition method for Petri nets."
]
} |
1510.07197 | 2950220747 | This paper presents a novel research problem on joint discovery of commonalities and differences between two individual documents (or document sets), called Comparative Document Analysis (CDA). Given any pair of documents from a document collection, CDA aims to automatically identify sets of quality phrases to summarize the commonalities of both documents and highlight the distinctions of each with respect to the other informatively and concisely. Our solution uses a general graph-based framework to derive novel measures on phrase semantic commonality and pairwise distinction , and guides the selection of sets of phrases by solving two joint optimization problems. We develop an iterative algorithm to integrate the maximization of phrase commonality or distinction measure with the learning of phrase-document semantic relevance in a mutually enhancing way. Experiments on text corpora from two different domains---scientific publications and news---demonstrate the effectiveness and robustness of the proposed method on comparing individual documents. Our case study on comparing news articles published at different dates shows the power of the proposed method on comparing document sets. | Multi-document summarization @cite_16 @cite_32 @cite_33 @cite_17 @cite_30 aims to generate a compressed summary to cover the consensus of information among the original documents. It is different from discovery as it focuses on providing a comprehensive view of the corpus (union instead of intersection). | {
"cite_N": [
"@cite_30",
"@cite_33",
"@cite_32",
"@cite_16",
"@cite_17"
],
"mid": [
"2128672521",
"2081174553",
"1975579663",
"1488425980",
"2054211469"
],
"abstract": [
"We present a multi-document summarizer, called MEAD, which generates summaries using cluster centroids produced by a topic detection and tracking system. We also describe two new techniques, based on sentence utility and subsumption, which we have applied to the evaluation of both single and multiple document summaries. Finally, we describe two user studies that test our models of multi-document summarization.",
"NeATS is a multi-document summarization system that attempts to extract relevant or interesting portions from a set of documents about some topic and present them in coherent order. NeATS is among the best performers in the large scale summarization evaluation DUC 2001.",
"We present an exploration of generative probabilistic models for multi-document summarization. Beginning with a simple word frequency based model (Nenkova and Vanderwende, 2005), we construct a sequence of models each injecting more structure into the representation of document set content and exhibiting ROUGE gains along the way. Our final model, HierSum, utilizes a hierarchical LDA-style model (, 2004) to represent content specificity as a hierarchy of topic vocabulary distributions. At the task of producing generic DUC-style summaries, HierSum yields state-of-the-art ROUGE performance and in pairwise user evaluation strongly outperforms (2007)'s state-of-the-art discriminative system. We also explore HierSum's capacity to produce multiple 'topical summaries' in order to facilitate content discovery and navigation.",
"Multi-document summarization has been an important problem in information retrieval. It aims to distill the most important information from a set of documents to generate a compressed summary. Given a sentence graph generated from a set of documents where vertices represent sentences and edges indicate that the corresponding vertices are similar, the extracted summary can be described using the idea of graph domination. In this paper, we propose a new principled and versatile framework for multi-document summarization using the minimum dominating set. We show that four well-known summarization tasks including generic, query-focused, update, and comparative summarization can be modeled as different variations derived from the proposed framework. Approximation algorithms for performing summarization are also proposed and empirical experiments are conducted to demonstrate the effectiveness of our proposed framework.",
"A sentence extract summary of a document is a subset of the document's sentences that contains the main ideas in the document. We present an approach to generating such summaries, a hidden Markov model that judges the likelihood that each sentence should be contained in the summary. We compare the results of this method with summaries generated by humans, showing that we obtain significantly higher agreement than do earlier methods."
]
} |
1510.07197 | 2950220747 | This paper presents a novel research problem on joint discovery of commonalities and differences between two individual documents (or document sets), called Comparative Document Analysis (CDA). Given any pair of documents from a document collection, CDA aims to automatically identify sets of quality phrases to summarize the commonalities of both documents and highlight the distinctions of each with respect to the other informatively and concisely. Our solution uses a general graph-based framework to derive novel measures on phrase semantic commonality and pairwise distinction , and guides the selection of sets of phrases by solving two joint optimization problems. We develop an iterative algorithm to integrate the maximization of phrase commonality or distinction measure with the learning of phrase-document semantic relevance in a mutually enhancing way. Experiments on text corpora from two different domains---scientific publications and news---demonstrate the effectiveness and robustness of the proposed method on comparing individual documents. Our case study on comparing news articles published at different dates shows the power of the proposed method on comparing document sets. | Comparative document summarization @cite_7 @cite_15 @cite_2 @cite_10 @cite_23 @cite_24 focuses on the aspect---it summarizes the differences between comparable document sets by extracting the discriminative sentences from each set. Existing work formalizes the problem of discriminative sentence selection into different forms, including integer linear programming @cite_2 , multivariate normal model estimation @cite_10 , and group-related centrality estimation @cite_28 . Going beyond selecting discriminative sentences from a document set, our problem aims to select quality and concise phrases to highlight not only differences but also commonalities between two documents. | {
"cite_N": [
"@cite_7",
"@cite_28",
"@cite_24",
"@cite_23",
"@cite_2",
"@cite_15",
"@cite_10"
],
"mid": [
"2402992320",
"2951791621",
"2139928100",
"2152209321",
"",
"1896326898",
"2021579266"
],
"abstract": [
"Patent document retrieval, as a recall-orientated search task, does not allow missing relevant patent documents due to the great commercial value of patents and significant costs of processing a patent application or patent infringement case. Thus, it is important to retrieve all possible relevant documents rather than only a small subset of patents from the top ranked results. However, patents are often lengthy and rich in technical terms, and it often requires enormous human efforts to compare a given document with retrieved results. In this paper, we formulate the problem of comparing patent documents as a comparative summarization problem, and explore automatic strategies that generate comparative summaries to assist patent analysts in quickly reviewing any given patent document pairs. To this end, we present a novel approach, named PatentCom, which first extracts discriminative terms from each patent document, and then connects the dots on a term co-occurrence graph. In this way, we are able to comprehensively extract the gists of the two patent documents being compared, and meanwhile highlight their relationship in terms of commonalities and differences. Extensive quantitative analysis and case studies on real world patent documents demonstrate the effectiveness of our proposed approach.",
"We describe a new method for summarizing similarities and differences in a pair of related documents using a graph representation for text. Concepts denoted by words, phrases, and proper names in the document are represented positionally as nodes in the graph along with edges corresponding to semantic relations between items. Given a perspective in terms of which the pair of documents is to be summarized, the algorithm first uses a spreading activation technique to discover, in each document, nodes semantically related to the topic. The activated graphs of each document are then matched to yield a graph corresponding to similarities and differences between the pair, which is rendered in natural language. An evaluation of these techniques has been carried out.",
"Comparative News Summarization aims to highlight the commonalities and differences between two comparable news topics. In this study, we propose a novel approach to generating comparative news summaries. We formulate the task as an optimization problem of selecting proper sentences to maximize the comparativeness within the summary and the representativeness to both news topics. We consider semantic-related cross-topic concept pairs as comparative evidences, and consider topic-related concepts as representative evidences. The optimization problem is addressed by using a linear programming model. The experimental results demonstrate the effectiveness of our proposed model.",
"The distribution difference among multiple domains has been exploited for cross-domain text categorization in recent years. Along this line, we show two new observations in this study. First, the data distribution difference is often due to the fact that different domains use different index words to express the same concept. Second, the association between the conceptual feature and the document class can be stable across domains. These two observations actually indicate the distinction and commonality across domains. Inspired by the above observations, we propose a generative statistical model, named Collaborative Dual-PLSA (CD-PLSA), to simultaneously capture both the domain distinction and commonality among multiple domains. Different from Probabilistic Latent Semantic Analysis (PLSA) with only one latent variable, the proposed model has two latent factors y and z, corresponding to word concept and document class, respectively. The shared commonality intertwines with the distinctions over multiple domains, and is also used as the bridge for knowledge transformation. An Expectation Maximization (EM) algorithm is developed to solve the CD-PLSA model, and further its distributed version is exploited to avoid uploading all the raw data to a centralized location and help to mitigate privacy concerns. After the training phase with all the data from multiple domains we propose to refine the immediate outputs using only the corresponding local data. In summary, we propose a two-phase method for cross-domain text classification, the first phase for collaborative training with all the data, and the second step for local refinement. Finally, we conduct extensive experiments over hundreds of classification tasks with multiple source domains and multiple target domains to validate the superiority of the proposed method over existing state-of-the-art methods of supervised and transfer learning. 
It is noted to mention that as shown by the experimental results CD-PLSA for the collaborative training is more tolerant of distribution differences, and the local refinement also gains significant improvement in terms of classification accuracy.",
"",
"We present a general framework for comparing multiple groups of documents. A bipartite graph model is proposed where document groups are represented as one node set and the comparison criteria are represented as the other node set. Using this model, we present basic algorithms to extract insights into similarities and differences among the document groups. Finally, we demonstrate the versatility of our framework through an analysis of NSF funding programs for basic research.",
"Given a collection of document groups, a natural question is to identify the differences among these groups. Although traditional document summarization techniques can summarize the content of the document groups one by one, there exists a great necessity to generate a summary of the differences among the document groups. In this article, we study a novel problem of summarizing the differences between document groups. A discriminative sentence selection method is proposed to extract the most discriminative sentences that represent the specific characteristics of each document group. Experiments and case studies on real-world data sets demonstrate the effectiveness of our proposed method."
]
} |
1510.07197 | 2950220747 | This paper presents a novel research problem on joint discovery of commonalities and differences between two individual documents (or document sets), called Comparative Document Analysis (CDA). Given any pair of documents from a document collection, CDA aims to automatically identify sets of quality phrases to summarize the commonalities of both documents and highlight the distinctions of each with respect to the other informatively and concisely. Our solution uses a general graph-based framework to derive novel measures on phrase semantic commonality and pairwise distinction , and guides the selection of sets of phrases by solving two joint optimization problems. We develop an iterative algorithm to integrate the maximization of phrase commonality or distinction measure with the learning of phrase-document semantic relevance in a mutually enhancing way. Experiments on text corpora from two different domains---scientific publications and news---demonstrate the effectiveness and robustness of the proposed method on comparing individual documents. Our case study on comparing news articles published at different dates shows the power of the proposed method on comparing document sets. | In particular, our work is related to @cite_28 since both try to derive common and distinct terms for a pair of related documents, but their work focuses on finding topic-related terms and selecting sentences based on such terms. They assume terms appearing in both documents as the common ones, and treat the remaining terms as distinct ones. As shown in our experiments, this method (labeled as WordMatch) suffers from low recall in finding common terms and low precision in finding distinct terms since it ignores semantic common phrases and pairwise distinct phrases. Similarly, Zhang et al. @cite_7 consider both common and distinct aspects in generating comparative summaries for patent documents. 
They derive a term co-occurrence tree which can be used to extract sentences for summarization. However, they use all shared noun phrases between two documents as common terms, and apply feature selection techniques to find distinct terms. This method (see PatentCom in Sec. ) demonstrates poor performance in finding common terms because it ignores semantic common phrases; although this method performs well in terms of the recall for distinct phrases, it achieves low precision, since it fails to model phrase generality and produces many overly-specific phrases. | {
"cite_N": [
"@cite_28",
"@cite_7"
],
"mid": [
"2951791621",
"2402992320"
],
"abstract": [
"We describe a new method for summarizing similarities and differences in a pair of related documents using a graph representation for text. Concepts denoted by words, phrases, and proper names in the document are represented positionally as nodes in the graph along with edges corresponding to semantic relations between items. Given a perspective in terms of which the pair of documents is to be summarized, the algorithm first uses a spreading activation technique to discover, in each document, nodes semantically related to the topic. The activated graphs of each document are then matched to yield a graph corresponding to similarities and differences between the pair, which is rendered in natural language. An evaluation of these techniques has been carried out.",
"Patent document retrieval, as a recall-orientated search task, does not allow missing relevant patent documents due to the great commercial value of patents and significant costs of processing a patent application or patent infringement case. Thus, it is important to retrieve all possible relevant documents rather than only a small subset of patents from the top ranked results. However, patents are often lengthy and rich in technical terms, and it often requires enormous human efforts to compare a given document with retrieved results. In this paper, we formulate the problem of comparing patent documents as a comparative summarization problem, and explore automatic strategies that generate comparative summaries to assist patent analysts in quickly reviewing any given patent document pairs. To this end, we present a novel approach, named PatentCom, which first extracts discriminative terms from each patent document, and then connects the dots on a term co-occurrence graph. In this way, we are able to comprehensively extract the gists of the two patent documents being compared, and meanwhile highlight their relationship in terms of commonalities and differences. Extensive quantitative analysis and case studies on real world patent documents demonstrate the effectiveness of our proposed approach."
]
} |
1510.07197 | 2950220747 | This paper presents a novel research problem on joint discovery of commonalities and differences between two individual documents (or document sets), called Comparative Document Analysis (CDA). Given any pair of documents from a document collection, CDA aims to automatically identify sets of quality phrases to summarize the commonalities of both documents and highlight the distinctions of each with respect to the other informatively and concisely. Our solution uses a general graph-based framework to derive novel measures on phrase semantic commonality and pairwise distinction , and guides the selection of sets of phrases by solving two joint optimization problems. We develop an iterative algorithm to integrate the maximization of phrase commonality or distinction measure with the learning of phrase-document semantic relevance in a mutually enhancing way. Experiments on text corpora from two different domains---scientific publications and news---demonstrate the effectiveness and robustness of the proposed method on comparing individual documents. Our case study on comparing news articles published at different dates shows the power of the proposed method on comparing document sets. | Another line of related work, referred to as comparative text mining @cite_6 , focuses on modeling latent comparison aspects and discovering common and collection-specific word clusters for each aspect. They adopt topic models @cite_5 @cite_6 @cite_20 and matrix factorization @cite_21 to present the common and distinct information as multinomial distributions over words. While latent comparison aspects can help enrich the comparison results, these methods still adopt the bag-of-words representation, which is often criticized for its unsatisfying readability @cite_1 . 
Furthermore, it is difficult to apply statistical topic modeling in comparative document analysis, as the data statistics between two documents are insufficient, in particular when the documents are about emergent topics (e.g., Academia, News). Finally, our work is also related to comparing reviews in opinion mining @cite_3 @cite_19 @cite_8 @cite_22 and contrastive summarization for entities @cite_27 ---they also aim to find similarities and differences between two objects. However, these works are restricted to sentiment analysis. | {
"cite_N": [
"@cite_22",
"@cite_8",
"@cite_21",
"@cite_1",
"@cite_6",
"@cite_3",
"@cite_19",
"@cite_27",
"@cite_5",
"@cite_20"
],
"mid": [
"1988931981",
"2157470625",
"2057990742",
"2113855231",
"2112247328",
"1995281780",
"1569041844",
"2018650704",
"",
"2157589241"
],
"abstract": [
"This paper studies the problem of identifying comparative sentences in text documents. The problem is related to but quite different from sentiment opinion sentence identification or classification. Sentiment classification studies the problem of classifying a document or a sentence based on the subjective opinion of the author. An important application area of sentiment opinion identification is business intelligence as a product manufacturer always wants to know consumers' opinions on its products. Comparisons on the other hand can be subjective or objective. Furthermore, a comparison is not concerned with an object in isolation. Instead, it compares the object with others. An example opinion sentence is \"the sound quality of CD player X is poor\". An example comparative sentence is \"the sound quality of CD player X is not as good as that of CD player Y\". Clearly, these two sentences give different information. Their language constructs are quite different too. Identifying comparative sentences is also useful in practice because direct comparisons are perhaps one of the most convincing ways of evaluation, which may even be more important than opinions on each individual object. This paper proposes to study the comparative sentence identification problem. It first categorizes comparative sentences into different types, and then presents a novel integrated pattern discovery and supervised learning approach to identifying comparative sentences from text documents. Experiment results using three types of documents, news articles, consumer reviews of products, and Internet forum postings, show a precision of 79 and recall of 81 . More detailed results are given in the paper.",
"This paper presents a study of a novel summarization problem called contrastive opinion summarization (COS). Given two sets of positively and negatively opinionated sentences which are often the output of an existing opinion summarizer, COS aims to extract comparable sentences from each set of opinions and generate a comparative summary containing a set of contrastive sentence pairs. We formally formulate the problem as an optimization problem and propose two general methods for generating a comparative summary using the framework, both of which rely on measuring the content similarity and contrastive similarity of two sentences. We study several strategies to compute these two similarities. We also create a test data set for evaluating such a novel summarization problem. Experiment results on this test set show that the proposed methods are effective for generating comparative summaries of contradictory opinions.",
"Understanding large-scale document collections in an efficient manner is an important problem. Usually, document data are associated with other information (e.g., an author's gender, age, and location) and their links to other entities (e.g., co-authorship and citation networks). For the analysis of such data, we often have to reveal common as well as discriminative characteristics of documents with respect to their associated information, e.g., male- vs. female-authored documents, old vs. new documents, etc. To address such needs, this paper presents a novel topic modeling method based on joint nonnegative matrix factorization, which simultaneously discovers common as well as discriminative topics given multiple document sets. Our approach is based on a block-coordinate descent framework and is capable of utilizing only the most representative, thus meaningful, keywords in each topic through a novel pseudo-deflation approach. We perform both quantitative and qualitative evaluations using synthetic as well as real-world document data sets such as research paper collections and nonprofit micro-finance data. We show our method has a great potential for providing in-depth analyses by clearly identifying common and discriminative topics among multiple document sets.",
"Multinomial distributions over words are frequently used to model topics in text collections. A common, major challenge in applying all such topic models to any text mining problem is to label a multinomial topic model accurately so that a user can interpret the discovered topic. So far, such labels have been generated manually in a subjective way. In this paper, we propose probabilistic approaches to automatically labeling multinomial topic models in an objective way. We cast this labeling problem as an optimization problem involving minimizing Kullback-Leibler divergence between word distributions and maximizing mutual information between a label and a topic model. Experiments with user study have been done on two text data sets with different genres.The results show that the proposed labeling methods are quite effective to generate labels that are meaningful and useful for interpreting the discovered topic models. Our methods are general and can be applied to labeling topics learned through all kinds of topic models such as PLSA, LDA, and their variations.",
"In this paper, we define and study a novel text mining problem, which we refer to as Comparative Text Mining (CTM). Given a set of comparable text collections, the task of comparative text mining is to discover any latent common themes across all collections as well as summarize the similarity and differences of these collections along each common theme. This general problem subsumes many interesting applications, including business intelligence and opinion summarization. We propose a generative probabilistic mixture model for comparative text mining. The model simultaneously performs cross-collection clustering and within-collection clustering, and can be applied to an arbitrary set of comparable text collections. The model can be estimated efficiently using the Expectation-Maximization (EM) algorithm. We evaluate the model on two different text data sets (i.e., a news article data set and a laptop review data set), and compare it with a baseline clustering method also based on a mixture model. Experiment results show that the model is quite effective in discovering the latent common themes across collections and performs significantly better than our baseline mixture model.",
"To facilitate direct comparisons between different products, we present an approach to constructing short and comparative summaries based on product reviews. In particular, the user can view automatically aligned pairs of snippets describing reviewers' opinions on different features (also selected automatically by our approach) for two selected products. We propose a submodular objective function that avoids redundancy, that is efficient to optimize, and that aligns the snippets into pairs. Snippets are chosen from product reviews and thus easy to obtain. In our experiments, we show that the method constructs qualitatively good summaries, and that it can be tuned via supervised learning.",
"This paper presents a two-stage approach to summarizing multiple contrastive viewpoints in opinionated text. In the first stage, we use an unsupervised probabilistic approach to model and extract multiple viewpoints in text. We experiment with a variety of lexical and syntactic features, yielding significant performance gains over bag-of-words feature sets. In the second stage, we introduce Comparative LexRank, a novel random walk formulation to score sentences and pairs of sentences from opposite viewpoints based on both their representativeness of the collection as well as their contrastiveness with each other. Experimental results show that the proposed approach can generate informative summaries of viewpoints in opinionated text.",
"Contrastive summarization is the problem of jointly generating summaries for two entities in order to highlight their differences. In this paper we present an investigation into contrastive summarization through an implementation and evaluation of a contrastive opinion summarizer in the consumer reviews domain.",
"",
"Web 2.0 technology has enabled more and more people to freely express their opinions on the Web, making the Web an extremely valuable source for mining user opinions about all kinds of topics. In this paper we study how to automatically integrate opinions expressed in a well-written expert review with lots of opinions scattering in various sources such as blogspaces and forums. We formally define this new integration problem and propose to use semi-supervised topic models to solve the problem in a principled way. Experiments on integrating opinions about two quite different topics (a product and a political figure) show that the proposed method is effective for both topics and can generate useful aligned integrated opinion summaries. The proposed method is quite general. It can be used to integrate a well written review with opinions in an arbitrary text collection about any topic to potentially support many interesting applications in multiple domains."
]
} |
1510.06503 | 2950690021 | In this paper, we aim to automatically render aging faces in a personalized way. Basically, a set of age-group specific dictionaries are learned, where the dictionary bases corresponding to the same index yet from different dictionaries form a particular aging process pattern cross different age groups, and a linear combination of these patterns expresses a particular personalized aging process. Moreover, two factors are taken into consideration in the dictionary learning process. First, beyond the aging dictionaries, each subject may have extra personalized facial characteristics, e.g. mole, which are invariant in the aging process. Second, it is challenging or even impossible to collect faces of all age groups for a particular subject, yet much easier and more practical to get face pairs from neighboring age groups. Thus a personality-aware coupled reconstruction loss is utilized to learn the dictionaries based on face pairs from neighboring age groups. Extensive experiments well demonstrate the advantages of our proposed solution over other state-of-the-arts in term of personalized aging progression, as well as the performance gain for cross-age face verification by synthesizing aging faces. | Age progression has been comprehensively reviewed in the literature @cite_30 @cite_3 @cite_28 . As one of the early studies, Burt @cite_21 focused on creating average faces for different ages and transferring the facial differences between the average faces into the input face. This method gave an insight into the age progression task. Thereafter, some prototyping-based aging methods @cite_2 @cite_1 were proposed with varying degrees of improvement. In particular, the paper @cite_1 leveraged the difference between the warped average faces (instead of the original average faces) based on the flow from the average face to the input face. 
A drawback of these methods is that the aging speed is synchronous for every subject and no personalized characteristics are preserved, so the aging results of many different faces look similar to one another. Although some researchers target individual-specific face aging @cite_35 @cite_6 @cite_14 , the lack of personality in aging faces remains a challenging problem. | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_14",
"@cite_28",
"@cite_21",
"@cite_1",
"@cite_3",
"@cite_6",
"@cite_2"
],
"mid": [
"2105026179",
"",
"1481010459",
"2143151905",
"2085481337",
"",
"",
"2140190219",
"2152679347"
],
"abstract": [
"Human age, as an important personal trait, can be directly inferred by distinct patterns emerging from the facial appearance. Derived from rapid advances in computer graphics and machine vision, computer-based age synthesis and estimation via faces have become particularly prevalent topics recently because of their explosively emerging real-world applications, such as forensic art, electronic customer relationship management, security control and surveillance monitoring, biometrics, entertainment, and cosmetology. Age synthesis is defined to rerender a face image aesthetically with natural aging and rejuvenating effects on the individual face. Age estimation is defined to label a face image automatically with the exact age (year) or the age group (year range) of the individual face. Because of their particularity and complexity, both problems are attractive yet challenging to computer-based application system designers. Large efforts from both academia and industry have been devoted in the last a few decades. In this paper, we survey the complete state-of-the-art techniques in the face image-based age synthesis and estimation topics. Existing models, popular algorithms, system performances, technical difficulties, popular face aging databases, evaluation protocols, and promising future directions are also provided with systematic discussions.",
"",
"",
"Facial aging, a new dimension that has recently been added to the problem of face recognition, poses interesting theoretical and practical challenges to the research community. The problem which originally generated interest in the psychophysics and human perception community has recently found enhanced interest in the computer vision community. How do humans perceive age? What constitutes an age-invariant signature that can be derived from faces? How compactly can the facial growth event be described? How does facial aging impact recognition performance? In this paper, we give a thorough analysis on the problem of facial aging and further provide a complete account of the many interesting studies that have been performed on this topic from different fields. We offer a comparative analysis of various approaches that have been proposed for problems such as age estimation, appearance prediction, face verification, etc. and offer insights into future research on this topic.",
"This study investigated visual cues to age by using facial composites which blend shape and colour information from multiple faces. Baseline measurements showed that perceived age of adult male faces is on average an accurate index of their chronological age over the age range 20-60 years. Composite images were made from multiple images of different faces by averaging face shape and then blending red, green and blue intensity (RGB colour) across comparable pixels. The perceived age of these composite or blended images depended on the age bracket of the component faces. Blended faces were, however, rated younger than their component faces, a trend that became more marked with increased component age. The techniques used provide an empirical definition of facial changes with age that are biologically consistent across a sample population. The perceived age of a blend of old faces was increased by exaggerating the RGB colour differences of each pixel relative to a blend of young faces. This effect on perceived age was not attributable to enhanced contrast or colour saturation. Age-related visual cues defined from the differences between blends of young and old faces were applied to individual faces. These transformations increased perceived age.",
"",
"",
"Aging has considerable visual effects on the human face and is difficult to simulate using a universally-applicable global model. In this paper, we focus on the hypothesis that the patterns of age progression (and regression) are related to the face concerned, as the latter implicitly captures the characteristics of gender, ethnic origin, and age group, as well as possibly the person-specific development patterns of the individual. We use a data-driven framework for automatic image-based facial transformation in conjunction with a database of facial images. We build a novel parameterized model for encoding age-transformation in addition with the traditional model for face description. We utilize evolutionary computing to learn the relationship between the two models. To support this work, we also developed a new image warping algorithm based on non-uniform radial basis functions (NURBFs). Evolutionary computing was also used to handle the large parameter space associated with NURBFs. In comparison with several different methods, it consistently provides the best results against the ground truth.",
"Transforming facial images along perceived dimensions (such as age, gender, race, or health) has application in areas as diverse as psychology, medicine, and forensics. We can use prototype images to define the salient features of a particular face classification (for example, European female adult or East-Asian male child). We then use the differences between two prototypes to define an axis of transformation, such as younger to older. By applying these changes to a given input face, we can change its apparent age, race, or gender. Psychological investigations reveal a limitation with existing methods that's particularly apparent when changing the age of faces. We relate the problem to the loss of facial textures (such as stubble and wrinkles) in the prototypes due to the blending process. We review the existing face prototyping and transformation methods and present a new, wavelet-based method for prototyping and transforming facial textures"
]
} |
1510.06503 | 2950690021 | In this paper, we aim to automatically render aging faces in a personalized way. Basically, a set of age-group specific dictionaries are learned, where the dictionary bases corresponding to the same index yet from different dictionaries form a particular aging process pattern cross different age groups, and a linear combination of these patterns expresses a particular personalized aging process. Moreover, two factors are taken into consideration in the dictionary learning process. First, beyond the aging dictionaries, each subject may have extra personalized facial characteristics, e.g. mole, which are invariant in the aging process. Second, it is challenging or even impossible to collect faces of all age groups for a particular subject, yet much easier and more practical to get face pairs from neighboring age groups. Thus a personality-aware coupled reconstruction loss is utilized to learn the dictionaries based on face pairs from neighboring age groups. Extensive experiments well demonstrate the advantages of our proposed solution over other state-of-the-arts in term of personalized aging progression, as well as the performance gain for cross-age face verification by synthesizing aging faces. | Modeling-based age progression, which considers shape and texture synthesis simultaneously, is another popular idea @cite_5 . Quite a number of modeling-based age progression methods have been proposed, including the active appearance model @cite_7 , the craniofacial growth model @cite_11 , the and-or graph model @cite_17 , statistical models @cite_18 , and implicit functions @cite_31 @cite_8 . Generally, to model large appearance changes over a long-term face aging sequence, modeling-based age progression requires sufficient training data. Suo @cite_23 attempted to learn long-term aging patterns from available short-term aging databases with a concatenational graph evolution aging model. | {
"cite_N": [
"@cite_18",
"@cite_11",
"@cite_7",
"@cite_8",
"@cite_23",
"@cite_5",
"@cite_31",
"@cite_17"
],
"mid": [
"1516424378",
"2069261721",
"2039140324",
"",
"2014102203",
"2138173495",
"2139741065",
"2123497994"
],
"abstract": [
"This thesis presents an approach to the modeling of facial aging using and extending the Morphable Model technique. For modeling the face variation across individuals, facial expressions, and physical attributes, we collected 3D face scans of 298 persons. The 3D face scans where acquired with a structured light 3D scanner, which we improved in collaboration with the manufacturer to achieve superior geometry and texture quality. Moreover, we developed an efficient way to measure fine skin structure and reflection properties with the scanner. The collected face scans have been used to build the Basel Face Model, a new publicly available Morphable Model. Using the 3D scans we learn the correlation between physical attributes such as weight, height, and especially age and faces. With the learned correlation, we present a novel way to simultaneously manipulate different attributes and demonstrate the capability to model changes caused by aging. Using the attributes of the face model in conjunction with a skull model developed in the same research group, we present a method to reconstruct faces from skull shapes which considers physical attributes, as the body weight, age etc. The most important aspect of facial aging that can not be simulated with the Morphable Model is the appearance of facial wrinkles. In this work we present a novel approach to synthesize age wrinkles based on statistics. Our wrinkle synthesis consists of two main parts: The learning of a generative model of wrinkle constellations, and the modeling of their visual appearance. For learning the constellations we use kernel density estimation of manually labeled wrinkles to estimate the wrinkle occurrence probability. To learn the visual appearance of wrinkles we use the fine scale skin structure captured with our improved scanning method. 
Our results show that the combination of the attribute fitting based aging and the wrinkle synthesis, facilitate a simulation of visually convincing progressive aging. The method is without restrictions applicable to any face that can be represented by the Morphable Model.",
"The phenomenon of \"aging\" in humans leads to significant variations in facial features. Various factors such as bone growth, ethnicity and dietary habits influence the facial aging pattern. This increases the difficulty in performing automated face recognition. In this paper, we propose an algorithm that improves the performance of face recognition by applying the bacteria foraging fusion algorithm. The proposed algorithm mitigates the effect of facial changes caused due to aging by combining the LBP features of global and local facial regions at match score level, by means of the bacteria foraging fusion algorithm. Experimental results are presented using the FG-Net and IIITDelhi face aging databases. The IIITDelhi database, which has been collected by the authors, consists of over 2600 age-separated labeled face images of 102 individuals. To account for real life and natural conditions, images include changes in the face due to illumination, pose, and presence of accessories such as eyeglasses. The results demonstrate that the proposed approach outperforms traditional fusion schemes, existing algorithms and a commercial system.",
"The process of aging causes significant alterations in the facial appearance of individuals. When compared with other sources of variation in face images, appearance variation due to aging displays some unique characteristics. Changes in facial appearance due to aging can even affect discriminatory facial features, resulting in deterioration of the ability of humans and machines to identify aged individuals. We describe how the effects of aging on facial appearance can be explained using learned age transformations and present experimental results to show that reasonably accurate estimates of age can be made for unseen images. We also show that we can improve our results by taking into account the fact that different individuals age in different ways and by considering the effect of lifestyle. Our proposed framework can be used for simulating aging effects on new face images in order to predict how an individual might look like in the future or how he she used to look in the past. The methodology presented has also been used for designing a face recognition system, robust to aging variation. In this context, the perceived age of the subjects in the training and test images is normalized before the training and classification procedure so that aging variation is eliminated. Experimental results demonstrate that, when age normalization is used, the performance of our face recognition system can be improved.",
"",
"Modeling the long-term face aging process is of great importance for face recognition and animation, but there is a lack of sufficient long-term face aging sequences for model learning. To address this problem, we propose a CONcatenational GRaph Evolution (CONGRE) aging model, which adopts decomposition strategy in both spatial and temporal aspects to learn long-term aging patterns from partially dense aging databases. In spatial aspect, we build a graphical face representation, in which a human face is decomposed into mutually interrelated subregions under anatomical guidance. In temporal aspect, the long-term evolution of the above graphical representation is then modeled by connecting sequential short-term patterns following the Markov property of aging process under smoothness constraints between neighboring short-term patterns and consistency constraints among subregions. The proposed model also considers the diversity of face aging by proposing probabilistic concatenation strategy between short-term patterns and applying scholastic sampling in aging prediction. In experiments, the aging prediction results generated by the learned aging models are evaluated both subjectively and objectively to validate the proposed model.",
"This paper is a summary of findings of adult age-related craniofacial morphological changes. Our aims are two-fold: (1) through a review of the literature we address the factors influencing craniofacial aging, and (2) the general ways in which a head and face age in adulthood. We present findings on environmental and innate influences on face aging, facial soft tissue age changes, and bony changes in the craniofacial and dentoalveolar skeleton. We then briefly address the relevance of this information to forensic science research and applications, such as the development of computer facial age-progression and face recognition technologies, and contributions to forensic sketch artistry.",
"Due to the fact that the aging human face encompasses skull bones, facial muscles, and tissues, we render it using the effects of flaccidity through the observation of family groups categorized by sex, race and age. Considering that patterns of aging are consistent, facial ptosis becomes manifest toward the end of the fourth decade. In order to simulate facial aging according to these patterns, we used surfaces with control points so that it was possible to represent the effect of aging through flaccidity. The main use of these surfaces is to simulate flaccidity and aging consequently.",
"In this paper, we present a compositional and dynamic model for face aging. The compositional model represents faces in each age group by a hierarchical And-or graph, in which And nodes decompose a face into parts to describe details (e.g., hair, wrinkles, etc.) crucial for age perception and Or nodes represent large diversity of faces by alternative selections. Then a face instance is a transverse of the And-or graph-parse graph. Face aging is modeled as a Markov process on the parse graph representation. We learn the parameters of the dynamic model from a large annotated face data set and the stochasticity of face aging is modeled in the dynamics explicitly. Based on this model, we propose a face aging simulation and prediction algorithm. Inversely, an automatic age estimation algorithm is also developed under this representation. We study two criteria to evaluate the aging results using human perception experiments: (1) the accuracy of simulation: whether the aged faces are perceived of the intended age group, and (2) preservation of identity: whether the aged faces are perceived as the same person. Quantitative statistical analysis validates the performance of our aging model and age estimation algorithm."
]
} |
1510.06879 | 1837104076 | Subtyping is a crucial ingredient of session type theory and its applications, notably to programming language implementations. In this paper, we study effective ways to check whether a session type is a subtype of another by applying a characteristic formulae approach to the problem. Our core contribution is an algorithm to generate a modal mu-calculus formula that characterises all the supertypes (or subtypes) of a given type. Subtyping checks can then be off-loaded to model checkers, thus incidentally yielding an efficient algorithm to check safety of session types, soundly and completely. We have implemented our theory and compared its cost with other classical subtyping algorithms. | Characteristic formulae for finite processes were first studied in @cite_23 , then in @cite_12 for finite-state processes. Since then the theory has been studied extensively @cite_19 @cite_36 @cite_37 @cite_11 @cite_27 @cite_10 @cite_9 for most of van Glabbeek's spectrum @cite_15 and in different settings (e.g., time @cite_31 and probabilistic @cite_13 ). See @cite_9 @cite_37 for a detailed historical account of the field. This is the first time characteristic formulae are applied to the field of session types. A recent work @cite_9 proposes a general framework to obtain characteristic formula constructions for simulation-like relations "for free". We chose to follow @cite_12 as it was a better fit for session types, since they allow for a straightforward inductive construction of a characteristic formula. Moreover, @cite_12 uses the standard @math -calculus which allowed us to integrate our theory with an existing model checker. | {
"cite_N": [
"@cite_13",
"@cite_37",
"@cite_31",
"@cite_36",
"@cite_9",
"@cite_19",
"@cite_27",
"@cite_23",
"@cite_15",
"@cite_10",
"@cite_12",
"@cite_11"
],
"mid": [
"180602404",
"",
"2111205369",
"",
"",
"2025280888",
"",
"1968829301",
"",
"2133147428",
"",
"1550474961"
],
"abstract": [
"Recently, a general framework on characteristic formulae was proposed by It offers a simple theory that allows one to easily obtain characteristic formulae of many non-probabilistic behavioral relations. Our paper studies their techniques in a probabilistic setting. We provide a general method for determining characteristic formulae of behavioral relations for probabilistic automata using fixed-point probability logics. We consider such behavioral relations as simulations and bisimulations, probabilistic bisimulations, probabilistic weak simulations, and probabilistic forward simulations. This paper shows how their constructions and proofs can follow from a single common technique.",
"",
"This paper offers characteristic formula constructions in the real-time logic Lν for several behavioural relations between (states of) timed automata. The behavioural relations studied in this work are timed (bi)similarity, timed ready simulation, faster-than bisimilarity and timed trace inclusion. The characteristic formulae delivered by our constructions have size which is linear in that of the timed automaton they logically describe. This also applies to the characteristic formula for timed bisimulation equivalence, for which an exponential space construction was previously offered by Laroussinie, Larsen and Weise.",
"",
"",
"Abstract Characteristic formulae have been introduced by Graf and Sifakis to relate equational reasoning about processes to reasoning in a modal logic, and therefore to allow proofs about processes to be carried out in a logical framework. This work, which concerned finite processes and bisimulation-like equivalences, has later on been extended to finite state processes and further equivalences. Based upon an intuitionistic understanding of Hennessy-Milner Logic (HML) with mutual recursion, we extend these results to cover bisimulation-like preorders, which are sensitive to liveness properties. This demonstrates the expressive power of intuitionistically interpreted HML with mutual recursion, and it builds the theoretical basis for a uniform and efficient method to automatically verify bisimulation-like relations between processes by means of model checking.",
"",
"We propose a translation method of finite terms of CCS into formulas of a modal language representing their class of observational congruence. For this purpose, we define a modal language and a function associating with any finite term of CCS a formula of the language, satisfied by the term. Furthermore, this function is such that two terms are congruent if and only if the corresponding formulas are equivalent. The translation method consists in associating with operations on terms (action, +) operations on the corresponding formulas. This work is a first step towards the definition of a modal language with modalities expressing both possibility and inevitability and which is compatible with observational congruence.",
"",
"Abstract This paper shows how modal mu-calculus formulae characterizing finite-state processes up to strong or weak bisimulation can be derived directly from the well-known greatest fixpoint characterizations of the bisimulation relations. Our derivation simplifies earlier proofs for the strong bisimulation case and, by virtue of derivation, immediately generalizes to various other bisimulation-like relations, in particular weak bisimulation.",
"",
"This paper develops a model-checking algorithm for a fragment of the modal mu-calculus and shows how it may be applied to the efficient computation of behavioral relations between processes. The algorithm's complexity is proportional to the product of the size of the process and the size of the formula, and thus improves on the best existing algorithm for such a fixed point logic. The method for computing preorders that the model checker induces is also more efficient than known algorithms."
]
} |
1510.06879 | 1837104076 | Subtyping is a crucial ingredient of session type theory and its applications, notably to programming language implementations. In this paper, we study effective ways to check whether a session type is a subtype of another by applying a characteristic formulae approach to the problem. Our core contribution is an algorithm to generate a modal mu-calculus formula that characterises all the supertypes (or subtypes) of a given type. Subtyping checks can then be off-loaded to model checkers, thus incidentally yielding an efficient algorithm to check safety of session types, soundly and completely. We have implemented our theory and compared its cost with other classical subtyping algorithms. | Our approach can be easily: ( @math ) adapted to types for the @math -calculus (see Appendix ) and ( @math ) extended to session types that carry other () session types, e.g., see @cite_30 @cite_16 , by simply applying the algorithm recursively on the carried types. For instance, to check @math one can check the subtyping for the outer-most types, while building constraints, i.e., @math , to be checked later on, by re-applying the algorithm. | {
"cite_N": [
"@cite_30",
"@cite_16"
],
"mid": [
"2088962847",
"2023127104"
],
"abstract": [
"Extending the pi calculus with the session types proposed by allows high-level specifications of structured patterns of communication, such as client-server protocols, to be expressed as types and verified by static typechecking. We define a notion of subtyping for session types, which allows protocol specifications to be extended in order to describe richer behaviour; for example, an implemented server can be refined without invalidating type-correctness of an overall system. We formalize the syntax, operational semantics and typing rules of an extended pi calculus, prove that typability guarantees absence of run-time communication errors, and show that the typing rules can be transformed into a practical typechecking algorithm.",
"Subtyping in concurrency has been extensively studied since early 1990s as one of the most interesting issues in type theory. The correctness of subtyping relations has been usually provided as the soundness for type safety. The converse direction, the completeness, has been largely ignored in spite of its usefulness to define the greatest subtyping relation ensuring type safety. This paper formalises preciseness (i.e. both soundness and completeness) of subtyping for mobile processes and studies it for the synchronous and the asynchronous session calculi. We first prove that the well-known session subtyping, the branching-selection subtyping, is sound and complete for the synchronous calculus. Next we show that in the asynchronous calculus, this subtyping is incomplete for type-safety: that is, there exist session types T and S such that T can safely be considered as a subtype of S, but T ≤ S is not derivable by the subtyping. We then propose an asynchronous subtyping system which is sound and complete for the asynchronous calculus. The method gives a general guidance to design rigorous channel-based subtypings respecting desired safety properties."
]
} |
1510.06879 | 1837104076 | Subtyping is a crucial ingredient of session type theory and its applications, notably to programming language implementations. In this paper, we study effective ways to check whether a session type is a subtype of another by applying a characteristic formulae approach to the problem. Our core contribution is an algorithm to generate a modal mu-calculus formula that characterises all the supertypes (or subtypes) of a given type. Subtyping checks can then be off-loaded to model checkers, thus incidentally yielding an efficient algorithm to check safety of session types, soundly and completely. We have implemented our theory and compared its cost with other classical subtyping algorithms. | The present work paves the way for new connections between session types and modal fixpoint logic or model checking theories. It is a basis for upcoming connections between model checking and classical problems of session types, such as the asynchronous subtyping of @cite_16 and multiparty compatibility checking @cite_2 @cite_32 . We are also considering applying model checking approaches to session types with probabilistic, logical @cite_7 , or time @cite_20 @cite_14 annotations. Finally, we remark that @cite_16 also establishes that subtyping (cf. Definition ) is sound (but not complete) wrt. the semantics of session types, which models programs that communicate through FIFO buffers. Thus, our new conditions (items @math )- @math of Theorem ) also imply safety @math in the asynchronous setting. | {
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_32",
"@cite_2",
"@cite_16",
"@cite_20"
],
"mid": [
"169399732",
"1611165480",
"",
"2076742248",
"2023127104",
"2281995753"
],
"abstract": [
"We propose a typing theory, based on multiparty session types, for modular verification of real-time choreographic interactions. To model real-time implementations, we introduce a simple calculus with delays and a decidable static proof system. The proof system ensures type safety and time-error freedom, namely processes respect the prescribed timing and causalities between interactions. A decidable condition on timed global types guarantees time-progress for validated processes with delays, and gives a sound and complete characterisation of a new class of CTAs with general topologies that enjoys progress and liveness.",
"Design by Contract (DbC) promotes reliable software development through elaboration of type signatures for sequential programs with logical predicates. This paper presents an assertion method, based on the π-calculus with full recursion, which generalises the notion of DbC to multiparty distributed interactions to enable effective specification and verification of distributed multiparty protocols. Centring on global assertions and their projections onto endpoint assertions, our method allows clear specifications for typed sessions, constraining the content of the exchanged messages, the choice of sub-conversations to follow, and invariants on recursions. The paper presents key theoretical foundations of this framework, including a sound and relatively complete compositional proof system for verifying processes against assertions.",
"",
"Graphical choreographies, or global graphs, are general multiparty session specifications featuring expressive constructs such as forking, merging, and joining for representing application-level protocols. Global graphs can be directly translated into modelling notations such as BPMN and UML. This paper presents an algorithm whereby a global graph can be constructed from asynchronous interactions represented by communicating finite-state machines (CFSMs). Our results include: a sound and complete characterisation of a subset of safe CFSMs from which global graphs can be constructed; an algorithm to translate CFSMs to global graphs; a time complexity analysis; and an implementation of our theory, as well as an experimental evaluation.",
"Subtyping in concurrency has been extensively studied since early 1990s as one of the most interesting issues in type theory. The correctness of subtyping relations has been usually provided as the soundness for type safety. The converse direction, the completeness, has been largely ignored in spite of its usefulness to define the greatest subtyping relation ensuring type safety. This paper formalises preciseness (i.e. both soundness and completeness) of subtyping for mobile processes and studies it for the synchronous and the asynchronous session calculi. We first prove that the well-known session subtyping, the branching-selection subtyping, is sound and complete for the synchronous calculus. Next we show that in the asynchronous calculus, this subtyping is incomplete for type-safety: that is, there exist session types T and S such that T can safely be considered as a subtype of S, but T ≤ S is not derivable by the subtyping. We then propose an asynchronous subtyping system which is sound and complete for the asynchronous calculus. The method gives a general guidance to design rigorous channel-based subtypings respecting desired safety properties.",
"This paper studies safety, progress, and non-zeno properties of Communicating Timed Automata (CTAs), which are timed automata (TA) extended with unbounded communication channels, and presents a procedure to build timed global specifications from systems of CTAs. We define safety and progress properties for CTAs by extending properties studied in communicating finite-state machines to the timed setting. We then study non-zenoness for CTAs; our aim is to prevent scenarios in which the participants have to execute an infinite number of actions in a finite amount of time. We propose sound and decidable conditions for these properties, and demonstrate the practicality of our approach with an implementation and experimental evaluations of our theory."
]
} |
1510.06714 | 2233560704 | Many user studies of home automation, as the most familiar representative of the Internet of Things, have shown the difficulty of developing technology that users understand and like. It helps to state requirements as largely-independent features, but features are not truly independent, so this incurs the cost of managing and explaining feature interactions. We propose to compose features at runtime, resolving their interactions by means of priority. Although the basic idea is simple, its details must be designed to make users comfortable by balancing manual and automatic control. On the technical side, its details must be designed to allow meaningful separation of features and maximum generality. As evidence that our composition mechanism achieves its goals, we present three substantive examples of home automation, and the results of a user study to investigate comprehension of feature interactions. A survey of related work shows that this proposal occupies a sensible place in a design space whose dimensions include actuator type, detection versus resolution strategies, and modularity. | It is also possible to view home automation in terms of high-level actions that must be completed over time @cite_5 . In the remainder of this section, however, we turn to the more typical approach to the Internet of Things, in which actuators can change settings at any time (as described in ). | {
"cite_N": [
"@cite_5"
],
"mid": [
"2418446820"
],
"abstract": [
"Technology to support care at home is a promising alternative to traditional approaches. However, home care systems present significant technical challenges. For example, it is difficult to make such systems flexible, adaptable, and controllable by users. The authors have created a prototype system that uses policy-based management of home care services. Conflict detection and resolution for home care policies have been investigated. We identify three types of conflicts in policy-based home care systems: conflicts that result from apparently separate triggers, conflicts among policies of multiple stakeholders, and conflicts resulting from apparently unrelated actions. We systematically analyse the types of policy conflicts, and propose solutions to enhance our existing policy language and policy system to tackle these conflicts. The enhanced solutions are illustrated through examples."
]
} |
1510.06714 | 2233560704 | Many user studies of home automation, as the most familiar representative of the Internet of Things, have shown the difficulty of developing technology that users understand and like. It helps to state requirements as largely-independent features, but features are not truly independent, so this incurs the cost of managing and explaining feature interactions. We propose to compose features at runtime, resolving their interactions by means of priority. Although the basic idea is simple, its details must be designed to make users comfortable by balancing manual and automatic control. On the technical side, its details must be designed to allow meaningful separation of features and maximum generality. As evidence that our composition mechanism achieves its goals, we present three substantive examples of home automation, and the results of a user study to investigate comprehension of feature interactions. A survey of related work shows that this proposal occupies a sensible place in a design space whose dimensions include actuator type, detection versus resolution strategies, and modularity. | Currently many new networked devices ("things" in the Internet of Things) are being offered to consumers by enterprises large and small. As reported in @cite_21 , these offers almost always promise that users can program control of these devices with "if-then" or "trigger-action" rules. This assumption is also adopted in some research projects @cite_2 @cite_9 @cite_21 . | {
"cite_N": [
"@cite_9",
"@cite_21",
"@cite_2"
],
"mid": [
"2143140262",
"2086239403",
"2463943663"
],
"abstract": [
"This paper investigates the feature interaction problem, known from traditional telephony environments, in the context of Internet personal appliances (IPA). IPAs are dedicated consumer devices which contain at least one network processor. They include the Internet alarm clock, which takes into account road conditions or expected arrival times of air planes when setting the alarm time, or the Internet enabled fridge, which keeps an inventory of groceries and issues orders to suppliers. The results of our investigation are threefold. The first part of this paper introduces a service taxonomy supported by a list of example services. The second part discusses feature interactions between services for appliances. A classification for such interactions and example interactions and possible conflicts are presented. The final part contains an outline for an approach to handle such interactions.",
"We investigate the practicality of letting average users customize smart-home devices using trigger-action (\"if, then\") programming. We find trigger-action programming can express most desired behaviors submitted by participants in an online study. We identify a class of triggers requiring machine learning that has received little attention. We evaluate the uniqueness of the 67,169 trigger-action programs shared on IFTTT.com, finding that real users have written a large number of unique trigger-action interactions. Finally, we conduct a 226-participant usability test of trigger-action programming, finding that inexperienced users can quickly learn to create programs containing multiple triggers or actions.",
"Networked devices such as locks and thermostats are now cheaply available, which is accelerating the adoption of home automation. But unintended behaviors of programs that control automated homes can pose severe problems, including security risks (e.g., accidental disarming of alarms). We facilitate predictable control of automated homes by letting users “fast forward” their program and observe its possible future behaviors under different sequences of inputs and environmental conditions. A key challenge that we face, which is not addressed by existing program exploration techniques, is to systematically handle time because home automation programs depend intimately on absolute and relative timing of inputs. We develop an approach that models programs as timed automata and incorporates novel mechanisms to enable scalable and comprehensive exploration of their behavior. We implement our approach in a tool called DeLorean and apply it to 10 real home automation programs. We find that it can fast forward these programs 3.6 times to 36K times faster than real time and uncover unintended behaviors in them."
]
} |
1510.06714 | 2233560704 | Many user studies of home automation, as the most familiar representative of the Internet of Things, have shown the difficulty of developing technology that users understand and like. It helps to state requirements as largely-independent features, but features are not truly independent, so this incurs the cost of managing and explaining feature interactions. We propose to compose features at runtime, resolving their interactions by means of priority. Although the basic idea is simple, its details must be designed to make users comfortable by balancing manual and automatic control. On the technical side, its details must be designed to allow meaningful separation of features and maximum generality. As evidence that our composition mechanism achieves its goals, we present three substantive examples of home automation, and the results of a user study to investigate comprehension of feature interactions. A survey of related work shows that this proposal occupies a sensible place in a design space whose dimensions include actuator type, detection versus resolution strategies, and modularity. | Trigger-action rules are independent and inherently unstructured, so it is no surprise that even experienced programmers find sets of trigger-action rules difficult to write and reason about (a hobbyist says, "... it has taken me literally YEARS of these types of mistakes to iron out all the kinks" @cite_2 ). | {
"cite_N": [
"@cite_2"
],
"mid": [
"2463943663"
],
"abstract": [
"Networked devices such as locks and thermostats are now cheaply available, which is accelerating the adoption of home automation. But unintended behaviors of programs that control automated homes can pose severe problems, including security risks (e.g., accidental disarming of alarms). We facilitate predictable control of automated homes by letting users “fast forward” their program and observe its possible future behaviors under different sequences of inputs and environmental conditions. A key challenge that we face, which is not addressed by existing program exploration techniques, is to systematically handle time because home automation programs depend intimately on absolute and relative timing of inputs. We develop an approach that models programs as timed automata and incorporates novel mechanisms to enable scalable and comprehensive exploration of their behavior. We implement our approach in a tool called DeLorean and apply it to 10 real home automation programs. We find that it can fast forward these programs 3.6 times to 36K times faster than real time and uncover unintended behaviors in them."
]
} |
1510.06714 | 2233560704 | Many user studies of home automation, as the most familiar representative of the Internet of Things, have shown the difficulty of developing technology that users understand and like. It helps to state requirements as largely-independent features, but features are not truly independent, so this incurs the cost of managing and explaining feature interactions. We propose to compose features at runtime, resolving their interactions by means of priority. Although the basic idea is simple, its details must be designed to make users comfortable by balancing manual and automatic control. On the technical side, its details must be designed to allow meaningful separation of features and maximum generality. As evidence that our composition mechanism achieves its goals, we present three substantive examples of home automation, and the results of a user study to investigate comprehension of feature interactions. A survey of related work shows that this proposal occupies a sensible place in a design space whose dimensions include actuator type, detection versus resolution strategies, and modularity. | With programs in the form of independent trigger-action rules, there is little choice but to build tools to explore the behavior of these programs, and hope that users will be able to provide invariants to be checked, or detect bugs by scanning the output for anomalies @cite_2 . | {
"cite_N": [
"@cite_2"
],
"mid": [
"2463943663"
],
"abstract": [
"Networked devices such as locks and thermostats are now cheaply available, which is accelerating the adoption of home automation. But unintended behaviors of programs that control automated homes can pose severe problems, including security risks (e.g., accidental disarming of alarms). We facilitate predictable control of automated homes by letting users “fast forward” their program and observe its possible future behaviors under different sequences of inputs and environmental conditions. A key challenge that we face, which is not addressed by existing program exploration techniques, is to systematically handle time because home automation programs depend intimately on absolute and relative timing of inputs. We develop an approach that models programs as timed automata and incorporates novel mechanisms to enable scalable and comprehensive exploration of their behavior. We implement our approach in a tool called DeLorean and apply it to 10 real home automation programs. We find that it can fast forward these programs 3.6 times to 36K times faster than real time and uncover unintended behaviors in them."
]
} |
1510.06714 | 2233560704 | Many user studies of home automation, as the most familiar representative of the Internet of Things, have shown the difficulty of developing technology that users understand and like. It helps to state requirements as largely-independent features, but features are not truly independent, so this incurs the cost of managing and explaining feature interactions. We propose to compose features at runtime, resolving their interactions by means of priority. Although the basic idea is simple, its details must be designed to make users comfortable by balancing manual and automatic control. On the technical side, its details must be designed to allow meaningful separation of features and maximum generality. As evidence that our composition mechanism achieves its goals, we present three substantive examples of home automation, and the results of a user study to investigate comprehension of feature interactions. A survey of related work shows that this proposal occupies a sensible place in a design space whose dimensions include actuator type, detection versus resolution strategies, and modularity. | The alternative is to---in effect---group related rules into more cohesive modules such as finite-state machines. For example, the program in Figure can be viewed as a group of related trigger-action rules, one per state transition; the states of the machine are values of an internal variable relating the effects of the individual rules. The enhanced structure provides guidelines about robust programming. For example, a well-structured feature that turns a switch on for some reason also includes the logic to turn it off when the reason has passed. It is interesting to note that the self-transitions on the locked and unlocked states in Figure prevent the bug detected in @cite_2 , which is to forget to refresh a timer when two events causing the same setting occur close together. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2463943663"
],
"abstract": [
"Networked devices such as locks and thermostats are now cheaply available, which is accelerating the adoption of home automation. But unintended behaviors of programs that control automated homes can pose severe problems, including security risks (e.g., accidental disarming of alarms). We facilitate predictable control of automated homes by letting users “fast forward” their program and observe its possible future behaviors under different sequences of inputs and environmental conditions. A key challenge that we face, which is not addressed by existing program exploration techniques, is to systematically handle time because home automation programs depend intimately on absolute and relative timing of inputs. We develop an approach that models programs as timed automata and incorporates novel mechanisms to enable scalable and comprehensive exploration of their behavior. We implement our approach in a tool called DeLorean and apply it to 10 real home automation programs. We find that it can fast forward these programs 3.6 times to 36K times faster than real time and uncover unintended behaviors in them."
]
} |
1510.06714 | 2233560704 | Many user studies of home automation, as the most familiar representative of the Internet of Things, have shown the difficulty of developing technology that users understand and like. It helps to state requirements as largely-independent features, but features are not truly independent, so this incurs the cost of managing and explaining feature interactions. We propose to compose features at runtime, resolving their interactions by means of priority. Although the basic idea is simple, its details must be designed to make users comfortable by balancing manual and automatic control. On the technical side, its details must be designed to allow meaningful separation of features and maximum generality. As evidence that our composition mechanism achieves its goals, we present three substantive examples of home automation, and the results of a user study to investigate comprehension of feature interactions. A survey of related work shows that this proposal occupies a sensible place in a design space whose dimensions include actuator type, detection versus resolution strategies, and modularity. | Some work with features or at least cohesive functional modules also focuses on detecting feature interactions by finding logical conflict over shared variables, by model checking or other means @cite_11 @cite_18 . This approach deprives users and programmers of the feature simplification that comes from using automatic resolution of feature interactions as a domain-specific exception mechanism. | {
"cite_N": [
"@cite_18",
"@cite_11"
],
"mid": [
"2029250021",
"2106922353"
],
"abstract": [
"Smart-space modeling is a fundamental requirement towards the achievement of safe integration of pervasive systems and devices. Traditional behavior models lack explicit representation of rich semantic for ubicomp systems. We propose a graph-based model supporting ubicomp's computational, physical, user and temporal contextual information. The model is part of a solution to tackle the interference problem, which results from unexpected interactions involving users and entertainment, communication, and health-related devices. This is a major concern when, for instance, daily routines are affected by different types of applications. The proposed graph-based interference detection is integrated in the Safe Home Care reflective platform, and allows reifying system's state and simulating home care scenarios. A set of home care scenarios are used to assess the applicability and correctness of our approach.",
"Home appliance networks now have the capability of integrating different features of independent appliances to provide value-added services. Concurrent execution of these services, however, can cause unexpected problems, even when each service is independently correct. This paper addresses the issue of detecting such interactions between services. We propose an approach that consists of two steps. In the first step, a model is developed to capture the behavior of the services and the interactions between them and users. In the second step, the model is automatically analyzed to see if possible interactions exist. This automatic analysis can be effectively performed with model checking techniques. The usefulness of the proposed approach is demonstrated through a case study, where several interactions were successfully detected."
]
} |
1510.06714 | 2233560704 | Many user studies of home automation, as the most familiar representative of the Internet of Things, have shown the difficulty of developing technology that users understand and like. It helps to state requirements as largely-independent features, but features are not truly independent, so this incurs the cost of managing and explaining feature interactions. We propose to compose features at runtime, resolving their interactions by means of priority. Although the basic idea is simple, its details must be designed to make users comfortable by balancing manual and automatic control. On the technical side, its details must be designed to allow meaningful separation of features and maximum generality. As evidence that our composition mechanism achieves its goals, we present three substantive examples of home automation, and the results of a user study to investigate comprehension of feature interactions. A survey of related work shows that this proposal occupies a sensible place in a design space whose dimensions include actuator type, detection versus resolution strategies, and modularity. | The work closest to ours is @cite_16 , in which features interact by controlling the same object, resource, or environment variable such as air temperature. A feature can claim exclusive access to a resource for some period of time, some claims are not conflicting, and priority is used to resolve conflicting claims. It is difficult to make a closer comparison because their composition is not completely defined, and there is no formal semantics. Certainly the only kind of control considered in @cite_16 is automatic control as defined in , so there is no consideration of manual control or user-centric concepts. | {
"cite_N": [
"@cite_16"
],
"mid": [
"1603485079"
],
"abstract": [
"The feature or service interaction problem within home networks is an established topic for the FI community. Interactions between home appliances, services and their environment have been reported previously. Indeed earlier work by the authors introduced a device centric approach which detects undesirable behaviour between appliances and their effects on the environment. However this previous work did not address side-effects between components of the modelled environment. That is some appliances do not only affect the environment through their main function, but may do so also in other ways. For instance, an air conditioner cools the air as its main function, but also decreases the humidity as a side-effect. Here we extend our earlier approach to handle such side effects effectively and discuss previously unreported results."
]
} |
1510.06714 | 2233560704 | Many user studies of home automation, as the most familiar representative of the Internet of Things, have shown the difficulty of developing technology that users understand and like. It helps to state requirements as largely-independent features, but features are not truly independent, so this incurs the cost of managing and explaining feature interactions. We propose to compose features at runtime, resolving their interactions by means of priority. Although the basic idea is simple, its details must be designed to make users comfortable by balancing manual and automatic control. On the technical side, its details must be designed to allow meaningful separation of features and maximum generality. As evidence that our composition mechanism achieves its goals, we present three substantive examples of home automation, and the results of a user study to investigate comprehension of feature interactions. A survey of related work shows that this proposal occupies a sensible place in a design space whose dimensions include actuator type, detection versus resolution strategies, and modularity. | Some currently available platforms for the Internet of Things include Spitfire @cite_30 , Perla @cite_22 , and HomeOS @cite_29 @cite_13 . Our feature composition is compatible with the goals and capabilities of all of these, and could be implemented straightforwardly in any of them. | {
"cite_N": [
"@cite_30",
"@cite_29",
"@cite_13",
"@cite_22"
],
"mid": [
"",
"2001008003",
"1663109347",
"2080219315"
],
"abstract": [
"",
"Researchers who develop new home technologies using connected devices (e.g. sensors) often want to conduct large-scale field studies in homes to evaluate their technology, but conducting such studies today is quite challenging, if not impossible. Considerable custom engineering is required to ensure hardware and software prototypes work robustly, and recruiting and managing more than a handful of households can be difficult and cost-prohibitive. To lower the barrier to developing and evaluating new technologies for the home environment, we call for the development of a shared infrastructure, called HomeLab. HomeLab consists of a large number of geographically distributed households, each running a common, flexible framework (e.g., HomeOS [4]) in which experiments are implemented. The use of a common framework enables engineering effort, along with experience and expertise, to be shared among many research groups. Recruitment of households to HomeLab can be organic: as a research group recruits (a few) households to participate in its field study, these households can be invited to join HomeLab and participate in future studies conducted by other groups. As the pool of households participating in HomeLab grows, we hope that researchers will find it easier to recruit a large number of households to participate in field studies.",
"Network devices for the home such as remotely controllable locks, lights, thermostats, cameras, and motion sensors are now readily available and inexpensive. In theory, this enables scenarios like remotely monitoring cameras from a smartphone or customizing climate control based on occupancy patterns. However, in practice today, such smarthome scenarios are limited to expert hobbyists and the rich because of the high overhead of managing and extending current technology. We present HomeOS, a platform that bridges this gap by presenting users and developers with a PC-like abstraction for technology in the home. It presents network devices as peripherals with abstract interfaces, enables cross-device tasks via applications written against these interfaces, and gives users a management interface designed for the home environment. HomeOS already has tens of applications and supports a wide range of devices. It has been running in 12 real homes for 4-8 months, and 42 students have built new applications and added support for additional devices independent of our efforts.",
"A declarative SQL-like language and a middleware infrastructure are presented for collecting data from different nodes of a pervasive system. Data management is performed by hiding the complexity due to the large underlying heterogeneity of devices, which can span from passive RFID(s) to ad hoc sensor boards to portable computers. An important feature of the presented middleware is to make the integration of new device types in the system easy through the use of device self-description. Two case studies are described for PerLa usage, and a survey is made for comparing our approach with other projects in the area."
]
} |
1510.06121 | 1777099960 | Over the past decade, the bulk of wireless traffic has shifted from speech to content. This shift creates the opportunity to cache part of the content in memories closer to the end users, for example in base stations. Most of the prior literature focuses on the reduction of load in the backhaul and core networks due to caching, i.e., on the benefits caching offers for the wireline communication link between the origin server and the caches. In this paper, we are instead interested in the benefits caching can offer for the wireless communication link between the caches and the end users. To quantify the gains of caching for this wireless link, we consider an interference channel in which each transmitter is equipped with an isolated cache memory. Communication takes place in two phases, a content placement phase followed by a content delivery phase. The objective is to design both the placement and the delivery phases to maximize the rate in the delivery phase in response to any possible user demands. Focusing on the three-user case, we show that through careful joint design of these phases, we can reap three distinct benefits from caching: a load balancing gain, an interference cancellation gain, and an interference alignment gain. In our proposed scheme, load balancing is achieved through a specific file splitting and placement, producing a particular pattern of content overlap at the caches. This overlap allows to implement interference cancellation. Further, it allows us to create several virtual transmitters, each transmitting a part of the requested content, which increases interference-alignment possibilities. | Several works have investigated wireless networks with caches. Some papers focus on optimizing the content placement to increase local content delivery. @cite_6 , a cache-aided small-cell system is considered, and the cache placement is formulated as maximizing the weighted-sum of the probability of local delivery. 
The wireless channel is modeled by a connectivity graph. @cite_14 considers a related model, but the content placement is optimized without a priori knowledge of file popularities, which are learned from the demand history. | {
"cite_N": [
"@cite_14",
"@cite_6"
],
"mid": [
"2037169571",
"2949071729"
],
"abstract": [
"Optimal cache content placement in a wireless small cell base station (sBS) with limited backhaul capacity is studied. The sBS has a large cache memory and provides content-level selective offloading by delivering high data rate contents to users in its coverage area. The goal of the sBS content controller (CC) is to store the most popular contents in the sBS cache memory such that the maximum amount of data can be fetched directly form the sBS, not relying on the limited backhaul resources during peak traffic periods. If the popularity profile is known in advance, the problem reduces to a knapsack problem. However, it is assumed in this work that, the popularity profile of the files is not known by the CC, and it can only observe the instantaneous demand for the cached content. Hence, the cache content placement is optimised based on the demand history. By refreshing the cache content at regular time intervals, the CC tries to learn the popularity profile, while exploiting the limited cache capacity in the best way possible. Three algorithms are studied for this cache content placement problem, leading to different exploitation-exploration trade-offs. We provide extensive numerical simulations in order to study the time-evolution of these algorithms, and the impact of the system parameters, such as the number of files, the number of users, the cache size, and the skewness of the popularity profile, on the performance. It is shown that the proposed algorithms quickly learn the popularity profile for a wide range of system parameters.",
"Video on-demand streaming from Internet-based servers is becoming one of the most important services offered by wireless networks today. In order to improve the area spectral efficiency of video transmission in cellular systems, small cells heterogeneous architectures (e.g., femtocells, WiFi off-loading) are being proposed, such that video traffic to nomadic users can be handled by short-range links to the nearest small cell access points (referred to as \"helpers\"). As the helper deployment density increases, the backhaul capacity becomes the system bottleneck. In order to alleviate such bottleneck we propose a system where helpers with low-rate backhaul but high storage capacity cache popular video files. Files not available from helpers are transmitted by the cellular base station. We analyze the optimum way of assigning files to the helpers, in order to minimize the expected downloading time for files. We distinguish between the uncoded case (where only complete files are stored) and the coded case, where segments of Fountain-encoded versions of the video files are stored at helpers. We show that the uncoded optimum file assignment is NP-hard, and develop a greedy strategy that is provably within a factor 2 of the optimum. Further, for a special case we provide an efficient algorithm achieving a provably better approximation ratio of @math , where @math is the maximum number of helpers a user can be connected to. We also show that the coded optimum cache assignment problem is convex that can be further reduced to a linear program. We present numerical results comparing the proposed schemes."
]
} |
1510.06121 | 1777099960 | Over the past decade, the bulk of wireless traffic has shifted from speech to content. This shift creates the opportunity to cache part of the content in memories closer to the end users, for example in base stations. Most of the prior literature focuses on the reduction of load in the backhaul and core networks due to caching, i.e., on the benefits caching offers for the wireline communication link between the origin server and the caches. In this paper, we are instead interested in the benefits caching can offer for the wireless communication link between the caches and the end users. To quantify the gains of caching for this wireless link, we consider an interference channel in which each transmitter is equipped with an isolated cache memory. Communication takes place in two phases, a content placement phase followed by a content delivery phase. The objective is to design both the placement and the delivery phases to maximize the rate in the delivery phase in response to any possible user demands. Focusing on the three-user case, we show that through careful joint design of these phases, we can reap three distinct benefits from caching: a load balancing gain, an interference cancellation gain, and an interference alignment gain. In our proposed scheme, load balancing is achieved through a specific file splitting and placement, producing a particular pattern of content overlap at the caches. This overlap allows to implement interference cancellation. Further, it allows us to create several virtual transmitters, each transmitting a part of the requested content, which increases interference-alignment possibilities. | Other papers consider the wireless delivery as well. The load-balancing benefits of caching in ad-hoc wireless networks are investigated in @cite_5 . In @cite_12 , content placement is optimized assuming a fixed capacity at each small cell, which is justified for orthogonal wireless multiple-access. 
@cite_4 focuses on device-to-device wireless networks, in which devices are equipped with a limited cache memory and may opportunistically download the requested file from caches of neighboring devices. @cite_20 , caching the same content at the base stations is suggested to improve the overall throughput by inducing cooperation among transmitters, and in @cite_21 the availability of common files at different devices is exploited to match transmitters with receivers such that treating interference as noise is approximately optimal. | {
"cite_N": [
"@cite_4",
"@cite_21",
"@cite_5",
"@cite_12",
"@cite_20"
],
"mid": [
"2964101002",
"2554704564",
"2550555189",
"2039090559",
"2038519345"
],
"abstract": [
"We consider a wireless device-to-device (D2D) network where the nodes have precached information from a library of available files. Nodes request files at random. If the requested file is not in the on-board cache, then it is downloaded from some neighboring node via one-hop local communication. An outage event occurs when a requested file is not found in the neighborhood of the requesting node, or if the network admission control policy decides not to serve the request. We characterize the optimal throughput-outage tradeoff in terms of tight scaling laws for various regimes of the system parameters, when both the number of nodes and the number of files in the library grow to infinity. Our analysis is based on Gupta and Kumar protocol model for the underlying D2D wireless network, widely used in the literature on capacity scaling laws of wireless networks without caching. Our results show that the combination of D2D spectrum reuse and caching at the user nodes yields a per-user throughput independent of the number of users, for any fixed outage probability in (0, 1). This implies that the D2D caching network is scalable: even though the number of users increases, each user achieves constant throughput. This behavior is very different from the classical Gupta and Kumar result on ad hoc wireless networks, for which the per-user throughput vanishes as the number of users increases. Furthermore, we show that the user throughput is directly proportional to the fraction of cached information over the whole file library size. Therefore, we can conclude that D2D caching networks can turn memory into bandwidth (i.e., doubling the on-board cache memory on the user devices yields a 100 increase of the user throughout).",
"In this paper, we study the impact of caching on a recently-proposed spectrum sharing mechanism for device-to-device networks, ITLinQ, which schedules communication based on information-theoretic optimality principles. We provide a lower bound on the spectral reuse (the fraction of users simultaneously scheduled in the ITLinQ framework) as a function of the caching redundancy by proposing a greedy algorithm for source-user association and determining its achievable reuse. In order to demonstrate that higher spectral reuse also results in more efficient communication, we numerically evaluate the rates achieved by our scheme and show that it provides substantial throughput gain over the state-of-the-art.",
"We consider the problem of delivering content cached in a wireless network of n nodes randomly located on a square of area n. The network performance is described by the 2n × n-dimensional caching capacity region of the wireless network. We provide an inner bound on this caching capacity region, and, in the high path-loss regime, a matching (in the scaling sense) outer bound. For large path-loss exponent, this provides an information-theoretic scaling characterization of the entire caching capacity region. The proposed communication scheme achieving the inner bound shows that the problems of cache selection and channel coding can be solved separately without loss of order-optimality. On the other hand, our results show that the common architecture of nearest-neighbor cache selection can be arbitrarily bad, implying that cache selection and load balancing need to be performed jointly.",
"Small cells constitute a promising solution for managing the mobile data growth that has overwhelmed network operators. Local caching of popular content items at the small cell base stations (SBSs) has been proposed to decrease the costly transmissions from the macrocell base stations without requiring high capacity backhaul links for connecting the SBSs with the core network. However, the caching policy design is a challenging problem especially if one considers realistic parameters such as the bandwidth capacity constraints of the SBSs that can be reached in congested urban areas. We consider such a scenario and formulate the joint routing and caching problem aiming to maximize the fraction of content requests served locally by the deployed SBSs. This is an NP-hard problem and, hence, we cannot obtain an optimal solution. Thus, we present a novel reduction to a variant of the facility location problem, which allows us to exploit the rich literature of it, to establish algorithms with approximation guarantees for our problem. Although the reduction does not ensure tight enough bounds in general, extensive numerical results reveal a near-optimal performance that is even up to 38 better compared to conventional caching schemes using realistic system settings.",
"We propose a novel MIMO cooperation framework called the cache-induced opportunistic cooperative MIMO (Coop-MIMO) for video streaming in backhaul limited multicell MIMO networks. By caching a portion of the video files, the base stations (BSs) opportunistically employ Coop-MIMO transmission to achieve MIMO cooperation gain without expensive payload backhaul. We derive closed form expressions of various video streaming performance metrics for cache-induced opportunistic Coop-MIMO and investigate the impact of BS level caching and key system parameters on the performance. Specifically, we first obtain a mixed fluid-diffusion limit for the playback buffer queueing system. Then we derive the approximated video streaming performance using the mixed fluid-diffusion limit. Based on the performance analysis, we formulate the joint optimization of cache control and playback buffer management as a stochastic optimization problem. Then we derive a closed form solution for the playback buffer thresholds and develop a stochastic subgradient algorithm to find the optimal cache control. The analysis shows that the video streaming performance improves linearly with the BS cache size BC, the transmit power cost decreases exponentially with BC, and the backhaul cost decreases linearly with BC."
]
} |
1510.06121 | 1777099960 | Over the past decade, the bulk of wireless traffic has shifted from speech to content. This shift creates the opportunity to cache part of the content in memories closer to the end users, for example in base stations. Most of the prior literature focuses on the reduction of load in the backhaul and core networks due to caching, i.e., on the benefits caching offers for the wireline communication link between the origin server and the caches. In this paper, we are instead interested in the benefits caching can offer for the wireless communication link between the caches and the end users. To quantify the gains of caching for this wireless link, we consider an interference channel in which each transmitter is equipped with an isolated cache memory. Communication takes place in two phases, a content placement phase followed by a content delivery phase. The objective is to design both the placement and the delivery phases to maximize the rate in the delivery phase in response to any possible user demands. Focusing on the three-user case, we show that through careful joint design of these phases, we can reap three distinct benefits from caching: a load balancing gain, an interference cancellation gain, and an interference alignment gain. In our proposed scheme, load balancing is achieved through a specific file splitting and placement, producing a particular pattern of content overlap at the caches. This overlap allows to implement interference cancellation. Further, it allows us to create several virtual transmitters, each transmitting a part of the requested content, which increases interference-alignment possibilities. | An information-theoretic framework for the analysis of cache-aided communication was introduced in @cite_17 in the context of broadcast channels. Unlike here, the caches there are placed at the receivers. 
It is shown in @cite_17 that in this setting the availability of caches allows the exploitation of a coded multicasting gain. | {
"cite_N": [
"@cite_17"
],
"mid": [
"2106248279"
],
"abstract": [
"Caching is a technique to reduce peak traffic rates by prefetching popular content into memories at the end users. Conventionally, these memories are used to deliver requested content in part from a locally cached copy rather than through the network. The gain offered by this approach, which we term local caching gain, depends on the local cache size (i.e., the memory available at each individual user). In this paper, we introduce and exploit a second, global, caching gain not utilized by conventional caching schemes. This gain depends on the aggregate global cache size (i.e., the cumulative memory available at all users), even though there is no cooperation among the users. To evaluate and isolate these two gains, we introduce an information-theoretic formulation of the caching problem focusing on its basic structure. For this setting, we propose a novel coded caching scheme that exploits both local and global caching gains, leading to a multiplicative improvement in the peak rate compared with previously known schemes. In particular, the improvement can be on the order of the number of users in the network. In addition, we argue that the performance of the proposed scheme is within a constant factor of the information-theoretic optimum for all values of the problem parameters."
]
} |
1510.06024 | 2238788190 | Graph-regularized semi-supervised learning has been used effectively for classification when (i) instances are connected through a graph, and (ii) labeled data is scarce. If available, using multiple relations (or graphs) between the instances can improve the prediction performance. On the other hand, when these relations have varying levels of veracity and exhibit varying relevance for the task, very noisy and/or irrelevant relations may deteriorate the performance. As a result, an effective weighing scheme needs to be put in place. In this work, we propose a robust and scalable approach for multi-relational graph-regularized semi-supervised classification. Under a convex optimization scheme, we simultaneously infer weights for the multiple graphs as well as a solution. We provide a careful analysis of the inferred weights, based on which we devise an algorithm that filters out irrelevant and noisy graphs and produces weights proportional to the informativeness of the remaining graphs. Moreover, the proposed method is linearly scalable w.r.t. the number of edges in the union of the multiple graphs. Through extensive experiments we show that our method yields superior results under different noise models, and under increasing number of noisy graphs and intensity of noise, as compared to a list of baselines and state-of-the-art approaches. | While most transductive methods consider a single given graph, there also exist methods that aim to predict the node labels by leveraging multiple graphs @cite_26 @cite_18 @cite_25 @cite_33 @cite_22 @cite_17 . The problem is sometimes referred to as multiple kernel learning @cite_11 , where data is represented by means of kernel matrices defined by similarities between data pairs. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_11",
"@cite_33",
"@cite_22",
"@cite_25",
"@cite_17"
],
"mid": [
"2133026717",
"2161444669",
"2031823405",
"2124649441",
"2109524159",
"",
"2175022989"
],
"abstract": [
"A foundational problem in semi-supervised learning is the construction of a graph underlying the data. We propose to use a method which optimally combines a number of differently constructed graphs. For each of these graphs we associate a basic graph kernel. We then compute an optimal combined kernel. This kernel solves an extended regularization problem which requires a joint minimization over both the data and the set of graph kernels. We present encouraging results on different OCR tasks where the optimal combined kernel is computed from graphs constructed with a variety of distances functions and the 'k' in nearest neighbors.",
"Motivation: During the past decade, the new focus on genomics has highlighted a particular challenge: to integrate the different views of the genome that are provided by various types of experimental data. Results: This paper describes a computational framework for integrating and drawing inferences from a collection of genome-wide measurements. Each dataset is represented via a kernel function, which defines generalized similarity relationships between pairs of entities, such as genes or proteins. The kernel representation is both flexible and efficient, and can be applied to many different types of data. Furthermore, kernel functions derived from different types of data can be combined in a straightforward fashion. Recent advances in the theory of kernel methods have provided efficient algorithms to perform such combinations in a way that minimizes a statistical loss function. These methods exploit semidefinite programming techniques to reduce the problem of finding optimizing kernel combinations to a convex optimization problem. Computational experiments performed using yeast genome-wide datasets, including amino acid sequences, hydropathy profiles, gene expression data and known protein--protein interactions, demonstrate the utility of this approach. A statistical learning algorithm trained from all of these data to recognize particular classes of proteins---membrane proteins and ribosomal proteins---performs significantly better than the same algorithm trained on any single type of data. Availability: Supplementary data at http: noble.gs.washington.edu proj sdp-svm",
"While classical kernel-based classifiers are based on a single kernel, in practice it is often desirable to base classifiers on combinations of multiple kernels. (2004) considered conic combinations of kernel matrices for the support vector machine (SVM), and showed that the optimization of the coefficients of such a combination reduces to a convex optimization problem known as a quadratically-constrained quadratic program (QCQP). Unfortunately, current convex optimization toolboxes can solve this problem only for a small number of kernels and a small number of data points; moreover, the sequential minimal optimization (SMO) techniques that are essential in large-scale implementations of the SVM cannot be applied because the cost function is non-differentiable. We propose a novel dual formulation of the QCQP as a second-order cone programming problem, and show how to exploit the technique of Moreau-Yosida regularization to yield a formulation to which SMO techniques can be applied. We present experimental results that show that our SMO-based algorithm is significantly more efficient than the general-purpose interior point methods available in current optimization toolboxes.",
"Background: Most successful computational approaches for protein function prediction integrate multiple genomics and proteomics data sources to make inferences about the function of unknown proteins. The most accurate of these algorithms have long running times, making them unsuitable for real-time protein function prediction in large genomes. As a result, the predictions of these algorithms are stored in static databases that can easily become outdated. We propose a new algorithm, GeneMANIA, that is as accurate as the leading methods, while capable of predicting protein function in real-time. Results: We use a fast heuristic algorithm, derived from ridge regression, to integrate multiple functional association networks and predict gene function from a single process-specific network using label propagation. Our algorithm is efficient enough to be deployed on a modern webserver and is as accurate as, or more so than, the leading methods on the MouseFunc I benchmark and a new yeast function prediction benchmark; it is robust to redundant and irrelevant data and requires, on average, less than ten seconds of computation time on tasks from these benchmarks. Conclusion: GeneMANIA is fast enough to predict gene function on-the-fly while achieving state-of-the-art accuracy. A prototype version of a GeneMANIA-based webserver is available at http://morrislab.med.utoronto.ca/prototype.",
"Transductive inference on graphs such as label propagation algorithms is receiving a lot of attention. In this paper, we address a label propagation problem on multiple networks and present a new algorithm that automatically integrates structure information brought in by multiple networks. The proposed method is robust in that irrelevant networks are automatically deemphasized, which is an advantage over Tsuda's approach (2005). We also show that the proposed algorithm can be interpreted as an expectation-maximization (EM) algorithm with a student-t prior. Finally, we demonstrate the usefulness of our method in protein function prediction and digit classification, and show analytically and experimentally that our algorithm is much more efficient than existing algorithms.",
"",
""
]
} |
1510.06024 | 2238788190 | Graph-regularized semi-supervised learning has been used effectively for classification when (i) instances are connected through a graph, and (ii) labeled data is scarce. If available, using multiple relations (or graphs) between the instances can improve the prediction performance. On the other hand, when these relations have varying levels of veracity and exhibit varying relevance for the task, very noisy and or irrelevant relations may deteriorate the performance. As a result, an effective weighing scheme needs to be put in place. In this work, we propose a robust and scalable approach for multi-relational graph-regularized semi-supervised classification. Under a convex optimization scheme, we simultaneously infer weights for the multiple graphs as well as a solution. We provide a careful analysis of the inferred weights, based on which we devise an algorithm that filters out irrelevant and noisy graphs and produces weights proportional to the informativeness of the remaining graphs. Moreover, the proposed method is linearly scalable w.r.t. the number of edges in the union of the multiple graphs. Through extensive experiments we show that our method yields superior results under different noise models, and under increasing number of noisy graphs and intensity of noise, as compared to a list of baselines and state-of-the-art approaches. | The SDP SVM method by Lanckriet et al. @cite_26 uses a weighted sum of kernel matrices, and finds the optimal weights within a semi-definite programming (SDP) framework. Argyriou et al. @cite_18 build on their method to instead combine Laplacian kernel matrices. Multiple kernel learning has also been used for applications such as protein function prediction @cite_23 and image classification and retrieval @cite_15 . The SDP SVM method, however, is cubic in data size and hence not scalable to large datasets. Moreover, it requires valid (i.e., positive definite) kernel matrices as input. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_23",
"@cite_15"
],
"mid": [
"2133026717",
"2161444669",
"195996384",
"2062483254"
],
"abstract": [
"A foundational problem in semi-supervised learning is the construction of a graph underlying the data. We propose to use a method which optimally combines a number of differently constructed graphs. For each of these graphs we associate a basic graph kernel. We then compute an optimal combined kernel. This kernel solves an extended regularization problem which requires a joint minimization over both the data and the set of graph kernels. We present encouraging results on different OCR tasks where the optimal combined kernel is computed from graphs constructed with a variety of distance functions and the 'k' in nearest neighbors.",
"Motivation: During the past decade, the new focus on genomics has highlighted a particular challenge: to integrate the different views of the genome that are provided by various types of experimental data. Results: This paper describes a computational framework for integrating and drawing inferences from a collection of genome-wide measurements. Each dataset is represented via a kernel function, which defines generalized similarity relationships between pairs of entities, such as genes or proteins. The kernel representation is both flexible and efficient, and can be applied to many different types of data. Furthermore, kernel functions derived from different types of data can be combined in a straightforward fashion. Recent advances in the theory of kernel methods have provided efficient algorithms to perform such combinations in a way that minimizes a statistical loss function. These methods exploit semidefinite programming techniques to reduce the problem of finding optimizing kernel combinations to a convex optimization problem. Computational experiments performed using yeast genome-wide datasets, including amino acid sequences, hydropathy profiles, gene expression data and known protein--protein interactions, demonstrate the utility of this approach. A statistical learning algorithm trained from all of these data to recognize particular classes of proteins---membrane proteins and ribosomal proteins---performs significantly better than the same algorithm trained on any single type of data. Availability: Supplementary data at http://noble.gs.washington.edu/proj/sdp-svm",
"Determining protein function constitutes an exercise in integrating information derived from several heterogeneous high-throughput experiments. To utilize the information spread across multiple sources in a combined fashion, these data sources are transformed into kernels. Several protein function prediction methods follow a two-phased approach: they first optimize the weights on individual kernels to produce a composite kernel, and then train a classifier on the composite kernel. As such, these methods result in an optimal composite kernel, but not necessarily in an optimal classifier. On the other hand, some methods optimize the loss of binary classifiers, and learn weights for the different kernels iteratively. A protein has multiple functions, and each function can be viewed as a label. These methods solve the problem of optimizing weights on the input kernels for each of the labels. This is computationally expensive and ignores inter-label correlations. In this paper, we propose a method called Protein Function Prediction by Integrating Multiple Kernels (ProMK). ProMK iteratively optimizes the phases of learning optimal weights and reducing the empirical loss of a multi-label classifier for each of the labels simultaneously, using a combined objective function. ProMK can assign larger weights to smooth kernels and downgrade the weights on noisy kernels. We evaluate the ability of ProMK to predict the function of proteins using several standard benchmarks. We show that our approach performs better than previously proposed protein function prediction approaches that integrate data from multiple networks, and multi-label multiple kernel learning methods.",
"For large scale image data mining, a challenging problem is to design a method that could work efficiently under the situation of little ground-truth annotation and a mass of unlabeled or noisy data. As one of the major solutions, semi-supervised learning (SSL) has been deeply investigated and widely used in image classification, ranking and retrieval. However, most SSL approaches are not able to incorporate multiple information sources. Furthermore, no sample selection is done on unlabeled data, leading to the unpredictable risk brought by uncontrolled unlabeled data and heavy computational burden that is not suitable for learning on real world dataset. In this paper, we propose a scalable semi-supervised multiple kernel learning method (S3MKL) to deal with the first problem. Our method imposes group LASSO regularization on the kernel coefficients to avoid over-fitting and conditional expectation consensus for regularizing the behaviors of different kernel on the unlabeled data. To reduce the risk of using unlabeled data, we also design a hashing system where multiple kernel locality sensitive hashing (MKLSH) are constructed with respect to different kernels to identify a set of \"informative\" and \"compact\" unlabeled training subset from a large unlabeled data corpus. Combining S3MKL with MKLSH, the method is suitable for real world image classification and personalized web image re-ranking with very little user interaction. Comprehensive experiments are conducted to test the performance of our method, and the results show that our method provides promising powers for large scale real world image classification and retrieval."
]
} |
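The rows above repeatedly describe one core recipe: form a weighted sum of per-graph kernels or Laplacians, then run a graph-regularized learner on the combination. A minimal numpy sketch of that recipe follows; the function names and the ridge-style propagation step are illustrative assumptions, not the SDP formulation of the cited work.

```python
import numpy as np

def combined_laplacian(adjs, weights):
    """Weighted sum of unnormalized graph Laplacians, one per relation/graph."""
    n = adjs[0].shape[0]
    L = np.zeros((n, n))
    for A, w in zip(adjs, weights):
        D = np.diag(A.sum(axis=1))  # degree matrix of this graph
        L += w * (D - A)
    return L

def propagate(adjs, weights, y, lam=1.0):
    """Ridge-style label propagation on the combined graph:
    f = argmin ||f - y||^2 + lam * f^T L f  =>  (I + lam * L) f = y."""
    L = combined_laplacian(adjs, weights)
    return np.linalg.solve(np.eye(L.shape[0]) + lam * L, y)
```

On a 4-node chain labeled +1 and -1 at the endpoints, the solution interpolates smoothly between the labels; giving a noisy graph weight zero reduces the combination to the informative graph alone.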
1510.06024 | 2238788190 | Graph-regularized semi-supervised learning has been used effectively for classification when (i) instances are connected through a graph, and (ii) labeled data is scarce. If available, using multiple relations (or graphs) between the instances can improve the prediction performance. On the other hand, when these relations have varying levels of veracity and exhibit varying relevance for the task, very noisy and or irrelevant relations may deteriorate the performance. As a result, an effective weighing scheme needs to be put in place. In this work, we propose a robust and scalable approach for multi-relational graph-regularized semi-supervised classification. Under a convex optimization scheme, we simultaneously infer weights for the multiple graphs as well as a solution. We provide a careful analysis of the inferred weights, based on which we devise an algorithm that filters out irrelevant and noisy graphs and produces weights proportional to the informativeness of the remaining graphs. Moreover, the proposed method is linearly scalable w.r.t. the number of edges in the union of the multiple graphs. Through extensive experiments we show that our method yields superior results under different noise models, and under increasing number of noisy graphs and intensity of noise, as compared to a list of baselines and state-of-the-art approaches. | Mostafavi et al. @cite_33 @cite_29 construct the optimal composite graph by solving a linear regression problem using labeled data, an approach later also used by Luo et al. @cite_37 . Different from most semi-supervised transductive methods, which leverage both labeled and unlabeled data, these methods use only the labeled data to estimate the graph weights. | {
"cite_N": [
"@cite_37",
"@cite_29",
"@cite_33"
],
"mid": [
"",
"2120688086",
"2124649441"
],
"abstract": [
"",
"Motivation: Many algorithms that integrate multiple functional association networks for predicting gene function construct a composite network as a weighted sum of the individual networks and then use the composite network to predict gene function. The weight assigned to an individual network represents the usefulness of that network in predicting a given gene function. However, because many categories of gene function have a small number of annotations, the process of assigning these network weights is prone to overfitting. Results: Here, we address this problem by proposing a novel approach to combining multiple functional association networks. In particular, we present a method where network weights are simultaneously optimized on sets of related function categories. The method is simpler and faster than existing approaches. Further, we show that it produces composite networks with improved function prediction accuracy using five example species (yeast, mouse, fly, Escherichia coli and human). Availability: Networks and code are available from: http://morrislab.med.utoronto.ca/sara/SW Contact: smostafavi@cs.toronto.edu; quaid.morris@utoronto.ca Supplementary information: Supplementary data are available at Bioinformatics online.",
"Background: Most successful computational approaches for protein function prediction integrate multiple genomics and proteomics data sources to make inferences about the function of unknown proteins. The most accurate of these algorithms have long running times, making them unsuitable for real-time protein function prediction in large genomes. As a result, the predictions of these algorithms are stored in static databases that can easily become outdated. We propose a new algorithm, GeneMANIA, that is as accurate as the leading methods, while capable of predicting protein function in real-time. Results: We use a fast heuristic algorithm, derived from ridge regression, to integrate multiple functional association networks and predict gene function from a single process-specific network using label propagation. Our algorithm is efficient enough to be deployed on a modern webserver and is as accurate as, or more so than, the leading methods on the MouseFunc I benchmark and a new yeast function prediction benchmark; it is robust to redundant and irrelevant data and requires, on average, less than ten seconds of computation time on tasks from these benchmarks. Conclusion: GeneMANIA is fast enough to predict gene function on-the-fly while achieving state-of-the-art accuracy. A prototype version of a GeneMANIA-based webserver is available at http://morrislab.med.utoronto.ca/prototype."
]
} |
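The regression-based weighting described in the row above (fit graph weights from labeled data only) can be sketched in a few lines: solve a least-squares problem so that the weighted sum of adjacencies matches a co-label target on labeled node pairs. This is a simplified illustration; the +1/-1 target encoding, the non-negativity clipping and all names are assumptions, not the exact GeneMANIA formulation.

```python
import numpy as np

def regress_graph_weights(adjs, labels):
    """Fit non-negative weights w so that sum_k w_k * A_k approximates a
    +1/-1 co-label target on labeled node pairs (simplified sketch)."""
    labeled = [i for i, y in enumerate(labels) if y is not None]
    X, t = [], []
    for idx, a in enumerate(labeled):
        for b in labeled[idx + 1:]:
            X.append([A[a, b] for A in adjs])          # edge values per graph
            t.append(1.0 if labels[a] == labels[b] else -1.0)
    w, *_ = np.linalg.lstsq(np.array(X), np.array(t), rcond=None)
    return np.clip(w, 0.0, None)  # irrelevant graphs end up with weight ~0
```

A graph whose edges connect same-label nodes receives a large weight, while a graph that mostly links nodes with different labels is driven negative by the regression and clipped to zero.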
1510.06024 | 2238788190 | Graph-regularized semi-supervised learning has been used effectively for classification when (i) instances are connected through a graph, and (ii) labeled data is scarce. If available, using multiple relations (or graphs) between the instances can improve the prediction performance. On the other hand, when these relations have varying levels of veracity and exhibit varying relevance for the task, very noisy and or irrelevant relations may deteriorate the performance. As a result, an effective weighing scheme needs to be put in place. In this work, we propose a robust and scalable approach for multi-relational graph-regularized semi-supervised classification. Under a convex optimization scheme, we simultaneously infer weights for the multiple graphs as well as a solution. We provide a careful analysis of the inferred weights, based on which we devise an algorithm that filters out irrelevant and noisy graphs and produces weights proportional to the informativeness of the remaining graphs. Moreover, the proposed method is linearly scalable w.r.t. the number of edges in the union of the multiple graphs. Through extensive experiments we show that our method yields superior results under different noise models, and under increasing number of noisy graphs and intensity of noise, as compared to a list of baselines and state-of-the-art approaches. | Different from the alternating optimization approaches in @cite_22 @cite_17 @cite_16 , we estimate the weights directly by solving a single optimization problem. Moreover, we iteratively discard the irrelevant graphs based on a simulated annealing approach guided by the cross-validation performance to prevent them from deteriorating the results. Through weeding out the intrusive graphs and learning globally optimal graph weights we produce the most robust results. | {
"cite_N": [
"@cite_16",
"@cite_22",
"@cite_17"
],
"mid": [
"2396185461",
"2109524159",
"2175022989"
],
"abstract": [
"A number of real-world networks are heterogeneous information networks, which are composed of different types of nodes and links. Numerical prediction in heterogeneous information networks is a challenging but significant area because network based information for unlabeled objects is usually limited to make precise estimations. In this paper, we consider a graph regularized meta-path based transductive regression model (Grempt), which combines the principal philosophies of typical graph-based transductive classification methods and transductive regression models designed for homogeneous networks. The computation of our method is time and space efficient and the precision of our model can be verified by numerical experiments.",
"Transductive inference on graphs such as label propagation algorithms is receiving a lot of attention. In this paper, we address a label propagation problem on multiple networks and present a new algorithm that automatically integrates structure information brought in by multiple networks. The proposed method is robust in that irrelevant networks are automatically deemphasized, which is an advantage over Tsuda's approach (2005). We also show that the proposed algorithm can be interpreted as an expectation-maximization (EM) algorithm with a student-t prior. Finally, we demonstrate the usefulness of our method in protein function prediction and digit classification, and show analytically and experimentally that our algorithm is much more efficient than existing algorithms.",
""
]
} |
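The row above describes iteratively discarding irrelevant graphs when validation performance says so. The sketch below is a greedy hill-climbing simplification of that idea; the paper uses a simulated-annealing search guided by cross-validation, so this loop and the `score` callback are simplifying assumptions.

```python
def greedy_filter(graph_ids, score):
    """Greedily drop graphs while dropping improves the validation score.
    `score(subset)` returns a higher-is-better validation performance."""
    keep = list(graph_ids)
    improved = True
    while improved and len(keep) > 1:
        improved = False
        for g in list(keep):
            trial = [h for h in keep if h != g]
            if score(trial) > score(keep):  # dropping g helps: commit it
                keep = trial
                improved = True
                break
    return keep
```

In practice `score` would run the graph-regularized classifier with cross-validation on the candidate subset; any stand-in with the same signature works for testing.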
1510.06223 | 1756034034 | In this work, we propose a regression method to predict the popularity of an online video measured by its number of views. Our method uses Support Vector Regression with Gaussian radial basis functions. We show that predicting popularity patterns with this approach provides more precise and more stable prediction results, mainly thanks to the nonlinear character of the proposed method as well as its robustness. We prove the superiority of our method against the state of the art using datasets containing almost 24 000 videos from YouTube and Facebook. We also show that using visual features, such as the outputs of deep neural networks or scene dynamics’ metrics, can be useful for popularity prediction before content publication. Furthermore, we show that popularity prediction accuracy can be improved by combining early distribution patterns with social and visual features and that social features represent a much stronger signal in terms of video popularity prediction than the visual ones. | Due to the enormous growth of the number of Internet users and online data available, popularity prediction of online content has received a lot of attention from the research community. Early works have focused on user web-access patterns @cite_31 and more specifically on the distribution of the video content @cite_21 , as it accounted for a significant portion of the Internet traffic and the findings could be used to determine the benefits of caching. Once the general access patterns were understood, the attention of the research community shifted to the actual popularity prediction of various content types. | {
"cite_N": [
"@cite_31",
"@cite_21"
],
"mid": [
"2144684817",
"1530535197"
],
"abstract": [
"The authors propose models for both temporal and spatial locality of reference in streams of requests arriving at Web servers. They show that simple models based on document popularity alone are insufficient for capturing either temporal or spatial locality. Instead, they rely on an equivalent, but numerical, representation of a reference stream: a stack distance trace. They show that temporal locality can be characterized by the marginal distribution of the stack distance trace, and propose models for typical distributions and compare their cache performance to the traces. They also show that spatial locality in a reference stream can be characterized using the notion of self-similarity. Self-similarity describes long-range correlations in the data set, which is a property that previous researchers have found hard to incorporate into synthetic reference strings. They show that stack distance strings appear to be strongly self-similar, and provide measurements of the degree of self-similarity in the traces. Finally, they discuss methods for generating synthetic Web traces that exhibit the properties of temporal and spatial locality measured in the data.",
"The increasing availability of continuous-media data is provoking a significant change in Internet workloads. For example, video from news, sports, and entertainment sites, and audio from Internet broadcast radio, telephony, and peer-to-peer networks, are becoming commonplace. Compared with traditional Web workloads, multimedia objects can require significantly more storage and transmission bandwidth. As a result, performance optimizations such as streaming-media proxy caches and multicast delivery are attractive for minimizing the impact of streaming-media workloads on the Internet. However, because few studies of streaming-media workloads exist, the extent to which such mechanisms will improve performance is unclear. This paper (1) presents and analyzes a client-based streaming-media workload generated by a large organization, (2) compares media workload characteristics to traditional Web-object workloads, and (3) explores the effectiveness of performance optimizations on streaming-media workloads. To perform the study, we collected traces of streaming-media sessions initiated by clients from a large university to servers in the Internet. In the week-long trace used for this paper, we monitored and analyzed RTSP sessions from 4,786 clients accessing 23,738 distinct streaming-media objects from 866 servers. Our analysis of this trace provides a detailed characterization of streaming-media for this workload."
]
} |
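The surrounding rows predict video popularity with Support Vector Regression under a Gaussian RBF kernel. As a self-contained stand-in, the sketch below uses kernel ridge regression with the same RBF kernel; the class name, hyperparameters and the ridge substitution for epsilon-SVR are assumptions, not the paper's implementation.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """Gaussian RBF kernel matrix between row-vector sets X (n,d) and Z (m,d)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class RBFRegressor:
    """Kernel ridge regression with an RBF kernel -- a numpy-only stand-in
    for the epsilon-SVR the paper actually uses."""
    def __init__(self, gamma=1.0, alpha=1e-6):
        self.gamma, self.alpha = gamma, alpha
    def fit(self, X, y):
        K = rbf_kernel(X, X, self.gamma)
        self.X = X
        self.coef = np.linalg.solve(K + self.alpha * np.eye(len(X)), y)
        return self
    def predict(self, Z):
        return rbf_kernel(Z, self.X, self.gamma) @ self.coef
```

For popularity prediction, each row of `X` would hold early view counts (typically log-transformed) plus social or visual features, and `y` the final view count.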
1510.06223 | 1756034034 | In this work, we propose a regression method to predict the popularity of an online video measured by its number of views. Our method uses Support Vector Regression with Gaussian radial basis functions. We show that predicting popularity patterns with this approach provides more precise and more stable prediction results, mainly thanks to the nonlinear character of the proposed method as well as its robustness. We prove the superiority of our method against the state of the art using datasets containing almost 24 000 videos from YouTube and Facebook. We also show that using visual features, such as the outputs of deep neural networks or scene dynamics’ metrics, can be useful for popularity prediction before content publication. Furthermore, we show that popularity prediction accuracy can be improved by combining early distribution patterns with social and visual features and that social features represent a much stronger signal in terms of video popularity prediction than the visual ones. | Textual content, such as Twitter messages, Digg stories or online news, is typically distributed very fast and catches users' attention for a relatively short period of time @cite_22 . Its popularity, measured in number of user actions such as comments, re-tweets or likes, is therefore highly skewed and can be modelled, e.g. with a log-normal distribution @cite_25 . Video content exhibits a similar heavy-tailed distribution, while its popularity is typically measured by the number of views @cite_36 . Thanks to the availability of the video content and related popularity data via the YouTube platform, where every minute over 100 hours of video is uploaded @cite_36 , researchers were able to investigate other aspects related to the video content distribution. The most representative topics include prediction of the peak popularity time of the video @cite_13 or identifying popularity evolution patterns @cite_11 .
However, most, if not all, methods used to predict the popularity of a given video rely on its early evolution pattern @cite_32 @cite_24 @cite_0 or its social context @cite_4 . Contrary to the method proposed in this paper, they do not exploit additional visual cues to improve their prediction accuracy. | {
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_36",
"@cite_32",
"@cite_24",
"@cite_0",
"@cite_13",
"@cite_25",
"@cite_11"
],
"mid": [
"2090481988",
"1995358441",
"2062261051",
"2070366435",
"2129419521",
"2037970810",
"2087398646",
"2134742119",
"2042034885"
],
"abstract": [
"This paper presents a systematic online prediction method (Social-Forecast) that is capable to accurately forecast the popularity of videos promoted by social media. Social-Forecast explicitly considers the dynamically changing and evolving propagation patterns of videos in social media when making popularity forecasts, thereby being situation and context aware. Social-Forecast aims to maximize the forecast reward, which is defined as a tradeoff between the popularity prediction accuracy and the timeliness with which a prediction is issued. The forecasting is performed online and requires no training phase or a priori knowledge. We analytically bound the prediction performance loss of Social-Forecast as compared to that obtained by an omniscient oracle and prove that the bound is sublinear in the number of video arrivals, thereby guaranteeing its short-term performance as well as its asymptotic convergence to the optimal performance. In addition, we conduct extensive experiments using real-world data traces collected from the videos shared in RenRen, one of the largest online social networks in China. These experiments show that our proposed method outperforms existing view-based approaches for popularity prediction (which are not context-aware) by more than 30 in terms of prediction rewards.",
"This paper presents a study of the life cycle of news articles posted online. We describe the interplay between website visitation patterns and social media reactions to news content. We show that we can use this hybrid observation method to characterize distinct classes of articles. We also find that social media reactions can help predict future visitation patterns early and accurately. We validate our methods using qualitative analysis as well as quantitative analysis on data from a large international news network, for a set of articles generating more than 3,000,000 visits and 200,000 social media reactions. We show that it is possible to model accurately the overall traffic articles will ultimately receive by observing the first ten to twenty minutes of social media reactions. Achieving the same prediction accuracy with visits alone would require to wait for three hours of data. We also describe significant improvements on the accuracy of the early prediction of shelf-life for news stories.",
"Social media platforms have democratized the process of web content creation allowing mere consumers to become creators and distributors of content. But this has also contributed to an explosive growth of information and has intensified the online competition for users attention, since only a small number of items become popular while the rest remain unknown. Understanding what makes one item more popular than another, observing its popularity dynamics, and being able to predict its popularity has thus attracted a lot of interest in the past few years. Predicting the popularity of web content is useful in many areas such as network dimensioning (e.g., caching and replication), online marketing (e.g., recommendation systems and media advertising), or real-world outcome prediction (e.g., economical trends). In this survey, we review the current findings on web content popularity prediction. We describe the different popularity prediction models, present the features that have shown good predictive capabilities, and reveal factors known to influence web content popularity.",
"We present a method for accurately predicting the long time popularity of online content from early measurements of user's access. Using two content sharing portals, Youtube and Digg, we show that by modeling the accrual of views and votes on content offered by these services we can predict the long-term dynamics of individual submissions from initial data. In the case of Digg, measuring access to given stories during the first two hours allows us to forecast their popularity 30 days ahead with remarkable accuracy, while downloads of Youtube videos need to be followed for 10 days to attain the same performance. The differing time scales of the predictions are shown to be due to differences in how content is consumed on the two portals: Digg stories quickly become outdated, while Youtube videos are still found long after they are initially submitted to the portal. We show that predictions are more accurate for submissions for which attention decays quickly, whereas predictions for evergreen content will be prone to larger errors.",
"This paper develops a framework for studying the popularity dynamics of user-generated videos, presents a characterization of the popularity dynamics, and proposes a model that captures the key properties of these dynamics. We illustrate the biases that may be introduced in the analysis for some choices of the sampling technique used for collecting data; however, sampling from recently-uploaded videos provides a dataset that is seemingly unbiased. Using a dataset that tracks the views to a sample of recently-uploaded YouTube videos over the first eight months of their lifetime, we study the popularity dynamics. We find that the relative popularities of the videos within our dataset are highly non-stationary, owing primarily to large differences in the required time since upload until peak popularity is finally achieved, and secondly to popularity oscillation. We propose a model that can accurately capture the popularity dynamics of collections of recently-uploaded videos as they age, including key measures such as hot set churn statistics, and the evolution of the viewing rate and total views distributions over time.",
"Predicting Web content popularity is an important task for supporting the design and evaluation of a wide range of systems, from targeted advertising to effective search and recommendation services. We here present two simple models for predicting the future popularity of Web content based on historical information given by early popularity measures. Our approach is validated on datasets consisting of videos from the widely used YouTube video-sharing portal. Our experimental results show that, compared to a state-of-the-art baseline model, our proposed models lead to significant decreases in relative squared errors, reaching up to 20 reduction on average, and larger reductions (of up to 71 ) for videos that experience a high peak in popularity in their early days followed by a sharp decrease in popularity.",
"Viral videos that gain popularity through the process of Internet sharing are having a profound impact on society. Existing studies on viral videos have only been on small or confidential datasets. We collect by far the largest open benchmark for viral video study called CMU Viral Video Dataset, and share it with researchers from both academia and industry. Having verified existing observations on the dataset, we discover some interesting characteristics of viral videos. Based on our analysis, in the second half of the paper, we propose a model to forecast the future peak day of viral videos. The application of our work is not only important for advertising agencies to plan advertising campaigns and estimate costs, but also for companies to be able to quickly respond to rivals in viral marketing campaigns. The proposed method is unique in that it is the first attempt to incorporate video metadata into the peak day prediction. The empirical results demonstrate that the proposed method outperforms the state-of-the-art methods, with statistically significant differences.",
"Online news agents provide commenting facilities for their readers to express their opinions or sentiments with regards to news stories. The number of user supplied comments on a news article may be indicative of its importance, interestingness, or impact. We explore the news comments space, and compare the log-normal and the negative binomial distributions for modeling comments from various news agents. These estimated models can be used to normalize raw comment counts and enable comparison across different news sites. We also examine the feasibility of online prediction of the number of comments, based on the volume observed shortly after publication. We report on solid performance for predicting news comment volume in the long run, after short observation. This prediction can be useful for identifying news stories with the potential to “take off,” and can be used to support front page optimization for news sites.",
"We study the relaxation response of a social system after endogenous and exogenous bursts of activity using the time series of daily views for nearly 5 million videos on YouTube. We find that most activity can be described accurately as a Poisson process. However, we also find hundreds of thousands of examples in which a burst of activity is followed by an ubiquitous power-law relaxation governing the timing of views. We find that these relaxation exponents cluster into three distinct classes and allow for the classification of collective human dynamics. This is consistent with an epidemic model on a social network containing two ingredients: a power-law distribution of waiting times between cause and action and an epidemic cascade of actions becoming the cause of future actions. This model is a conceptual extension of the fluctuation-dissipation theorem to social systems [Ruelle, D (2004) Phys Today 57:48–53] and [Roehner BM, et al, (2004) Int J Mod Phys C 15:809–834], and provides a unique framework for the investigation of timing in complex systems."
]
} |
1510.06223 | 1756034034 | In this work, we propose a regression method to predict the popularity of an online video measured by its number of views. Our method uses Support Vector Regression with Gaussian radial basis functions. We show that predicting popularity patterns with this approach provides more precise and more stable prediction results, mainly thanks to the nonlinear character of the proposed method as well as its robustness. We prove the superiority of our method against the state of the art using datasets containing almost 24 000 videos from YouTube and Facebook. We also show that using visual features, such as the outputs of deep neural networks or scene dynamics’ metrics, can be useful for popularity prediction before content publication. Furthermore, we show that popularity prediction accuracy can be improved by combining early distribution patterns with social and visual features and that social features represent a much stronger signal in terms of video popularity prediction than the visual ones. | In particular, Szabo and Huberman @cite_32 observe a log-linear relationship between the views of YouTube videos at early stages after publication and later times. The reported Pearson correlation coefficient between the log-transformed number of views after seven and thirty days after publication exceeds 0.9, which suggests that the more popular a submission is at the beginning, the more popular it will become later. | {
"cite_N": [
"@cite_32"
],
"mid": [
"2070366435"
],
"abstract": [
"We present a method for accurately predicting the long time popularity of online content from early measurements of user's access. Using two content sharing portals, Youtube and Digg, we show that by modeling the accrual of views and votes on content offered by these services we can predict the long-term dynamics of individual submissions from initial data. In the case of Digg, measuring access to given stories during the first two hours allows us to forecast their popularity 30 days ahead with remarkable accuracy, while downloads of Youtube videos need to be followed for 10 days to attain the same performance. The differing time scales of the predictions are shown to be due to differences in how content is consumed on the two portals: Digg stories quickly become outdated, while Youtube videos are still found long after they are initially submitted to the portal. We show that predictions are more accurate for submissions for which attention decays quickly, whereas predictions for evergreen content will be prone to larger errors."
]
} |
1510.06223 | 1756034034 | In this work, we propose a regression method to predict the popularity of an online video measured by its number of views. Our method uses Support Vector Regression with Gaussian radial basis functions. We show that predicting popularity patterns with this approach provides more precise and more stable prediction results, mainly thanks to the nonlinear character of the proposed method as well as its robustness. We prove the superiority of our method against the state of the art using datasets containing almost 24 000 videos from YouTube and Facebook. We also show that using visual features, such as the outputs of deep neural networks or scene dynamics’ metrics, can be useful for popularity prediction before content publication. Furthermore, we show that popularity prediction accuracy can be improved by combining early distribution patterns with social and visual features and that social features represent a much stronger signal in terms of video popularity prediction than the visual ones. | Several recent works @cite_1 @cite_30 have also tackled the problem of image popularity in social media from a temporal perspective. Exploiting the popularity patterns and trends, the authors of these works proposed estimating popularity based on multi-scale analysis of the dependencies between user, time and item represented in Flickr pictures. | {
"cite_N": [
"@cite_30",
"@cite_1"
],
"mid": [
"2963759277",
"2472257696"
],
"abstract": [
"The evolution of social media popularity exhibits rich temporality, i.e., popularities change over time at various levels of temporal granularity. This is influenced by temporal variations of public attentions or user activities. For example, popularity patterns of street snap on Flickr are observed to depict distinctive fashion styles at specific time scales, such as season-based periodic fluctuations for Trench Coat or one-off peak in days for Evening Dress. However, this fact is often overlooked by existing research of popularity modeling. We present the first study to incorporate multiple time-scale dynamics into predicting online popularity. We propose a novel computational framework in the paper, named Multi-scale Temporalization, for estimating popularity based on multi-scale decomposition and structural reconstruction in a tensor space of user, post, and time by joint low-rank constraints. By considering the noise caused by context inconsistency, we design a data rearrangement step based on context aggregation as preprocessing to enhance contextual relevance of neighboring data in the tensor space. As a result, our approach can leverage multiple levels of temporal characteristics and reduce the noise of data decomposition to improve modeling effectiveness. We evaluate our approach on two large-scale Flickr image datasets with over 1.8 million photos in total, for the task of popularity prediction. The results show that our approach significantly outperforms state-of-the-art popularity prediction techniques, with a relative improvement of 10.9 -47.5 in terms of prediction accuracy.",
"Time information plays a crucial role on social media popularity. Existing research on popularity prediction, effective though, ignores temporal information which is highly related to user-item associations and thus often results in limited success. An essential way is to consider all these factors (user, item, and time), which capture the dynamic nature of photo popularity. In this paper, we present a novel approach to factorize the popularity into user-item context and time-sensitive context for exploring the mechanism of dynamic popularity. The user-item context provides a holistic view of popularity, while the time-sensitive context captures the temporal dynamics nature of popularity. Accordingly, we develop two kinds of time-sensitive features, including user activeness variability and photo prevalence variability. To predict photo popularity, we propose a novel framework named Multi-scale Temporal Decomposition (MTD), which decomposes the popularity matrix in latent spaces based on contextual associations. Specifically, the proposed MTD models time-sensitive context on different time scales, which is beneficial to automatically learn temporal patterns. Based on the experiments conducted on a real-world dataset with 1.29M photos from Flickr, our proposed MTD can achieve the prediction accuracy of 79.8 and outperform the best three state-of-the-art methods with a relative improvement of 9.6 on average."
]
} |
1510.06223 | 1756034034 | In this work, we propose a regression method to predict the popularity of an online video measured by its number of views. Our method uses Support Vector Regression with Gaussian radial basis functions. We show that predicting popularity patterns with this approach provides more precise and more stable prediction results, mainly thanks to the nonlinear character of the proposed method as well as its robustness. We prove the superiority of our method against the state of the art using datasets containing almost 24 000 videos from YouTube and Facebook. We also show that using visual features, such as the outputs of deep neural networks or scene dynamics’ metrics, can be useful for popularity prediction before content publication. Furthermore, we show that popularity prediction accuracy can be improved by combining early distribution patterns with social and visual features and that social features represent a much stronger signal in terms of video popularity prediction than the visual ones. | We build on these works by proposing a popularity prediction method for social media videos. We use computer vision algorithms to calculate visual features and verify whether combining them with early evolution data can improve prediction accuracy for videos published online. Although recent works have also addressed the problem of online video analysis @cite_3 and popularity prediction @cite_28 from a multi-modal perspective, their focus is on micro-videos that last not more than a few seconds, while we consider longer videos. To the best of our knowledge, this is one of the first attempts to use this kind of feature in the context of online video popularity prediction. | {
"cite_N": [
"@cite_28",
"@cite_3"
],
"mid": [
"2524482439",
"2527280178"
],
"abstract": [
"Micro-videos, a new form of user generated contents (UGCs), are gaining increasing enthusiasm. Popular micro-videos have enormous commercial potential in many ways, such as online marketing and brand tracking. In fact, the popularity prediction of traditional UGCs including tweets, web images, and long videos, has achieved good theoretical underpinnings and great practical success. However, little research has thus far been conducted to predict the popularity of the bite-sized videos. This task is non-trivial due to three reasons: 1) micro-videos are short in duration and of low quality; 2) they can be described by multiple heterogeneous channels, spanning from social, visual, acoustic to textual modalities; and 3) there are no available benchmark dataset and discriminant features that are suitable for this task. Towards this end, we present a transductive multi-modal learning model. The proposed model is designed to find the optimal latent common space, unifying and preserving information from different modalities, whereby micro-videos can be better represented. This latent space can be used to alleviate the information insufficiency problem caused by the brief nature of micro-videos. In addition, we built a benchmark dataset and extracted a rich set of popularity-oriented features to characterize the popular micro-videos. Extensive experiments have demonstrated the effectiveness of the proposed model. As a side contribution, we have released the dataset, codes and parameters to facilitate other researchers.",
"According to our statistics on over 2 million micro-videos, only 1.22 of them are associated with venue information, which greatly hinders the location-oriented applications and personalized services. To alleviate this problem, we aim to label the bite-sized video clips with venue categories. It is, however, nontrivial due to three reasons: 1) no available benchmark dataset; 2) insufficient information, low quality, and 3) information loss; and 3) complex relatedness among venue categories. Towards this end, we propose a scheme comprising of two components. In particular, we first crawl a representative set of micro-videos from Vine and extract a rich set of features from textual, visual and acoustic modalities. We then, in the second component, build a tree-guided multi-task multi-modal learning model to estimate the venue category for each unseen micro-video. This model is able to jointly learn a common space from multi-modalities and leverage the predefined Foursquare hierarchical structure to regularize the relatedness among venue categories. Extensive experiments have well-validated our model. As a side research contribution, we have released our data, codes and involved parameters."
]
} |
1510.05879 | 2231633557 | We use Latent-Dynamic Conditional Random Fields to perform skeleton-based pointing gesture classification at each time instance of a video sequence, where we achieve a frame-wise pointing accuracy of roughly 83%. Subsequently, we determine continuous time sequences of arbitrary length that form individual pointing gestures and this way reliably detect pointing gestures at a false positive detection rate of 0.63%. | Gesture recognition for HCI has been an active research area for many years. Accordingly, there exist several surveys that provide a detailed overview of early and recent research developments @cite_1 @cite_26 @cite_23 @cite_12 @cite_6 @cite_4 . In the following, we focus on depth-based sequential gesture recognition approaches and, for example, do not discuss non-sequential approaches (e.g., @cite_10 @cite_17 @cite_25 @cite_9 @cite_30 @cite_28 ) | {
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_4",
"@cite_28",
"@cite_9",
"@cite_1",
"@cite_6",
"@cite_23",
"@cite_10",
"@cite_25",
"@cite_12",
"@cite_17"
],
"mid": [
"2072028093",
"2168392347",
"",
"2110551327",
"2134538062",
"2094645604",
"2126698653",
"2128859583",
"2095670665",
"2116362072",
"",
""
],
"abstract": [
"Gesture recognition is essential for human — machine interaction. In this paper we propose a method to recognize human gestures using a Kinect® depth camera. The camera views the subject in the front plane and generates a depth image of the subject in the plane towards the camera. This depth image is then used for background removal, followed by generation of the depth profile of the subject. In addition to this, the difference between subsequent frames gives the motion profile of the subject and is used for recognition of gestures. These allow the efficient use of depth camera to successfully recognize multiple human gestures. The result of a case study involving 8 gestures is shown. The system was trained using a multi class Support Vector Machine.",
"Gesture recognition pertains to recognizing meaningful expressions of motion by a human, involving the hands, arms, face, head, and or body. It is of utmost importance in designing an intelligent and efficient human-computer interface. The applications of gesture recognition are manifold, ranging from sign language through medical rehabilitation to virtual reality. In this paper, we provide a survey on gesture recognition with particular emphasis on hand gestures and facial expressions. Applications involving hidden Markov models, particle filtering and condensation, finite-state machines, optical flow, skin color, and connectionist models are discussed in detail. Existing challenges and future research possibilities are also highlighted",
"",
"An objective of natural Human-Robot Interaction (HRI) is to enable humans to communicate with robots in the same manner humans do between themselves. This includes the use of natural gestures to support and expand the information that is exchanged in the spoken language. To achieve that, robots need robust gesture recognition systems to detect the non-verbal information that is sent to them by the human gestures. Traditional gesture recognition systems highly depend on the light conditions and often require a training process before they can be used. We have integrated a low-cost commercial RGB-D (Red Green Blue - Depth) sensor in a social robot to allow it to recognise dynamic gestures by tracking a skeleton model of the subject and coding the temporal signature of the gestures in a FSM (Finite State Machine). The vision system is independent of low light conditions and does not require a training process.",
"This paper describes a depth image based real-time skeleton fitting algorithm for the hand, using an object recognition by parts approach, and the use of this hand modeler in an American Sign Language (ASL) digit recognition application. In particular, we created a realistic 3D hand model that represents the hand with 21 different parts. Random decision forests (RDF) are trained on synthetic depth images generated by animating the hand model, which are then used to perform per pixel classification and assign each pixel to a hand part. The classification results are fed into a local mode finding algorithm to estimate the joint locations for the hand skeleton. The system can process depth images retrieved from Kinect in real-time at 30 fps. As an application of the system, we also describe a support vector machine (SVM) based recognition module for the ten digits of ASL based on our method, which attains a recognition rate of 99.9 on live depth images in real-time1.",
"This paper presents a literature review on the use of depth for hand tracking and gesture recognition. The survey examines 37 papers describing depth-based gesture recognition systems in terms of (1) the hand localization and gesture classification methods developed and used, (2) the applications where gesture recognition has been tested, and (3) the effects of the low-cost Kinect and OpenNI software libraries on gesture recognition research. The survey is organized around a novel model of the hand gesture recognition process. In the reviewed literature, 13 methods were found for hand localization and 11 were found for gesture classification. 24 of the papers included real-world applications to test a gesture recognition system, but only 8 application categories were found (and three applications accounted for 18 of the papers). The papers that use the Kinect and the OpenNI libraries for hand tracking tend to focus more on applications than on localization and classification methods, and show that the OpenNI hand tracking method is good enough for the applications tested thus far. However, the limitations of the Kinect and other depth sensors for gesture recognition have yet to be tested in challenging applications and environments.",
"The use of hand gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction (HCI). In particular, visual interpretation of hand gestures can help in achieving the ease and naturalness desired for HCI. This has motivated a very active research area concerned with computer vision-based analysis and interpretation of hand gestures. We survey the literature on visual interpretation of hand gestures in the context of its role in HCI. This discussion is organized on the basis of the method used for modeling, analyzing, and recognizing gestures. Important differences in the gesture interpretation approaches arise depending on whether a 3D model of the human hand or an image appearance model of the human hand is used. 3D hand models offer a way of more elaborate modeling of hand gestures but lead to computational hurdles that have not been overcome given the real-time requirements of HCI. Appearance-based models lead to computationally efficient \"purposive\" approaches that work well under constrained situations but seem to lack the generality desirable for HCI. We also discuss implemented gestural systems as well as other potential applications of vision-based gesture recognition. Although the current progress is encouraging, further theoretical as well as computational advances are needed before gestures can be widely used for HCI. We discuss directions of future research in gesture recognition, including its integration with other natural modes of human-computer interaction.",
"Body posture and finger pointing are a natural modality for human-machine interaction, but first the system must know what it's seeing.",
"We describe a real-time system for detecting pointing gestures and estimating the direction of pointing using stereo cameras. Previously, similar systems were implemented using color-based blob trackers, which relied on effective skin color detection; this approach is sensitive to lighting changes and the clothing worn by the user. In contrast, we used a stereo system that produces dense disparity maps in real-time. Disparity maps are considerably less sensitive to lighting changes. Our system subtracts the background, analyzes the foreground pixels to break the body into parts using a robust mixture model, and estimates the direction of pointing. We have tested the system on both coarse and fine pointing by selecting the targets in a room and controlling the cursor on a wall screen, respectively.",
"This paper implements a real-time hand gesture recognition algorithm based on the inexpensive Kinect sensor. The use of a depth sensor allows for complex 3D gestures where the system is robust to disturbing objects or persons in the background. A Haarlet-based hand gesture recognition system is implemented to detect hand gestures in any orientation, and more in particular pointing gestures while extracting the 3D pointing direction. The system is integrated on an interactive robot (based on ROS), allowing for real-time hand gesture interaction with the robot. Pointing gestures are translated into goals for the robot, telling him where to go. A demo scenario is presented where the robot looks for persons to interact with, asks for directions, and then detects a 3D pointing direction. The robot then explores his vicinity in the given direction and looks for a new person to interact with.",
"",
""
]
} |
1510.05879 | 2231633557 | We use Latent-Dynamic Conditional Random Fields to perform skeleton-based pointing gesture classification at each time instance of a video sequence, where we achieve a frame-wise pointing accuracy of roughly 83%. Subsequently, we determine continuous time sequences of arbitrary length that form individual pointing gestures and this way reliably detect pointing gestures at a false positive detection rate of 0.63%. | The need for a richer representation of observations (e.g., with overlapping features) has led to the development of MEMM @cite_14 . A MEMM is a model for sequence labeling that combines features of HMM and Maximum Entropy models. It is a discriminative model that extends a standard maximum entropy classifier by assuming that the values to be learned are connected in a Markov chain rather than being conditionally independent of each other. It represents the probability of reaching a state given an observation and the previous state. Sung et al. @cite_13 implemented detection and recognition of unstructured human activity in unstructured environments using hierarchical MEMM. Sung et al. use different features describing body pose, hand position and motion extracted from a skeletal representation of the observed person. | {
"cite_N": [
"@cite_14",
"@cite_13"
],
"mid": [
"1934019294",
"2017695267"
],
"abstract": [
"Hidden Markov models (HMMs) are a powerful probabilistic tool for modeling sequential data, and have been applied with success to many text-related tasks, such as part-of-speech tagging, text segmentation and information extraction. In these cases, the observations are usually modeled as multinomial distributions over a discrete vocabulary, and the HMM parameters are set to maximize the likelihood of the observations. This paper presents a new Markovian sequence model, closely related to HMMs, that allows observations to be represented as arbitrary overlapping features (such as word, capitalization, formatting, part-of-speech), and defines the conditional probability of state sequences given observation sequences. It does this by using the maximum entropy framework to fit a set of exponential models that represent the probability of a state given an observation and the previous state. We present positive experimental results on the segmentation of FAQ’s.",
"Being able to detect and recognize human activities is essential for several applications, including personal assistive robotics. In this paper, we perform detection and recognition of unstructured human activity in unstructured environments. We use a RGBD sensor (Microsoft Kinect) as the input sensor, and compute a set of features based on human pose and motion, as well as based on image and point-cloud information. Our algorithm is based on a hierarchical maximum entropy Markov model (MEMM), which considers a person's activity as composed of a set of sub-activities. We infer the two-layered graph structure using a dynamic programming approach. We test our algorithm on detecting and recognizing twelve different activities performed by four people in different environments, such as a kitchen, a living room, an office, etc., and achieve good performance even when the person was not seen before in the training set.1"
]
} |
1510.05344 | 2333751995 | This work presents an efficient method to solve a class of continuous-time, continuous-space stochastic optimal control problems of robot motion in a cluttered environment. The method builds upon a path integral representation of the stochastic optimal control problem that allows computation of the optimal solution through sampling and estimation process. As this sampling process often leads to a local minimum especially when the state space is highly non-convex due to the obstacle field, we present an efficient method to alleviate this issue by devising a proposed topological motion planning algorithm. Combined with a receding-horizon scheme in execution of the optimal control solution, the proposed method can generate a dynamically feasible and collision-free trajectory while reducing concern about local optima. Illustrative numerical examples are presented to demonstrate the applicability and validity of the proposed approach. | The proposed method in this work takes advantage of the architecture in @cite_20 : it first expands a goal-rooted backward tree over a topology-embedded state space without considering uncertainty in the dynamics. Then, instead of directly using the tree as a feedback policy, the control policy for LSCO is computed using the path integral control method. | {
"cite_N": [
"@cite_20"
],
"mid": [
"1527181952"
],
"abstract": [
"The RRT* algorithm has efficiently extended Rapidly-exploring Random Trees (RRTs) to endow it with asymptotic optimality. We propose Goal-Rooted Feedback Motion Trees (GR-FMTs) that honor state input constraints and generate collision-free feedback policies. Given analytic solutions for optimal local steering, GR-FMTs obtain and realize safe, dynamically feasible, and asymptotically optimal trajectories toward goals. Second, for controllable linear systems with linear state input constraints, we propose a fast method for local steering, based on polynomial basis functions and segmentation. GR-FMTs with the method obtain and realize trajectories that are collision-free, dynamically feasible under constraints, and asymptotically optimal within a set we define. The formulation includes linear or quadratic programming of small sizes, where constraints are identified by root-finding in low or medium order of polynomials and added progressively."
]
} |
1510.05058 | 2223687698 | Analysis of opinion dynamics in social networks plays an important role in today's life. For applications such as predicting users' political preference, it is particularly important to be able to analyze the dynamics of competing opinions. While observing the evolution of polar opinions of a social network's users over time, can we tell when the network "behaved" abnormally? Furthermore, can we predict how the opinions of the users will change in the future? Do opinions evolve according to existing network opinion dynamics models? To answer such questions, it is not sufficient to study individual user behavior, since opinions can spread far beyond users' egonets. We need a method to analyze opinion dynamics of all network users simultaneously and capture the effect of individuals' behavior on the global evolution pattern of the social network. In this work, we introduce Social Network Distance (SND) - a distance measure that quantifies the "cost" of evolution of one snapshot of a social network into another snapshot under various models of polar opinion propagation. SND has a rich semantics of a transportation problem, yet, is computable in time linear in the number of users, which makes SND applicable to the analysis of large-scale online social networks. In our experiments with synthetic and real-world Twitter data, we demonstrate the utility of our distance measure for anomalous event detection. It achieves a true positive rate of 0.83, twice as high as that of alternatives. When employed for opinion prediction in Twitter, our method's accuracy is 75.63%, which is 7.5% higher than that of the next best method. Source Code: this https URL | There is a large number of existing distance measures used in vector spaces, including @math , Hamming, Canberra, Cosine, Kullback-Leibler, and Quadratic Form @cite_21 distances.
However, none of them is adequate for the comparison of network states, since these distance measures either compare vectors coordinate-wise, thereby not capturing the interaction between users in the network, or, in the case of Quadratic Form distance, capture the user interaction in a very limited and uninterpretable way. | {
"cite_N": [
"@cite_21"
],
"mid": [
"2118783153"
],
"abstract": [
"In image retrieval based on color, the weighted distance between color histograms of two images, represented as a quadratic form, may be defined as a match measure. However, this distance measure is computationally expensive and it operates on high dimensional features (O(N)). We propose the use of low-dimensional, simple to compute distance measures between the color distributions, and show that these are lower bounds on the histogram distance measure. Results on color histogram matching in large image databases show that prefiltering with the simpler distance measures leads to significantly less time complexity because the quadratic histogram distance is now computed on a smaller set of images. The low-dimensional distance measure can also be used for indexing into the database. >"
]
} |
1510.05058 | 2223687698 | Analysis of opinion dynamics in social networks plays an important role in today's life. For applications such as predicting users' political preference, it is particularly important to be able to analyze the dynamics of competing opinions. While observing the evolution of polar opinions of a social network's users over time, can we tell when the network "behaved" abnormally? Furthermore, can we predict how the opinions of the users will change in the future? Do opinions evolve according to existing network opinion dynamics models? To answer such questions, it is not sufficient to study individual user behavior, since opinions can spread far beyond users' egonets. We need a method to analyze opinion dynamics of all network users simultaneously and capture the effect of individuals' behavior on the global evolution pattern of the social network. In this work, we introduce Social Network Distance (SND) - a distance measure that quantifies the "cost" of evolution of one snapshot of a social network into another snapshot under various models of polar opinion propagation. SND has a rich semantics of a transportation problem, yet, is computable in time linear in the number of users, which makes SND applicable to the analysis of large-scale online social networks. In our experiments with synthetic and real-world Twitter data, we demonstrate the utility of our distance measure for anomalous event detection. It achieves a true positive rate of 0.83, twice as high as that of alternatives. When employed for opinion prediction in Twitter, our method's accuracy is 75.63 , which is 7.5 higher than that of the next best method. Source Code: this https URL | Existing graph-oriented distance measures are also unsuitable for comparing network states with polar opinions. The first class of such distance measures is graph isomorphism-based distance measures, such as @cite_15 . 
These distance measures are node state-oblivious and, hence, not applicable to the comparison of network states. Another class of graph distance measures is graph edit distance (GED) measures @cite_22 that define the distance between two networks as the cost of the optimal sequence of edit operations, such as node or edge insertion, deletion, or substitution, transforming one network into another. GED can be node state-aware, but its value is not interpretable from the opinion dynamics point of view, and even its approximate computation takes time cubic in @math . | {
"cite_N": [
"@cite_15",
"@cite_22"
],
"mid": [
"2012459404",
"1983681808"
],
"abstract": [
"Abstract Error-tolerant graph matching is a powerful concept that has various applications in pattern recognition and machine vision. In the present paper, a new distance measure on graphs is proposed. It is based on the maximal common subgraph of two graphs. The new measure is superior to edit distance based measures in that no particular edit operations together with their costs need to be defined. It is formally shown that the new distance measure is a metric. Potential algorithms for the efficient computation of the new measure are discussed.",
"Inexact graph matching has been one of the significant research foci in the area of pattern analysis. As an important way to measure the similarity between pairwise graphs error-tolerantly, graph edit distance (GED) is the base of inexact graph matching. The research advance of GED is surveyed in order to provide a review of the existing literatures and offer some insights into the studies of GED. Since graphs may be attributed or non-attributed and the definition of costs for edit operations is various, the existing GED algorithms are categorized according to these two factors and described in detail. After these algorithms are analyzed and their limitations are identified, several promising directions for further research are proposed."
]
} |
1510.05058 | 2223687698 | Analysis of opinion dynamics in social networks plays an important role in today's life. For applications such as predicting users' political preference, it is particularly important to be able to analyze the dynamics of competing opinions. While observing the evolution of polar opinions of a social network's users over time, can we tell when the network "behaved" abnormally? Furthermore, can we predict how the opinions of the users will change in the future? Do opinions evolve according to existing network opinion dynamics models? To answer such questions, it is not sufficient to study individual user behavior, since opinions can spread far beyond users' egonets. We need a method to analyze opinion dynamics of all network users simultaneously and capture the effect of individuals' behavior on the global evolution pattern of the social network. In this work, we introduce Social Network Distance (SND) - a distance measure that quantifies the "cost" of evolution of one snapshot of a social network into another snapshot under various models of polar opinion propagation. SND has a rich semantics of a transportation problem, yet, is computable in time linear in the number of users, which makes SND applicable to the analysis of large-scale online social networks. In our experiments with synthetic and real-world Twitter data, we demonstrate the utility of our distance measure for anomalous event detection. It achieves a true positive rate of 0.83, twice as high as that of alternatives. When employed for opinion prediction in Twitter, our method's accuracy is 75.63 , which is 7.5 higher than that of the next best method. Source Code: this https URL | A third class of distance measures includes iterative distance measures @cite_14 @cite_29 @cite_17 , which express the similarity of the nodes of two networks recursively, use a fixed-point iteration to compute node similarities, and then aggregate node similarities to obtain the similarity of two networks.
Iterative distance measures share the problem of GED -- they do not capture the way opinions spread in the network. | {
"cite_N": [
"@cite_29",
"@cite_14",
"@cite_17"
],
"mid": [
"",
"2113367658",
"2156543375"
],
"abstract": [
"",
"We introduce a concept of similarity between vertices of directed graphs. Let GA and GB be two directed graphs with, respectively, nA and nB vertices. We define an nB nA similarity matrix S whose real entry sij expresses how similar vertex j (in GA) is to vertex i (in GB): we say that sij is their similarity score. The similarity matrix can be obtained as the limit of the normalized even iterates of Sk+1 = BSkAT + BTSkA, where A and B are adjacency matrices of the graphs and S0 is a matrix whose entries are all equal to 1. In the special case where GA = GB = G, the matrix S is square and the score sij is the similarity score between the vertices i and j of G. We point out that Kleinberg's \"hub and authority\" method to identify web-pages relevant to a given query can be viewed as a special case of our definition in the case where one of the graphs has two vertices and a unique directed edge between them. In analogy to Kleinberg, we show that our similarity scores are given by the components of a dominant eigenvector of a nonnegative matrix. Potential applications of our similarity concept are numerous. We illustrate an application for the automatic extraction of synonyms in a monolingual dictionary.",
"Matching elements of two data schemas or two data instances plays a key role in data warehousing, e-business, or even biochemical applications. In this paper we present a matching algorithm based on a fixpoint computation that is usable across different scenarios. The algorithm takes two graphs (schemas, catalogs, or other data structures) as input, and produces as output a mapping between corresponding nodes of the graphs. Depending on the matching goal, a subset of the mapping is chosen using filters. After our algorithm runs, we expect a human to check and if necessary adjust the results. As a matter of fact, we evaluate the 'accuracy' of the algorithm by counting the number of needed adjustments. We conducted a user study, in which our accuracy metric was used to estimate the labor savings that the users could obtain by utilizing our algorithm to obtain an initial matching. Finally, we illustrate how our matching algorithm is deployed as one of several high-level operators in an implemented testbed for managing information models and mappings."
]
} |
1510.04802 | 2273875325 | To support people trying to lose weight and stay healthy, more and more fitness apps have sprung up including the ability to track both calories intake and expenditure. Users of such apps are part of a wider quantified self'' movement and many opt-in to publicly share their logged data. In this paper, we use public food diaries of more than 4,000 long-term active MyFitnessPal users to study the characteristics of a (un-)successful diet. Concretely, we train a machine learning model to predict repeatedly being over or under self-set daily calories goals and then look at which features contribute to the model's prediction. Our findings include both expected results, such as the token mcdonalds'' or the category dessert'' being indicative for being over the calories goal, but also less obvious ones such as the difference between pork and poultry concerning dieting success, or the use of the quick added calories'' functionality being indicative of over-shooting calorie-wise. This study also hints at the feasibility of using such data for more in-depth data mining, e.g., looking at the interaction between consumed foods such as mixing protein- and carbohydrate-rich foods. To the best of our knowledge, this is the first systematic study of public food diaries. | When it comes to weight loss and weight maintenance, certain practices related to the quantified self such as think about how much progress you've made'' (by having charts showing this), weigh yourself'' (through weight logging) and read nutrition labels'' (making use of the nutrition database that services such as MFP provide) have been among the few indicators of both successful weight loss and maintenance @cite_8 . Other research has also found that self monitoring'' is important both for weight loss @cite_2 and for successfully maintaining such loss @cite_1 . These studies are, however, from the pre-smartphone era'' where calorie counting was done manually using pencil and paper. 
Still, they give credence to the potential benefits for both weight loss and weight management that apps such as MFP could offer. A currently ongoing study will also shed light on the effect of frequent weight control @cite_10 on dieting outcomes. To date, however, few studies have looked at the effectiveness of mobile apps for motivating health behavior changes @cite_0 . | {
"cite_N": [
"@cite_8",
"@cite_1",
"@cite_0",
"@cite_2",
"@cite_10"
],
"mid": [
"2060891765",
"2056874715",
"2047097182",
"2035040039",
"2085186147"
],
"abstract": [
"Background: Few studies have examined the weight-control practices that promote weight loss and weight-loss maintenance in the same sample. Purpose: To examine whether the weight control practices associated with weight loss differ from those associated with weight-loss maintenance. Methods: Cross-sectional survey of a random sample of 1165 U.S. adults. The adjusted associations of the use of 36 weight-control practices in the past week with success in weight loss (10 lost in the past year) and success in weight-loss maintenance (10 lost and maintained for 1 year) were examined. Results: Of the 36 practices, only 8 (22 ) were associated with both weight loss and weight-loss maintenance. Overall, there was poor agreement (kappa0.22) between the practices associated with weight loss and or weight-loss maintenance. For example, those who reported more often following a consistent exercise routine or eating plenty of low-fat sources of protein were 1.97 (95 CI1.33, 2.94) and 1.76 (95 CI1.25, 2.50) times more likely, respectively, to report weight-loss maintenance but not weight loss. Alternatively, those who reported more often doing different kinds of exercises or planning meals ahead of time were 2.56 (95 CI1.44, 4.55) and 1.68 (95 CI1.03, 2.74) times more likely, respectively, to report weight loss but not weight-loss maintenance. Conclusions: Successful weight loss and weight-loss maintenance may require two different sets of practices. Designing interventions with this premise may inform the design of more effective weight-loss maintenance interventions.",
"Context Although the rise in overweight and obesity in the United States is well documented, long-term weight loss maintenance (LTWLM) has been minimally explored.",
"Background: Several thousand mobile phone apps are available to download to mobile phones for health and fitness. Mobile phones may provide a unique means of administering health interventions to populations. Objective: The purpose of this systematic review was to systematically search and describe the literature on mobile apps used in health behavior interventions, describe the behavioral features and focus of health apps, and to evaluate the potential of apps to disseminate health behavior interventions. Methods: We conducted a review of the literature in September 2014 using key search terms in several relevant scientific journal databases. Only English articles pertaining to health interventions using mobile phone apps were included in the final sample. Results: The 24 studies identified for this review were primarily feasibility and pilot studies of mobile apps with small sample sizes. All studies were informed by behavioral theories or strategies, with self-monitoring as the most common construct. Acceptability of mobile phone apps was high among mobile phone users. Conclusions: The lack of large sample studies using mobile phone apps may signal a need for additional studies on the potential use of mobile apps to assist individuals in changing their health behaviors. Of these studies, there is early evidence that apps are well received by users. Based on available research, mobile apps may be considered a feasible and acceptable means of administering health interventions, but a greater number of studies and more rigorous research and evaluations are needed to determine efficacy and establish evidence for best practices. [JMIR Mhealth Uhealth 2015;3(1):e20]",
"Abstract Self-monitoring is the centerpiece of behavioral weight loss intervention programs. This article presents a systematic review of the literature on three components of self-monitoring in behavioral weight loss studies: diet, exercise, and self-weighing. This review included articles that were published between 1993 and 2009 that reported on the relationship between weight loss and these self-monitoring strategies. Of the 22 studies identified, 15 focused on dietary self-monitoring, one on self-monitoring exercise, and six on self-weighing. A wide array of methods was used to perform self-monitoring; the paper diary was used most often. Adherence to self-monitoring was reported most frequently as the number of diaries completed or the frequency of log-ins or reported weights. The use of technology, which included the Internet, personal digital assistants, and electronic digital scales were reported in five studies. Descriptive designs were used in the earlier studies whereas more recent reports involved prospective studies and randomized trials that examined the effect of self-monitoring on weight loss. A significant association between self-monitoring and weight loss was consistently found; however, the level of evidence was weak because of methodologic limitations. The most significant limitations of the reviewed studies were the homogenous samples and reliance on self-report. In all but two studies, the samples were predominantly white and women. This review highlights the need for studies in more diverse populations, for objective measures of adherence to self-monitoring, and for studies that establish the required dose of self-monitoring for successful outcomes.",
"Observational evidence from behavioral weight control trials and community studies suggests that greater frequency of weighing oneself, or tracking weight, is associated with better weight outcomes. Conversely, it has also been suggested that frequent weight tracking may have a negative impact on mental health and outcomes during weight loss, but there are minimal experimental data that address this concern in the context of an active weight loss program. To achieve the long-term goal of strengthening behavioral weight loss programs, the purpose of this randomized controlled trial (the Tracking Study) is to test variations on frequency of self-weighing during a behavioral weight loss program, and to examine psychosocial and mental health correlates of weight tracking and weight loss outcomes. Three hundred thirty-nine overweight and obese adults were recruited and randomized to one of three variations on weight tracking frequency during a 12-month weight loss program with a 12-month follow-up: daily weight tracking, weekly weight tracking, or no weight tracking. The primary outcome is weight in kilograms at 24 months. The weight loss program integrates each weight tracking instruction with standard behavioral weight loss techniques (goal setting, self-monitoring, stimulus control, dietary and physical activity enhancements, lifestyle modifications); participants in weight tracking conditions were provided with wireless Internet technology (Wi-Fi-enabled digital scales and touchscreen personal devices) to facilitate weight tracking during the study. This paper describes the study design, intervention features, recruitment, and baseline characteristics of participants enrolled in the Tracking Study."
]
} |
1510.04802 | 2273875325 | To support people trying to lose weight and stay healthy, more and more fitness apps have sprung up including the ability to track both calories intake and expenditure. Users of such apps are part of a wider quantified self'' movement and many opt-in to publicly share their logged data. In this paper, we use public food diaries of more than 4,000 long-term active MyFitnessPal users to study the characteristics of a (un-)successful diet. Concretely, we train a machine learning model to predict repeatedly being over or under self-set daily calories goals and then look at which features contribute to the model's prediction. Our findings include both expected results, such as the token mcdonalds'' or the category dessert'' being indicative for being over the calories goal, but also less obvious ones such as the difference between pork and poultry concerning dieting success, or the use of the quick added calories'' functionality being indicative of over-shooting calorie-wise. This study also hints at the feasibility of using such data for more in-depth data mining, e.g., looking at the interaction between consumed foods such as mixing protein- and carbohydrate-rich foods. To the best of our knowledge, this is the first systematic study of public food diaries. | Concerning the analysis of food consumption through user-generated data, some studies have taken a public health approach and looked at regional or temporal patterns in food consumption. West et al. @cite_4 used web search and click-through data to study seasonal changes in the recipes people search for. Server logs from a recipe site were used in @cite_15 @cite_6 to study regional differences in food preferences. Their angle is less health-centric and more culture- or preference-centric with a focus on whether ingredients or regional influences dominate the recipe choice.
Using the ingredients of recipes, rather than regional or cultural knowledge, has been explored in @cite_11 for the purpose of recipe recommendation. | {
"cite_N": [
"@cite_15",
"@cite_4",
"@cite_6",
"@cite_11"
],
"mid": [
"54088320",
"2953044842",
"2131440803",
"2950291079"
],
"abstract": [
"Since food is one of the central elements of all human beings, a high interest exists in exploring temporal and spatial food and dietary patterns of humans. Predominantly, data for such investigations stem from consumer panels which continuously capture food consumption patterns from individuals and households. In this work we leverage data from a large online recipe platform which is frequently used in the German speaking regions in Europe and explore (i) the association between geographic proximity and shared food preferences and (ii) to what extent temporal information helps to predict the food preferences of users. Our results reveal that online food preferences of geographically closer regions are more similar than those of distant ones and show that specific types of ingredients are more popular on specific days of the week. The observed patterns can successfully be mapped to known real-world patterns which suggests that existing methods for the investigation of dietary and food patterns (e.g., consumer panels) may benefit from incorporating the vast amount of data generated by users browsing recipes on the Web.",
"Nutrition is a key factor in people's overall health. Hence, understanding the nature and dynamics of population-wide dietary preferences over time and space can be valuable in public health. To date, studies have leveraged small samples of participants via food intake logs or treatment data. We propose a complementary source of population data on nutrition obtained via Web logs. Our main contribution is a spatiotemporal analysis of population-wide dietary preferences through the lens of logs gathered by a widely distributed Web-browser add-on, using the access volume of recipes that users seek via search as a proxy for actual food consumption. We discover that variation in dietary preferences as expressed via recipe access has two main periodic components, one yearly and the other weekly, and that there exist characteristic regional differences in terms of diet within the United States. In a second study, we identify users who show evidence of having made an acute decision to lose weight. We characterize the shifts in interests that they express in their search queries and focus on changes in their recipe queries in particular. Last, we correlate nutritional time series obtained from recipe queries with time-aligned data on hospital admissions, aimed at understanding how behavioral data captured in Web logs might be harnessed to identify potential relationships between diet and acute health problems. In this preliminary study, we focus on patterns of sodium identified in recipes over time and patterns of admission for congestive heart failure, a chronic illness that can be exacerbated by increases in sodium intake.",
"Food is a central element of humans’ life, and food preferences are amongst others manifestations of social, cultural and economic forces that influence the way we view, prepare and consume food. Historically, data for studies of food preferences stems from consumer panels which continuously capture food consumption and preference patterns from individuals and households. In this work we look at a new source of data, i.e., server log data from a large recipe platform on the World Wide Web, and explore its usefulness for understanding online food preferences. The main findings of this work are: (i) recipe preferences are partly driven by ingredients, (ii) recipe preference distributions exhibit more regional differences than ingredient preference distributions, and (iii) weekday preferences are clearly distinct from weekend preferences.",
"The recording and sharing of cooking recipes, a human activity dating back thousands of years, naturally became an early and prominent social use of the web. The resulting online recipe collections are repositories of ingredient combinations and cooking methods whose large-scale and variety yield interesting insights about both the fundamentals of cooking and user preferences. At the level of an individual ingredient we measure whether it tends to be essential or can be dropped or added, and whether its quantity can be modified. We also construct two types of networks to capture the relationships between ingredients. The complement network captures which ingredients tend to co-occur frequently, and is composed of two large communities: one savory, the other sweet. The substitute network, derived from user-generated suggestions for modifications, can be decomposed into many communities of functionally equivalent ingredients, and captures users' preference for healthier variants of a recipe. Our experiments reveal that recipe ratings can be well predicted with features derived from combinations of ingredient networks and nutrition information."
]
} |
1510.04802 | 2273875325 | To support people trying to lose weight and stay healthy, more and more fitness apps have sprung up including the ability to track both calories intake and expenditure. Users of such apps are part of a wider quantified self'' movement and many opt-in to publicly share their logged data. In this paper, we use public food diaries of more than 4,000 long-term active MyFitnessPal users to study the characteristics of a (un-)successful diet. Concretely, we train a machine learning model to predict repeatedly being over or under self-set daily calories goals and then look at which features contribute to the model's prediction. Our findings include both expected results, such as the token mcdonalds'' or the category dessert'' being indicative for being over the calories goal, but also less obvious ones such as the difference between pork and poultry concerning dieting success, or the use of the quick added calories'' functionality being indicative of over-shooting calorie-wise. This study also hints at the feasibility of using such data for more in-depth data mining, e.g., looking at the interaction between consumed foods such as mixing protein- and carbohydrate-rich foods. To the best of our knowledge, this is the first systematic study of public food diaries. | Conceptually close to our study is the work by Abbar et al. @cite_5 who look at food mentions on Twitter. Their analysis also goes beyond a purely public health angle. If users were to mention everything they ate on social media then the type of food diaries we are using would become redundant. However, we believe that this is unlikely to be the case for many users and our data is far cleaner and more structured and comes with user-defined daily calorie goals. Culotta @cite_18 also used Twitter data to study obesity and other health issues but from a purely aggregate, public health perspective.
Kuener et al. @cite_13 use data from Yahoo Answers to look at issues of both mental and physical health of obese people. They find that obese people residing in counties with higher levels of BMI may have better physical and mental health than obese people living in counties with lower levels of BMI''. They do not, however, look at indicators related to the success or failure of weight loss. | {
"cite_N": [
"@cite_5",
"@cite_18",
"@cite_13"
],
"mid": [
"2001488574",
"2104925568",
"2152270068"
],
"abstract": [
"Food is an integral part of our lives, cultures, and well-being, and is of major interest to public health. The collection of daily nutritional data involves keeping detailed diaries or periodic surveys and is limited in scope and reach. Alternatively, social media is infamous for allowing its users to update the world on the minutiae of their daily lives, including their eating habits. In this work we examine the potential of Twitter to provide insight into US-wide dietary choices by linking the tweeted dining experiences of 210K users to their interests, demographics, and social networks. We validate our approach by relating the caloric values of the foods mentioned in the tweets to the state-wide obesity rates, achieving a Pearson correlation of 0.77 across the 50 US states and the District of Columbia. We then build a model to predict county-wide obesity and diabetes statistics based on a combination of demographic variables and food names mentioned on Twitter. Our results show significant improvement over previous CHI research (Culotta 2014). We further link this data to societ al and economic factors, such as education and income, illustrating that areas with higher education levels tweet about food that is significantly less caloric. Finally, we address the somewhat controversial issue of the social nature of obesity (Christakis & Fowler 2007) by inducing two social networks using mentions and reciprocal following relationships.",
"Understanding the relationships among environment, behavior, and health is a core concern of public health researchers. While a number of recent studies have investigated the use of social media to track infectious diseases such as influenza, little work has been done to determine if other health concerns can be inferred. In this paper, we present a large-scale study of 27 health-related statistics, including obesity, health insurance coverage, access to healthy foods, and teen birth rates. We perform a linguistic analysis of the Twitter activity in the top 100 most populous counties in the U.S., and find a significant correlation with 6 of the 27 health statistics. When compared to traditional models based on demographic variables alone, we find that augmenting models with Twitter-derived information improves predictive accuracy for 20 of 27 statistics, suggesting that this new methodology can complement existing approaches.",
"Using a large social media database, Yahoo Answers, we explored postings to an online forum in which posters asked whether their height and weight qualify themselves as “skinny,” “thin,” “fat,” or “obese” over time and across forum topics. We used these data to better understand whether a higher-than-average body mass index (BMI) in one’s county might, in some ways, be protective for one’s mental and physical health. For instance, we explored whether higher proportions of obese people in one’s county predicts lower levels of bullying or “am I fat?” questions from those with a normal BMI relative to his her actual BMI. Most women asking whether they were themselves fat obese were not actually fat obese. Both men and women who were actually overweight obese were significantly more likely in the future to ask for advice about bullying than thinner individuals. Moreover, as mean county-level BMI increased, bullying decreased and then increased again (in a U-shape curve). Regardless of where they lived, posters who asked “am I fat?” who had a BMI in the healthy range were more likely than other posters to subsequently post on health problems, but the proportions of such posters also declined greatly as county-level BMI increased. Our findings suggest that obese people residing in counties with higher levels of BMI may have better physical and mental health than obese people living in counties with lower levels of BMI by some measures, but these improvements are modest."
]
} |
1510.04811 | 2227677923 | Quantification is the task of estimating the class-distribution of a data-set. While typically considered as a parameter estimation problem with strict assumptions on the data-set shift, we consider quantification in-the-wild, on two large scale data-sets from marine ecology: a survey of Caribbean coral reefs, and a plankton time series from Martha's Vineyard Coastal Observatory. We investigate several quantification methods from the literature and indicate opportunities for future work. In particular, we show that a deep neural network can be fine-tuned on a very limited amount of data (25 - 100 samples) to outperform alternative methods. | For supervised quantification, simple random sampling @cite_4 can be utilized to achieve an unbiased estimate of the class-distribution: @math where @math is a vector of randomly permuted indices and @math the annotation budget. Simple random sampling doesn't utilize the classifier, @math , but it can be incorporated using auxiliary sampling designs, through offset @cite_21 or ratio @cite_9 estimators. In these methods, some property (e.g. bias) of the classifier is estimated from the labeled subset, and then used to adjust the prediction on the whole target set. Other methods include adapting the classifier to operate in the target domain, typically achieved by modifying the internal feature representation of the classifier @cite_1 . | {
"cite_N": [
"@cite_1",
"@cite_9",
"@cite_21",
"@cite_4"
],
"mid": [
"2953226914",
"2077102525",
"2255503823",
""
],
"abstract": [
"Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias. Fine-tuning deep models in a new domain can require a significant amount of labeled data, which for many applications is simply not available. We propose a new CNN architecture to exploit unlabeled and sparsely labeled target domain data. Our approach simultaneously optimizes for domain invariance to facilitate domain transfer and uses a soft label distribution matching loss to transfer information between tasks. Our proposed adaptation method offers empirical performance which exceeds previously published results on two standard benchmark visual domain adaptation tasks, evaluated across supervised and semi-supervised adaptation settings.",
"Abstract This paper reports results from an empirical study of the ratio estimator for a finite population total. From each of six real populations, 1,000 simple random samples, 1,000 restricted random samples, and three nonrandom samples of size 32 are drawn. Performance of the ratio estimator and of five estimators of its variance is compared with theoretical results generated using (a) prediction (superpopulation) models and (b) probability sampling distributions. The results, presented graphically, show that theory based on prediction models can reveal relationships that are essential in making inferences, but that are concealed in probability sampling analyses.",
"Methods for automated collection and annotation are changing the cost-structures of sampling surveys for a wide range of applications. Digital samples in the form of images or audio recordings can be collected rapidly, and annotated by computer programs or crowd workers. We consider the problem of estimating a population mean under these new cost-structures, and propose a Hybrid-Offset sampling design. This design utilizes two annotators: a primary, which is accurate but costly (e.g. a human expert) and an auxiliary which is noisy but cheap (e.g. a computer program), in order to minimize total sampling expenditures. Our analysis gives necessary conditions for the Hybrid-Offset design and specifies optimal sample sizes for both annotators. Simulations on data from a coral reef survey program indicate that the Hybrid-Offset design outperforms several alternative sampling designs. In particular, sampling expenditures are reduced by 50% compared to the Conventional design currently deployed by the coral ecologists.",
""
]
} |
1510.05229 | 2790207794 | We study monotonicity properties of solutions to the classic problem of fair cake-cutting—dividing a heterogeneous resource among agents with different preferences. Resource- and population-monotonicity relate to scenarios where the cake, or the number of participants who divide the cake, changes. It is required that the utility of all participants change in the same direction: either all of them are better-off (if there is more to share or fewer to share among) or all are worse-off (if there is less to share or more to share among). We formally introduce these concepts to the cake-cutting setting and show that they are violated by common division rules. In contrast, we prove that the Nash-optimal rule—maximizing the product of utilities—is resource-monotonic and population-monotonic, in addition to being Pareto-optimal, envy-free and satisfying a strong competitive-equilibrium condition. Moreover, we prove that it is the only rule among a natural family of welfare-maximizing rules that is both proportional and resource-monotonic. | Monotonicity issues have been extensively studied with respect to cooperative game theory, political representation, computer resource allocation, and other fair division problems. Extensive reviews of monotonicity axioms can be found in chapters 3, 6 and 7 of and in chapter 7 of @cite_15. To the best of our knowledge, the present paper is the first that studies these properties in a cake-cutting setting. | {
"cite_N": [
"@cite_15"
],
"mid": [
"1523015407"
],
"abstract": [
"We review the theory of fairness as it pertains to concretely specified problems of resource allocations. We present punctual notions designed to evaluate how well individuals, or groups, are treated in relation to one another: no-envy, egalitarian-equivalence, individual and collective lower or upper bounds on welfare, notions of equal or equivalent opportunities, as well as various families extending these notions. We also introduce relational notions specifying how allocation rules should respond to changes in resources (resource monotonicity), technologies (technology monotonicity), preferences (welfare domination under preference replacement), and population (population monotonicity, consistency, converse consistency). We investigate the implications of these properties, in various combinations, in the context of various models: the “classical” problem of dividing an unproduced bundle, economies with production, economies with public goods, economies with single-peaked preferences, economies with indivisible goods, and economies in which the dividend is a non-homogeneous continuum. We consider economies in which resources are owned collectively, economies in which they are owned privately, and economies in which ownership is mixed. We offer a number of characterizations of particular allocation rules."
]
} |
1510.05229 | 2790207794 | We study monotonicity properties of solutions to the classic problem of fair cake-cutting—dividing a heterogeneous resource among agents with different preferences. Resource- and population-monotonicity relate to scenarios where the cake, or the number of participants who divide the cake, changes. It is required that the utility of all participants change in the same direction: either all of them are better-off (if there is more to share or fewer to share among) or all are worse-off (if there is less to share or more to share among). We formally introduce these concepts to the cake-cutting setting and show that they are violated by common division rules. In contrast, we prove that the Nash-optimal rule—maximizing the product of utilities—is resource-monotonic and population-monotonic, in addition to being Pareto-optimal, envy-free and satisfying a strong competitive-equilibrium condition. Moreover, we prove that it is the only rule among a natural family of welfare-maximizing rules that is both proportional and resource-monotonic. | Experimental studies show that people value certain fairness criteria more than others. @cite_9 demonstrated that people are willing to sacrifice Pareto-efficiency in order to reach an envy-free allocation. To our knowledge, no study has ever been conducted to examine the relationship between monotonicity and efficiency or proportionality. However, some indirect evidence suggests that monotonicity of the solution is in some cases as important as proportionality. The so-called apportionment problem, where electoral seats have to be distributed among administrative regions, provides the most notorious examples. The seat distribution of the US House of Representatives generated many monotonicity-related anomalies in the last two centuries. The famous Alabama paradox, as well as the later discovered population and new-state paradoxes, pressed the legislators to adopt newer and newer apportionment rules.
The currently used seat distribution method is free from such anomalies; however, it does not satisfy the so-called Hare quota, a basic guarantee of proportionality. We view this as evidence that monotonicity is as important a fairness axiom as the classic axioms of proportionality and envy-freeness. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2094747375"
],
"abstract": [
"Envy is sometimes suggested as an underlying motive in the assessment of different economic allocations. In the theoretical literature on fair division, following Foley [Foley, D. (1967), Yale Economic Essays, 7, 45–98], the term “envy” refers to an intrapersonal comparison of different consumption bundles. By contrast, in its everyday use “envy” involves interpersonal comparisons of well-being. We present and discuss results from free-form bargaining experiments on fair division problems in which inter- and intrapersonal criteria can be distinguished. We find that interpersonal comparisons play the dominant role. The effect of the intrapersonal criterion of envy freeness is limited to situations in which other fairness criteria are not applicable."
]
} |
1510.05229 | 2790207794 | We study monotonicity properties of solutions to the classic problem of fair cake-cutting—dividing a heterogeneous resource among agents with different preferences. Resource- and population-monotonicity relate to scenarios where the cake, or the number of participants who divide the cake, changes. It is required that the utility of all participants change in the same direction: either all of them are better-off (if there is more to share or fewer to share among) or all are worse-off (if there is less to share or more to share among). We formally introduce these concepts to the cake-cutting setting and show that they are violated by common division rules. In contrast, we prove that the Nash-optimal rule—maximizing the product of utilities—is resource-monotonic and population-monotonic, in addition to being Pareto-optimal, envy-free and satisfying a strong competitive-equilibrium condition. Moreover, we prove that it is the only rule among a natural family of welfare-maximizing rules that is both proportional and resource-monotonic. | @cite_12 defines the replacement principle, which requires that, whenever any change happens in the environment, the welfare of all agents not responsible for the change should be affected in the same direction --- they should all be made at least as well off as they were initially or they should all be made at most as well off. This is the most general way of expressing the idea of solidarity among agents. The PM and RM axioms are special cases of this principle. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2027956524"
],
"abstract": [
"Our objective is to investigate the implications of the “replacement principle” for the fair allocation of an infinitely divisible commodity among agents with single-peaked preferences. The principle says that when one of the components of the data entering the description of the problem to be solved changes, all of the relevant agents should be affected in the same direction: they all gain or they all lose. We apply it to situations in which the preferences of one of the agents may change, under the name ofwelfare-domination under preference-replacement. We show that there is no selection from the no-envy and Pareto solution satisfying it. Then, we weaken it by limiting its application to situations in which the change is not so disruptive that it turns the economy from one in which there is “too little” of the commodity to one in which there is “too much,” or conversely. We show that there is only one selection from the no-envy and Pareto solution satisfying this property andreplication-invariance. It is the “uniform rule,” a solution that has played a central role in previous analyses of the problem.Journal of Economic LiteratureClassification Numbers: D51, D63, D70."
]
} |
1510.05229 | 2790207794 | We study monotonicity properties of solutions to the classic problem of fair cake-cutting—dividing a heterogeneous resource among agents with different preferences. Resource- and population-monotonicity relate to scenarios where the cake, or the number of participants who divide the cake, changes. It is required that the utility of all participants change in the same direction: either all of them are better-off (if there is more to share or fewer to share among) or all are worse-off (if there is less to share or more to share among). We formally introduce these concepts to the cake-cutting setting and show that they are violated by common division rules. In contrast, we prove that the Nash-optimal rule—maximizing the product of utilities—is resource-monotonic and population-monotonic, in addition to being Pareto-optimal, envy-free and satisfying a strong competitive-equilibrium condition. Moreover, we prove that it is the only rule among a natural family of welfare-maximizing rules that is both proportional and resource-monotonic. | The consistency axiom (cf. @cite_20 or @cite_2) is related to population-monotonicity, but it is fundamentally different, since in that case the leaving agents take their fair shares with them. | {
"cite_N": [
"@cite_20",
"@cite_2"
],
"mid": [
"2094005555",
"2116299292"
],
"abstract": [
"A basic principle of distributive justice states that if an allocation among a group of individuals is fair, then it should be perceived as fair when restricted to each subgroup of individuals. This 'consistency' principle applies in particular to methods for allocating taxes among citizens according to their ability to pay. Every consistent, continuous taxation method optimizes an additively separable objective function. Optimal allocations may be computed by a simple Lagrange multiplier technique. Similar results hold for allocating assets among creditors according to their claims. The techniques are illustrated by constructing an objective function for a bankruptcy method from the Babylonian Talmud.",
"An allocation rule is ‘consistent’ if the recommendation it makes for each problem ‘agrees’ with the recommendation it makes for each associated reduced problem, obtained by imagining some agents leaving with their assignments. Some authors have described the consistency principle as a ‘fairness principle’. Others have written that it is not about fairness, that it should be seen as an ‘operational principle’. We dispute the particular fairness interpretations that have been offered for consistency, but develop a different and important fairness foundation for the principle, arguing that it can be seen as the result of adding ‘some’ efficiency to a ‘post-application’ and efficiency-free expression of solidarity in response to population changes. We also challenge the interpretations of consistency as an operational principle that have been given, and here identify a sense in which such an interpretation can be supported. We review and assess the other interpretations of the principle, as ‘robustness’, ‘coherence’ and ‘reinforcement’."
]
} |
1510.05229 | 2790207794 | We study monotonicity properties of solutions to the classic problem of fair cake-cutting—dividing a heterogeneous resource among agents with different preferences. Resource- and population-monotonicity relate to scenarios where the cake, or the number of participants who divide the cake, changes. It is required that the utility of all participants change in the same direction: either all of them are better-off (if there is more to share or fewer to share among) or all are worse-off (if there is less to share or more to share among). We formally introduce these concepts to the cake-cutting setting and show that they are violated by common division rules. In contrast, we prove that the Nash-optimal rule—maximizing the product of utilities—is resource-monotonic and population-monotonic, in addition to being Pareto-optimal, envy-free and satisfying a strong competitive-equilibrium condition. Moreover, we prove that it is the only rule among a natural family of welfare-maximizing rules that is both proportional and resource-monotonic. | @cite_19 studies a related cake-cutting axiom called "division independence": if the cake is divided into sub-plots and each sub-plot is divided according to a rule, then the outcome should be identical to dividing the original cake using the same rule. He proves that the only rule which satisfies Pareto-optimality and division independence is the utilitarian-optimal rule --- the rule which maximizes the sum of the agents' utilities. Unfortunately, this rule does not satisfy the fairness axioms of proportionality and envy-freeness. | {
"cite_N": [
"@cite_19"
],
"mid": [
"2012400407"
],
"abstract": [
"This paper studies the classical land division problem formalized by Steinhaus (Econometrica 16 (1948) 101–104) in a multi-profile context. We propose a notion of an allocation rule for this setting. We discuss several examples of rules and properties they may satisfy. Central among these properties is division independence: a parcel may be partitioned into smaller parcels, these smaller parcels allocated according to the rule, leaving a recommended allocation for the original parcel. In conjunction with two other normative properties, division independence is shown to imply the principle of utilitarianism."
]
} |
1510.05229 | 2790207794 | We study monotonicity properties of solutions to the classic problem of fair cake-cutting—dividing a heterogeneous resource among agents with different preferences. Resource- and population-monotonicity relate to scenarios where the cake, or the number of participants who divide the cake, changes. It is required that the utility of all participants change in the same direction: either all of them are better-off (if there is more to share or fewer to share among) or all are worse-off (if there is less to share or more to share among). We formally introduce these concepts to the cake-cutting setting and show that they are violated by common division rules. In contrast, we prove that the Nash-optimal rule—maximizing the product of utilities—is resource-monotonic and population-monotonic, in addition to being Pareto-optimal, envy-free and satisfying a strong competitive-equilibrium condition. Moreover, we prove that it is the only rule among a natural family of welfare-maximizing rules that is both proportional and resource-monotonic. | @cite_17 studies the problem of "online cake-cutting", in which agents arrive and depart during the process of dividing the cake. He shows how to adapt classic procedures like cut-and-choose and the Dubins-Spanier moving-knife procedure in order to satisfy online variants of the fairness axioms. Monotonicity properties are not studied, although the problem is similar in spirit to the concept of population-monotonicity. | {
"cite_N": [
"@cite_17"
],
"mid": [
"2139264436"
],
"abstract": [
"We propose an online form of the cake cutting problem. This models situations where agents arrive and depart during the process of dividing a resource. We show that well known fair division procedures like cut-and-choose and the Dubins-Spanier moving knife procedure can be adapted to apply to such online problems. We propose some fairness properties that online cake cutting procedures can possess like online forms of proportionality and envy-freeness. We also consider the impact of collusion between agents. Finally, we study theoretically and empirically the competitive ratio of these online cake cutting procedures. Based on its resistance to collusion, and its good performance in practice, our results favour the online version of the cut-and-choose procedure over the online version of the moving knife procedure."
]
} |
1510.04820 | 2220346341 | We consider the functional index coding problem over an error-free broadcast network in which a source generates a set of messages and there are multiple receivers, each holding a set of functions of source messages in its cache, called the Has-set, and demands to know another set of functions of messages, called the Want-set. Cognizant of the receivers' Has-sets, the source aims to satisfy the demands of each receiver by making coded transmissions, called a functional index code (FIC). The objective is to minimize the number of such transmissions required. The restriction a receiver's demands pose on the code is represented via a constraint, and a code is obtained using the confusion graph constructed using these constraints. Bounds on the size of an optimal code based on the parameters of the confusion graph are presented. Next, we consider the case of erroneous transmissions and provide a necessary and sufficient condition that an FIC must satisfy for correct decoding of desired functions at each receiver and obtain a lower bound on the length of an error-correcting FIC. | The functional source coding with side-information problem (FSCSIP), wherein the receiver wishes to compute a function, @math, of its side-information random variable, @math, and the source random variable, @math, was studied in @cite_22 using the characteristic graph of the problem instance. An optimal vertex coloring of the characteristic graph obtained from the problem instance was shown to provide a minimum-size code in @cite_29. The extension of this problem to the multiple-receiver case was subsequently dealt with in @cite_24, wherein each receiver possessed multiple random variables correlated to the source as side information and demanded several functions of the source and their side information. In @cite_9, we proposed and studied a variant of the FSCSIP wherein the receiver demands, and holds as side information, functions of source messages. | {
"cite_N": [
"@cite_24",
"@cite_9",
"@cite_29",
"@cite_22"
],
"mid": [
"2021889595",
"1508189106",
"2535417292",
"2164651000"
],
"abstract": [
"In this paper, we consider the problem of multifunctional compression with side information. The problem is how we can compress a source X so that the receiver is able to compute some deterministic functions f1(X, Y1), ..., fm(X, Ym), where Yi, 1 ≤ i ≤ m, are available at the receiver as side information. In [1], Wyner and Ziv considered this problem for the special case of m = 1 and f1(X, Y1) = X and derived a rate-distortion function. Yamamoto extended this result in [2] to the case of having one general function f1(X, Y1). Both of these results were in terms of an auxiliary random variable. For the case of zero distortion, in [3], Orlitsky and Roche gave an interpretation of this variable in terms of properties of the characteristic graph which led to a particular coding scheme. This result was extended in [4] by providing an achievable scheme based on colorings of the characteristic graph. In a recent work, reference [5] has considered this problem for a general tree network where intermediate nodes are allowed to perform some computations. These previous works only considered the case where the receiver only wants to compute one function (m=1). Here, we want to consider the case in which the receiver wants to compute several functions with different side information random variables and zero distortion. Our results do not depend on the fact that all functions are desired in one receiver and one can apply them to the case of having several receivers with different desired functions (i.e., functions are separable). We define a new concept named the multi-functional graph entropy which is an extension of the graph entropy defined by Korner in [6]. We show that the minimum achievable rate for this problem is equal to the conditional multi-functional graph entropy of random variable X given side informations. We also propose a coding scheme based on graph colorings to achieve this rate.",
"The functional source coding problem in which the receiver side information (Has-set) and demands (Want-set) include functions of source messages is studied using row-Latin rectangle. The source transmits encoded messages, called the functional source code, in order to satisfy the receiver's demands. We obtain a minimum length using the row-Latin rectangle. Next, we consider the case of transmission errors and provide a necessary and sufficient condition that a functional source code must satisfy so that the receiver can correctly decode the values of the functions in its Want-set.",
"We consider the remote computation of a function of two sources where one is receiver side information. Specifically, given side information Y , we wish to compute f(X, Y) based on information transmitted by X over a noise-less channel. The goal is to characterize the minimal rate at which X must transmit information to enable the computation of the function f. Recently, Orlitsky and Roche (2001) established that the conditional graph entropy of the characteristic graph of the function is a solution to this problem. Their achievability scheme does not separate \"functional coding\" from the well understood distributed source coding problem. In this paper, we seek a separation between the functional coding and the coding for correlation. In other words, we want to preprocess X (with respect to f), and then transmit the preprocessed data using a standard Slepian-Wolf coding scheme at the Orlitsky-Roche rate. We establish that a minimum (conditional) entropy coloring of the product of characteristic graphs is asymptotically equal to the conditional graph entropy. This can be seen as a generalization of a result of Korner (1973) which shows that the minimum entropy coloring of product graphs is asymptotically equal to the graph entropy. Thus, graph coloring provides a modular technique to achieve coding gains when the receiver is interested in decoding some function of the source.",
"A sender communicates with a receiver who wishes to reliably evaluate a function of their combined data. We show that if only the sender can transmit, the number of bits required is a conditional entropy of a naturally defined graph. We also determine the number of bits needed when the communicators exchange two messages. Reference is made to the results of rate distortion in evaluating the function of two random variables."
]
} |
1510.04820 | 2220346341 | We consider the functional index coding problem over an error-free broadcast network in which a source generates a set of messages and there are multiple receivers, each holding a set of functions of source messages in its cache, called the Has-set, and demands to know another set of functions of messages, called the Want-set. Cognizant of the receivers' Has-sets, the source aims to satisfy the demands of each receiver by making coded transmissions, called a functional index code (FIC). The objective is to minimize the number of such transmissions required. The restriction a receiver's demands pose on the code is represented via a constraint, and a code is obtained using the confusion graph constructed using these constraints. Bounds on the size of an optimal code based on the parameters of the confusion graph are presented. Next, we consider the case of erroneous transmissions and provide a necessary and sufficient condition that an FIC must satisfy for correct decoding of desired functions at each receiver and obtain a lower bound on the length of an error-correcting FIC. | The network coding problem has garnered much attention from the research community; see @cite_10 and references therein. The main advantage network coding offers is an improvement in throughput, achieved by exploiting the fact that intermediate nodes can perform computation on incoming information rather than merely route it. Though the ICP is a special case of the more general network coding problem, equivalence between the two has been shown in @cite_7 @cite_21, i.e., every index coding problem can be converted to an instance of the network coding problem and vice versa. Network coding capacity was studied in @cite_21 by examining the corresponding index coding problem. The in-network function computation problem comprises source nodes generating messages, intermediate nodes performing computation on incoming information, and sink nodes seeking functions of source messages @cite_27 @cite_16.
The aim is to maximize the frequency of target function computation per network use @cite_16. This motivated the study of an ICP where clients' demands may also include functions of messages. | {
"cite_N": [
"@cite_7",
"@cite_21",
"@cite_27",
"@cite_16",
"@cite_10"
],
"mid": [
"2171844485",
"",
"2104496515",
"2129119419",
"108696806"
],
"abstract": [
"The index coding problem has recently attracted a significant attention from the research community due to its theoretical significance and applications in wireless ad hoc networks. An instance of the index coding problem includes a sender that holds a set of information messages X= x1,...,xk and a set of receivers R. Each receiver (x,H) in R needs to obtain a message x X and has prior side information consisting of a subset H of X . The sender uses a noiseless communication channel to broadcast encoding of messages in X to all clients. The objective is to find an encoding scheme that minimizes the number of transmissions required to satisfy the demands of all the receivers. In this paper, we analyze the relation between the index coding problem, the more general network coding problem, and the problem of finding a linear representation of a matroid. In particular, we show that any instance of the network coding and matroid representation problems can be efficiently reduced to an instance of the index coding problem. Our reduction implies that many important properties of the network coding and matroid representation problems carry over to the index coding problem. Specifically, we show that vector linear codes outperform scalar linear index codes and that vector linear codes are insufficient for achieving the optimum number of transmissions.",
"",
"We consider in-network computation of an arbitrary function over an arbitrary communication network. A network with capacity constraints on the links is given. Some nodes in the network generate data, e.g., like sensor nodes in a sensor network. An arbitrary function of this distributed data is to be obtained at a terminal node. The structure of the function is described by a given computation schema, which in turn is represented by a directed tree. We design computing and communicating schemes to obtain the function at the terminal at the maximum rate. For this, we formulate linear programs to determine network flows that maximize the computation rate. We then develop a fast combinatorial primal-dual algorithm to obtain near-optimal solutions to these linear programs. As a subroutine for this, we develop an algorithm for finding the minimum cost embedding of a tree in a network with any given set of link costs. We then briefly describe extensions of our techniques to the cases of multiple terminals wanting different functions, multiple computation schemas for a function, computation with a given desired precision, and to networks with energy constraints at nodes.",
"The following network computing problem is considered. Source nodes in a directed acyclic network generate independent messages and a single receiver node computes a target function f of the messages. The objective is to maximize the average number of times f can be computed per network usage, i.e., the “computing capacity”. The network coding problem for a single-receiver network is a special case of the network computing problem in which all of the source messages must be reproduced at the receiver. For network coding with a single receiver, routing is known to achieve the capacity by achieving the network min-cut upper bound. We extend the definition of min-cut to the network computing problem and show that the min-cut is still an upper bound on the maximum achievable rate and is tight for computing (using coding) any target function in multi-edge tree networks. It is also tight for computing linear target functions in any network. We also study the bound's tightness for different classes of target functions. In particular, we give a lower bound on the computing capacity in terms of the Steiner tree packing number and a different bound for symmetric functions. We also show that for certain networks and target functions, the computing capacity can be less than an arbitrarily small fraction of the min-cut bound.",
"This book contains a thorough discussion of the classical topics in information theory together with the first comprehensive treatment of network coding, a subject first emerged under information theory in the mid 1990's that has now diffused into coding theory, computer networks, wireless communications, complexity theory, cryptography, graph theory, etc. With a large number of examples, illustrations, and original problems, this book is excellent as a textbook or reference book for a senior or graduate level course on the subject, as well as a reference for researchers in related fields."
]
} |
1510.05198 | 1770543141 | Inferring latent attributes of people online is an important social computing task, but requires integrating the many heterogeneous sources of information available on the web. We propose learning individual representations of people using neural nets to integrate rich linguistic and network evidence gathered from social media. The algorithm is able to combine diverse cues, such as the text a person writes, their attributes (e.g. gender, employer, education, location) and social relations to other people. We show that by integrating both textual and network evidence, these representations offer improved performance at four important tasks in social media inference on Twitter: predicting (1) gender, (2) occupation, (3) location, and (4) friendships for users. Our approach scales to large datasets and the learned representations can be used as general features and have the potential to benefit a large number of downstream tasks including link prediction, community detection, or probabilistic reasoning over social networks. | Much work has been devoted to automatic user attribute inference given social signals. For example, @cite_69 @cite_13 @cite_15 @cite_53 @cite_62 @cite_65 @cite_10 @cite_11 focus on how to infer individual user attributes such as age, gender, political polarity, locations, occupation, educational information (e.g., major, year of matriculation) given user-generated content or network information. | {
"cite_N": [
"@cite_69",
"@cite_62",
"@cite_10",
"@cite_53",
"@cite_65",
"@cite_15",
"@cite_13",
"@cite_11"
],
"mid": [
"2017729405",
"2069090820",
"2250545651",
"91442942",
"2250747954",
"2250194349",
"1948823840",
"2253444830"
],
"abstract": [
"Social media outlets such as Twitter have become an important forum for peer interaction. Thus the ability to classify latent user attributes, including gender, age, regional origin, and political orientation solely from Twitter user language or similar highly informal content has important applications in advertising, personalization, and recommendation. This paper includes a novel investigation of stacked-SVM-based classification algorithms over a rich set of original features, applied to classifying these four user attributes. It also includes extensive analysis of features and approaches that are effective and not effective in classifying user attributes in Twitter-style informal written genres as distinct from the other primarily spoken genres previously studied in the user-property classification literature. Our models, singly and in ensemble, significantly outperform baseline models in all cases. A detailed analysis of model components and features provides an often entertaining insight into distinctive language-usage variation across gender, age, regional origin and political orientation in modern informal communication.",
"Location plays an essential role in our lives, bridging our online and offline worlds. This paper explores the interplay between people's location, interactions, and their social ties within a large real-world dataset. We present and evaluate Flap, a system that solves two intimately related tasks: link and location prediction in online social networks. For link prediction, Flap infers social ties by considering patterns in friendship formation, the content of people's messages, and user location. We show that while each component is a weak predictor of friendship alone, combining them results in a strong model, accurately identifying the majority of friendships. For location prediction, Flap implements a scalable probabilistic model of human mobility, where we treat users with known GPS positions as noisy sensors of the location of their friends. We explore supervised and unsupervised learning scenarios, and focus on the efficiency of both learning and inference. We evaluate Flap on a large sample of highly active users from two distinct geographical areas and show that it (1) reconstructs the entire friendship graph with high accuracy even when no edges are given; and (2) infers people's fine-grained location, even when they keep their data private and we can only access the location of their friends. Our models significantly outperform current comparable approaches to either task.",
"While user attribute extraction on social media has received considerable attention, existing approaches, mostly supervised, encounter great difficulty in obtaining gold standard data and are therefore limited to predicting unary predicates (e.g., gender). In this paper, we present a weakly-supervised approach to user profile extraction from Twitter. Users’ profiles from social media websites such as Facebook or Google Plus are used as a distant source of supervision for extraction of their attributes from user-generated text. In addition to traditional linguistic features used in distant supervision for information extraction, our approach also takes into account network information, a unique opportunity offered by social media. We test our algorithm on three attribute domains: spouse, education and job; experimental results demonstrate our approach is able to make accurate predictions for users’ attributes based on their tweets.",
"In this study we investigate how social media shape the networked public sphere and facilitate communication between communities with different political orientations. We examine two networks of political communication on Twitter, comprised of more than 250,000 tweets from the six weeks leading up to the 2010 U.S. congressional midterm elections. Using a combination of network clustering algorithms and manually-annotated data we demonstrate that the network of political retweets exhibits a highly segregated partisan structure, with extremely limited connectivity between left- and right-leaning users. Surprisingly this is not the case for the user-to-user mention network, which is dominated by a single politically heterogeneous cluster of users in which ideologically-opposed individuals interact at a much higher rate compared to the network of retweets. To explain the distinct topologies of the retweet and mention networks we conjecture that politically motivated individuals provoke interaction by injecting partisan content into information streams whose primary audience consists of ideologically-opposed users. We conclude with statistical evidence in support of this hypothesis.",
"Existing models for social media personal analytics assume access to thousands of messages per user, even though most users author content only sporadically over time. Given this sparsity, we: (i) leverage content from the local neighborhood of a user; (ii) evaluate batch models as a function of size and the amount of messages in various types of neighborhoods; and (iii) estimate the amount of time and tweets required for a dynamic model to predict user preferences. We show that even when limited or no self-authored data is available, language from friend, retweet and user mention communications provide sufficient evidence for prediction. When updating models over time based on Twitter, we find that political preference can often be predicted using roughly 100 tweets, depending on the context of user selection, where this could mean hours, or weeks, based on the author’s tweeting frequency.",
"While much work has considered the problem of latent attribute inference for users of social media such as Twitter, little has been done on non-English-based content and users. Here, we conduct the first assessment of latent attribute inference in languages beyond English, focusing on gender inference. We find that the gender inference problem in quite diverse languages can be addressed using existing machinery. Further, accuracy gains can be made by taking language-specific features into account. We identify languages with complex orthography, such as Japanese, as difficult for existing methods, suggesting a valuable direction for future research.",
"Automatically inferring user demographics from social media posts is useful for both social science research and a range of downstream applications in marketing and politics. We present the first extensive study where user behaviour on Twitter is used to build a predictive model of income. We apply non-linear methods for regression, i.e. Gaussian Processes, achieving strong correlation between predicted and actual user income. This allows us to shed light on the factors that characterise income on Twitter and analyse their interplay with user emotions and sentiment, perceived psycho-demographics and language use expressed through the topics of their posts. Our analysis uncovers correlations between different feature categories and income, some of which reflect common belief e.g. higher perceived education and intelligence indicates higher earnings, known differences e.g. gender and age differences, however, others show novel findings e.g. higher income users express more fear and anger, whereas lower income users express more of the time emotion and opinions.",
"Sociolinguistic studies investigate the relation between language and extra-linguistic variables. This requires both representative text data and the associated socio-economic meta-data of the subjects. Traditionally, sociolinguistic studies use small samples of hand-curated data and meta-data. This can lead to exaggerated or false conclusions. Using social media data offers a large-scale source of language data, but usually lacks reliable socio-economic meta-data. Our research aims to remedy both problems by exploring a large new data source, international review websites with user profiles. They provide more text data than manually collected studies, and more meta-data than most available social media text. We describe the data and present various pilot studies, illustrating the usefulness of this resource for sociolinguistic studies. Our approach can help generate new research hypotheses based on data-driven findings across several countries and languages."
]
} |
1510.05198 | 1770543141 | Inferring latent attributes of people online is an important social computing task, but requires integrating the many heterogeneous sources of information available on the web. We propose learning individual representations of people using neural nets to integrate rich linguistic and network evidence gathered from social media. The algorithm is able to combine diverse cues, such as the text a person writes, their attributes (e.g. gender, employer, education, location) and social relations to other people. We show that by integrating both textual and network evidence, these representations offer improved performance at four important tasks in social media inference on Twitter: predicting (1) gender, (2) occupation, (3) location, and (4) friendships for users. Our approach scales to large datasets and the learned representations can be used as general features and have the potential to benefit a large number of downstream tasks including link prediction, community detection, or probabilistic reasoning over social networks. | Taking advantage of large-scale user information, recent research has begun exploring logical reasoning over the social network (e.g., what's the probability that a New York City resident is a fan of the New York Knicks). Some work @cite_71 @cite_39 relies on logic reasoning paradigms such as Markov Logic Networks (MLNs) @cite_6 or Probabilistic Soft Logic (PSL) @cite_66, though there are still many challenges related to scalability. Social network inference usually takes advantage of the fundamental property of homophily @cite_42, which states that people sharing similar attributes have a higher chance of becoming friends (summarized by the proverb "birds of a feather flock together" @cite_17), and conversely friends (or couples, or people living in the same location) tend to share more attributes. Such properties have been harnessed for applications. 
Somewhat more distantly related to our approach are tasks like community detection @cite_50 @cite_60 and user-link prediction @cite_4 @cite_67 @cite_70 @cite_41 . | {
"cite_N": [
"@cite_67",
"@cite_4",
"@cite_60",
"@cite_70",
"@cite_41",
"@cite_42",
"@cite_6",
"@cite_39",
"@cite_71",
"@cite_50",
"@cite_66",
"@cite_17"
],
"mid": [
"2046253692",
"2154851992",
"2012921801",
"2105543219",
"2139694940",
"2130354913",
"1977970897",
"2125297418",
"1599528172",
"1971421925",
"92973652",
"1014449310"
],
"abstract": [
"Social media such as blogs, Facebook, Flickr, etc., presents data in a network format rather than classical IID distribution. To address the interdependency among data instances, relational learning has been proposed, and collective inference based on network connectivity is adopted for prediction. However, connections in social media are often multi-dimensional. An actor can connect to another actor for different reasons, e.g., alumni, colleagues, living in the same city, sharing similar interests, etc. Collective inference normally does not differentiate these connections. In this work, we propose to extract latent social dimensions based on network information, and then utilize them as features for discriminative learning. These social dimensions describe diverse affiliations of actors hidden in the network, and the discriminative learning can automatically determine which affiliations are better aligned with the class labels. Such a scheme is preferred when multiple diverse relations are associated with the same network. We conduct extensive experiments on social media data (one from a real-world blog site and the other from a popular content sharing site). Our model outperforms representative relational learning methods based on collective inference, especially when few labeled data are available. The sensitivity of this model and its connection to existing methods are also examined.",
"We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10% higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60% less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection.",
"Community detection algorithms are fundamental tools that allow us to uncover organizational principles in networks. When detecting communities, there are two possible sources of information one can use: the network structure, and the features and attributes of nodes. Even though communities form around nodes that have common edges and common attributes, typically, algorithms have only focused on one of these two data modalities: community detection algorithms traditionally focus only on the network structure, while clustering algorithms mostly consider only node attributes. In this paper, we develop Communities from Edge Structure and Node Attributes (CESNA), an accurate and scalable algorithm for detecting overlapping communities in networks with node attributes. CESNA statistically models the interaction between the network structure and the node attributes, which leads to more accurate community detection as well as improved robustness in the presence of noise in the network structure. CESNA has a linear runtime in the network size and is able to process networks an order of magnitude larger than comparable approaches. Last, CESNA also helps with the interpretation of detected communities by finding relevant node attributes for each community.",
"The study of collective behavior is to understand how individuals behave in a social network environment. Oceans of data generated by social media like Facebook, Twitter, Flickr and YouTube present opportunities and challenges to studying collective behavior in a large scale. In this work, we aim to learn to predict collective behavior in social media. In particular, given information about some individuals, how can we infer the behavior of unobserved individuals in the same network? A social-dimension based approach is adopted to address the heterogeneity of connections presented in social media. However, the networks in social media are normally of colossal size, involving hundreds of thousands or even millions of actors. The scale of networks entails scalable learning of models for collective behavior prediction. To address the scalability issue, we propose an edge-centric clustering scheme to extract sparse social dimensions. With sparse social dimensions, the social-dimension based approach can efficiently handle networks of millions of actors while demonstrating comparable prediction performance as other non-scalable methods.",
"Network communities represent basic structures for understanding the organization of real-world networks. A community (also referred to as a module or a cluster) is typically thought of as a group of nodes with more connections amongst its members than between its members and the remainder of the network. Communities in networks also overlap as nodes belong to multiple clusters at once. Due to the difficulties in evaluating the detected communities and the lack of scalable algorithms, the task of overlapping community detection in large networks largely remains an open problem. In this paper we present BIGCLAM (Cluster Affiliation Model for Big Networks), an overlapping community detection method that scales to large networks of millions of nodes and edges. We build on a novel observation that overlaps between communities are densely connected. This is in sharp contrast with present community detection methods which implicitly assume that overlaps between communities are sparsely connected and thus cannot properly extract overlapping communities in networks. In this paper, we develop a model-based community detection algorithm that can detect densely overlapping, hierarchically nested as well as non-overlapping communities in massive networks. We evaluate our algorithm on 6 large social, collaboration and information networks with ground-truth community information. Experiments show state of the art performance both in terms of the quality of detected communities as well as in speed and scalability of our algorithm.",
"Similarity breeds connection. This principle—the homophily principle—structures network ties of every type, including marriage, friendship, work, advice, support, information transfer, exchange, comembership, and other types of relationship. The result is that people's personal networks are homogeneous with regard to many sociodemographic, behavioral, and intrapersonal characteristics. Homophily limits people's social worlds in a way that has powerful implications for the information they receive, the attitudes they form, and the interactions they experience. Homophily in race and ethnicity creates the strongest divides in our personal environments, with age, religion, education, occupation, and gender following in roughly that order. Geographic propinquity, families, organizations, and isomorphic positions in social systems all create contexts in which homophilous relations form. Ties between nonsimilar individuals also dissolve at a higher rate, which sets the stage for the formation of niches (localize...",
"We propose a simple approach to combining first-order logic and probabilistic graphical models in a single representation. A Markov logic network (MLN) is a first-order knowledge base with a weight attached to each formula (or clause). Together with a set of constants representing objects in the domain, it specifies a ground Markov network containing one feature for each possible grounding of a first-order formula in the KB, with the corresponding weight. Inference in MLNs is performed by MCMC over the minimal subset of the ground network required for answering the query. Weights are efficiently learned from relational databases by iteratively optimizing a pseudo-likelihood measure. Optionally, additional clauses are learned using inductive logic programming techniques. Experiments with a real-world database and knowledge base in a university domain illustrate the promise of this approach.",
"Many information-management tasks (including classification, retrieval, information extraction, and information integration) can be formalized as inference in an appropriate probabilistic first-order logic. However, most probabilistic first-order logics are not efficient enough for realistically-sized instances of these tasks. One key problem is that queries are typically answered by \"grounding\" the query---i.e., mapping it to a propositional representation, and then performing propositional inference---and with a large database of facts, groundings can be very large, making inference and learning computationally expensive. Here we present a first-order probabilistic language which is well-suited to approximate \"local\" grounding: in particular, every query @math can be approximately grounded with a small graph. The language is an extension of stochastic logic programs where inference is performed by a variant of personalized PageRank. Experimentally, we show that the approach performs well on an entity resolution task, a classification task, and a joint inference task; that the cost of inference is independent of database size; and that speedup in learning is possible by multi-threading.",
"We propose a framework for inferring the latent attitudes or preferences of users by performing probabilistic first-order logical reasoning over the social network graph. Our method answers questions about Twitter users like 'Does this user like sushi?' or 'Is this user a New York Knicks fan?' by building a probabilistic model that reasons over user attributes (the user's location or gender) and the social network (the user's friends and spouse), via inferences like homophily (I am more likely to like sushi if spouse or friends like sushi, I am more likely to like the Knicks if I live in New York). The algorithm uses distant supervision, semi-supervised data harvesting and vector space models to extract user attributes (e.g. spouse, education, location) and preferences (likes and dislikes) from text. The extracted propositions are then fed into a probabilistic reasoner (we investigate both Markov Logic and Probabilistic Soft Logic). Our experiments show that probabilistic logical reasoning significantly improves the performance on attribute and relation extraction, and also achieves an F-score of 0.791 at predicting a user's likes or dislikes, significantly better than two strong baselines.",
"A number of recent studies have focused on the statistical properties of networked systems such as social networks and the Worldwide Web. Researchers have concentrated particularly on a few properties that seem to be common to many networks: the small-world property, power-law degree distributions, and network transitivity. In this article, we highlight another property that is found in many networks, the property of community structure, in which network nodes are joined together in tightly knit groups, between which there are only looser connections. We propose a method for detecting such communities, built around the idea of using centrality indices to find community boundaries. We test our method on computer-generated and real-world graphs whose community structure is already known and find that the method detects this known structure with high sensitivity and reliability. We also apply the method to two networks whose community structure is not well known—a collaboration network and a food web—and find that it detects significant and informative community divisions in both cases.",
"Probabilistic soft logic (PSL) is a framework for collective, probabilistic reasoning in relational domains. PSL uses first order logic rules as a template language for graphical models over random variables with soft truth values from the interval [0, 1]. Inference in this setting is a continuous optimization task, which can be solved efficiently. This paper provides an overview of the PSL language and its techniques for inference and weight learning. An implementation of PSL is available at http: psl.umiacs.umd.edu .",
"In this paper, we extend existing work on latent attribute inference by leveraging the principle of homophily: we evaluate the inference accuracy gained by augmenting the user features with features derived from the Twitter profiles and postings of her friends. We consider three attributes which have varying degrees of assortativity: gender, age, and political affiliation. Our approach yields a significant and robust increase in accuracy for both age and political affiliation, indicating that our approach boosts performance for attributes with moderate to high assortativity. Furthermore, different neighborhood subsets yielded optimal performance for different attributes, suggesting that different subsamples of the user's neighborhood characterize different aspects of the user herself. Finally, inferences using only the features of a user's neighbors outperformed those based on the user's features alone. This suggests that the neighborhood context alone carries substantial information about the user."
]
} |
1510.05198 | 1770543141 | Inferring latent attributes of people online is an important social computing task, but requires integrating the many heterogeneous sources of information available on the web. We propose learning individual representations of people using neural nets to integrate rich linguistic and network evidence gathered from social media. The algorithm is able to combine diverse cues, such as the text a person writes, their attributes (e.g. gender, employer, education, location) and social relations to other people. We show that by integrating both textual and network evidence, these representations offer improved performance at four important tasks in social media inference on Twitter: predicting (1) gender, (2) occupation, (3) location, and (4) friendships for users. Our approach scales to large datasets and the learned representations can be used as general features and have the potential to benefit a large number of downstream tasks including link prediction, community detection, or probabilistic reasoning over social networks. | Much work has been devoted to automatic user attribute inference given social signals. For example, @cite_69 @cite_15 @cite_53 @cite_62 @cite_11 focus on how to infer individual user attributes such as age, gender, political polarity, locations, occupation, educational information (e.g., major, year of matriculation) given user-generated content or network information. | {
"cite_N": [
"@cite_69",
"@cite_62",
"@cite_53",
"@cite_15",
"@cite_11"
],
"mid": [
"2017729405",
"2069090820",
"91442942",
"2250194349",
"2253444830"
],
"abstract": [
"Social media outlets such as Twitter have become an important forum for peer interaction. Thus the ability to classify latent user attributes, including gender, age, regional origin, and political orientation solely from Twitter user language or similar highly informal content has important applications in advertising, personalization, and recommendation. This paper includes a novel investigation of stacked-SVM-based classification algorithms over a rich set of original features, applied to classifying these four user attributes. It also includes extensive analysis of features and approaches that are effective and not effective in classifying user attributes in Twitter-style informal written genres as distinct from the other primarily spoken genres previously studied in the user-property classification literature. Our models, singly and in ensemble, significantly outperform baseline models in all cases. A detailed analysis of model components and features provides an often entertaining insight into distinctive language-usage variation across gender, age, regional origin and political orientation in modern informal communication.",
"Location plays an essential role in our lives, bridging our online and offline worlds. This paper explores the interplay between people's location, interactions, and their social ties within a large real-world dataset. We present and evaluate Flap, a system that solves two intimately related tasks: link and location prediction in online social networks. For link prediction, Flap infers social ties by considering patterns in friendship formation, the content of people's messages, and user location. We show that while each component is a weak predictor of friendship alone, combining them results in a strong model, accurately identifying the majority of friendships. For location prediction, Flap implements a scalable probabilistic model of human mobility, where we treat users with known GPS positions as noisy sensors of the location of their friends. We explore supervised and unsupervised learning scenarios, and focus on the efficiency of both learning and inference. We evaluate Flap on a large sample of highly active users from two distinct geographical areas and show that it (1) reconstructs the entire friendship graph with high accuracy even when no edges are given; and (2) infers people's fine-grained location, even when they keep their data private and we can only access the location of their friends. Our models significantly outperform current comparable approaches to either task.",
"In this study we investigate how social media shape the networked public sphere and facilitate communication between communities with different political orientations. We examine two networks of political communication on Twitter, comprised of more than 250,000 tweets from the six weeks leading up to the 2010 U.S. congressional midterm elections. Using a combination of network clustering algorithms and manually-annotated data we demonstrate that the network of political retweets exhibits a highly segregated partisan structure, with extremely limited connectivity between left- and right-leaning users. Surprisingly this is not the case for the user-to-user mention network, which is dominated by a single politically heterogeneous cluster of users in which ideologically-opposed individuals interact at a much higher rate compared to the network of retweets. To explain the distinct topologies of the retweet and mention networks we conjecture that politically motivated individuals provoke interaction by injecting partisan content into information streams whose primary audience consists of ideologically-opposed users. We conclude with statistical evidence in support of this hypothesis.",
"While much work has considered the problem of latent attribute inference for users of social media such as Twitter, little has been done on non-English-based content and users. Here, we conduct the first assessment of latent attribute inference in languages beyond English, focusing on gender inference. We find that the gender inference problem in quite diverse languages can be addressed using existing machinery. Further, accuracy gains can be made by taking language-specific features into account. We identify languages with complex orthography, such as Japanese, as difficult for existing methods, suggesting a valuable direction for future research.",
"Sociolinguistic studies investigate the relation between language and extra-linguistic variables. This requires both representative text data and the associated socio-economic meta-data of the subjects. Traditionally, sociolinguistic studies use small samples of hand-curated data and meta-data. This can lead to exaggerated or false conclusions. Using social media data offers a large-scale source of language data, but usually lacks reliable socio-economic meta-data. Our research aims to remedy both problems by exploring a large new data source, international review websites with user profiles. They provide more text data than manually collected studies, and more meta-data than most available social media text. We describe the data and present various pilot studies, illustrating the usefulness of this resource for sociolinguistic studies. Our approach can help generate new research hypotheses based on data-driven findings across several countries and languages."
]
} |
1510.05198 | 1770543141 | Inferring latent attributes of people online is an important social computing task, but requires integrating the many heterogeneous sources of information available on the web. We propose learning individual representations of people using neural nets to integrate rich linguistic and network evidence gathered from social media. The algorithm is able to combine diverse cues, such as the text a person writes, their attributes (e.g. gender, employer, education, location) and social relations to other people. We show that by integrating both textual and network evidence, these representations offer improved performance at four important tasks in social media inference on Twitter: predicting (1) gender, (2) occupation, (3) location, and (4) friendships for users. Our approach scales to large datasets and the learned representations can be used as general features in and have the potential to benefit a large number of downstream tasks including link prediction, community detection, or probabilistic reasoning over social networks. | Taking advantage of large scale user information, recent research has begun exploring logical reasoning over the social network (e.g., what's the probability that a New York City resident is a fan of the New York Knicks). Some work @cite_71 @cite_39 relies on logic reasoning paradigms such as Markov Logic Networks (MLNs) @cite_6 . | {
"cite_N": [
"@cite_6",
"@cite_71",
"@cite_39"
],
"mid": [
"1977970897",
"1599528172",
"2125297418"
],
"abstract": [
"We propose a simple approach to combining first-order logic and probabilistic graphical models in a single representation. A Markov logic network (MLN) is a first-order knowledge base with a weight attached to each formula (or clause). Together with a set of constants representing objects in the domain, it specifies a ground Markov network containing one feature for each possible grounding of a first-order formula in the KB, with the corresponding weight. Inference in MLNs is performed by MCMC over the minimal subset of the ground network required for answering the query. Weights are efficiently learned from relational databases by iteratively optimizing a pseudo-likelihood measure. Optionally, additional clauses are learned using inductive logic programming techniques. Experiments with a real-world database and knowledge base in a university domain illustrate the promise of this approach.",
"We propose a framework for inferring the latent attitudes or preferences of users by performing probabilistic first-order logical reasoning over the social network graph. Our method answers questions about Twitter users like \"Does this user like sushi?\" or \"Is this user a New York Knicks fan?\" by building a probabilistic model that reasons over user attributes (the user's location or gender) and the social network (the user's friends and spouse), via inferences like homophily (I am more likely to like sushi if spouse or friends like sushi, I am more likely to like the Knicks if I live in New York). The algorithm uses distant supervision, semi-supervised data harvesting and vector space models to extract user attributes (e.g. spouse, education, location) and preferences (likes and dislikes) from text. The extracted propositions are then fed into a probabilistic reasoner (we investigate both Markov Logic and Probabilistic Soft Logic). Our experiments show that probabilistic logical reasoning significantly improves the performance on attribute and relation extraction, and also achieves an F-score of 0.791 at predicting a user's likes or dislikes, significantly better than two strong baselines.",
"Many information-management tasks (including classification, retrieval, information extraction, and information integration) can be formalized as inference in an appropriate probabilistic first-order logic. However, most probabilistic first-order logics are not efficient enough for realistically-sized instances of these tasks. One key problem is that queries are typically answered by \"grounding\" the query---i.e., mapping it to a propositional representation, and then performing propositional inference---and with a large database of facts, groundings can be very large, making inference and learning computationally expensive. Here we present a first-order probabilistic language which is well-suited to approximate \"local\" grounding: in particular, every query @math can be approximately grounded with a small graph. The language is an extension of stochastic logic programs where inference is performed by a variant of personalized PageRank. Experimentally, we show that the approach performs well on an entity resolution task, a classification task, and a joint inference task; that the cost of inference is independent of database size; and that speedup in learning is possible by multi-threading."
]
} |
1510.05198 | 1770543141 | Inferring latent attributes of people online is an important social computing task, but requires integrating the many heterogeneous sources of information available on the web. We propose learning individual representations of people using neural nets to integrate rich linguistic and network evidence gathered from social media. The algorithm is able to combine diverse cues, such as the text a person writes, their attributes (e.g. gender, employer, education, location) and social relations to other people. We show that by integrating both textual and network evidence, these representations offer improved performance at four important tasks in social media inference on Twitter: predicting (1) gender, (2) occupation, (3) location, and (4) friendships for users. Our approach scales to large datasets and the learned representations can be used as general features in and have the potential to benefit a large number of downstream tasks including link prediction, community detection, or probabilistic reasoning over social networks. | Social network inference usually takes advantage of the fundamental property of homophily @cite_42 , which states that people sharing similar attributes have a higher chance of becoming friends (summarized by the proverb "birds of a feather flock together" @cite_17 ), and conversely friends (or couples, or people living in the same location) tend to share more attributes. Such properties have been harnessed for applications like community detection @cite_60 and user-link prediction @cite_4 @cite_67 . | {
"cite_N": [
"@cite_67",
"@cite_4",
"@cite_60",
"@cite_42",
"@cite_17"
],
"mid": [
"2046253692",
"2154851992",
"2012921801",
"2130354913",
"1014449310"
],
"abstract": [
"Social media such as blogs, Facebook, Flickr, etc., presents data in a network format rather than classical IID distribution. To address the interdependency among data instances, relational learning has been proposed, and collective inference based on network connectivity is adopted for prediction. However, connections in social media are often multi-dimensional. An actor can connect to another actor for different reasons, e.g., alumni, colleagues, living in the same city, sharing similar interests, etc. Collective inference normally does not differentiate these connections. In this work, we propose to extract latent social dimensions based on network information, and then utilize them as features for discriminative learning. These social dimensions describe diverse affiliations of actors hidden in the network, and the discriminative learning can automatically determine which affiliations are better aligned with the class labels. Such a scheme is preferred when multiple diverse relations are associated with the same network. We conduct extensive experiments on social media data (one from a real-world blog site and the other from a popular content sharing site). Our model outperforms representative relational learning methods based on collective inference, especially when few labeled data are available. The sensitivity of this model and its connection to existing methods are also examined.",
"We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10% higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60% less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection.",
"Community detection algorithms are fundamental tools that allow us to uncover organizational principles in networks. When detecting communities, there are two possible sources of information one can use: the network structure, and the features and attributes of nodes. Even though communities form around nodes that have common edges and common attributes, typically, algorithms have only focused on one of these two data modalities: community detection algorithms traditionally focus only on the network structure, while clustering algorithms mostly consider only node attributes. In this paper, we develop Communities from Edge Structure and Node Attributes (CESNA), an accurate and scalable algorithm for detecting overlapping communities in networks with node attributes. CESNA statistically models the interaction between the network structure and the node attributes, which leads to more accurate community detection as well as improved robustness in the presence of noise in the network structure. CESNA has a linear runtime in the network size and is able to process networks an order of magnitude larger than comparable approaches. Last, CESNA also helps with the interpretation of detected communities by finding relevant node attributes for each community.",
"Similarity breeds connection. This principle—the homophily principle—structures network ties of every type, including marriage, friendship, work, advice, support, information transfer, exchange, comembership, and other types of relationship. The result is that people's personal networks are homogeneous with regard to many sociodemographic, behavioral, and intrapersonal characteristics. Homophily limits people's social worlds in a way that has powerful implications for the information they receive, the attitudes they form, and the interactions they experience. Homophily in race and ethnicity creates the strongest divides in our personal environments, with age, religion, education, occupation, and gender following in roughly that order. Geographic propinquity, families, organizations, and isomorphic positions in social systems all create contexts in which homophilous relations form. Ties between nonsimilar individuals also dissolve at a higher rate, which sets the stage for the formation of niches (localize...",
"In this paper, we extend existing work on latent attribute inference by leveraging the principle of homophily: we evaluate the inference accuracy gained by augmenting the user features with features derived from the Twitter profiles and postings of her friends. We consider three attributes which have varying degrees of assortativity: gender, age, and political affiliation. Our approach yields a significant and robust increase in accuracy for both age and political affiliation, indicating that our approach boosts performance for attributes with moderate to high assortativity. Furthermore, different neighborhood subsets yielded optimal performance for different attributes, suggesting that different subsamples of the user's neighborhood characterize different aspects of the user herself. Finally, inferences using only the features of a user's neighbors outperformed those based on the user's features alone. This suggests that the neighborhood context alone carries substantial information about the user."
]
} |
1510.05198 | 1770543141 | Inferring latent attributes of people online is an important social computing task, but requires integrating the many heterogeneous sources of information available on the web. We propose learning individual representations of people using neural nets to integrate rich linguistic and network evidence gathered from social media. The algorithm is able to combine diverse cues, such as the text a person writes, their attributes (e.g. gender, employer, education, location) and social relations to other people. We show that by integrating both textual and network evidence, these representations offer improved performance at four important tasks in social media inference on Twitter: predicting (1) gender, (2) occupation, (3) location, and (4) friendships for users. Our approach scales to large datasets and the learned representations can be used as general features in and have the potential to benefit a large number of downstream tasks including link prediction, community detection, or probabilistic reasoning over social networks. | The proposed framework also focuses on attribute inference, which can be reframed as relation identification problems, i.e., whether a relation holds between a user and an entity. This work is thus related to a great deal of recent researches on relation inference (e.g., @cite_8 @cite_20 @cite_7 ). | {
"cite_N": [
"@cite_7",
"@cite_20",
"@cite_8"
],
"mid": [
"1852412531",
"2283196293",
"1934264538"
],
"abstract": [
"© 2013 Association for Computational Linguistics. Traditional relation extraction predicts relations within some fixed and finite target schema. Machine learning approaches to this task require either manual annotation or, in the case of distant supervision, existing structured sources of the same schema. The need for existing datasets can be avoided by using a universal schema: the union of all involved schemas (surface form predicates as in OpenIE, and relations in the schemas of preexisting databases). This schema has an almost unlimited set of relations (due to surface forms), and supports integration with existing structured data (through the relation types of existing databases). To populate a database of such schema we present matrix factorization models that learn latent feature vectors for entity tuples and relations. We show that such latent models achieve substantially higher accuracy than a traditional classification approach. More importantly, by operating simultaneously on relations observed in text and in pre-existing structured DBs such as Freebase, we are able to reason about unstructured and structured data in mutually-supporting ways. By doing so our approach outperforms state-of-the-art distant supervision.",
"We deal with embedding a large scale knowledge graph composed of entities and relations into a continuous vector space. TransE is a promising method proposed recently, which is very efficient while achieving state-of-the-art predictive performance. We discuss some mapping properties of relations which should be considered in embedding, such as reflexive, one-to-many, many-to-one, and many-to-many. We note that TransE does not do well in dealing with these properties. Some complex models are capable of preserving these mapping properties but sacrifice efficiency in the process. To make a good trade-off between model capacity and efficiency, in this paper we propose TransH which models a relation as a hyperplane together with a translation operation on it. In this way, we can well preserve the above mapping properties of relations with almost the same model complexity of TransE. Additionally, as a practical knowledge graph is often far from completed, how to construct negative examples to reduce false negative labels in training is very important. Utilizing the one-to-many/many-to-one mapping property of a relation, we propose a simple trick to reduce the possibility of false negative labeling. We conduct extensive experiments on link prediction, triplet classification and fact extraction on benchmark datasets like WordNet and Freebase. Experiments show TransH delivers significant improvements over TransE on predictive accuracy with comparable capability to scale up.",
"Path queries on a knowledge graph can be used to answer compositional questions such as \"What languages are spoken by people living in Lisbon?\". However, knowledge graphs often have missing facts (edges) which disrupts path queries. Recent models for knowledge base completion impute missing facts by embedding knowledge graphs in vector spaces. We show that these models can be recursively applied to answer path queries, but that they suffer from cascading errors. This motivates a new \"compositional\" training objective, which dramatically improves all models' ability to answer path queries, in some cases more than doubling accuracy. On a standard knowledge base completion task, we also demonstrate that compositional training acts as a novel form of structural regularization, reliably improving performance across all base models (reducing errors by up to 43%) and achieving new state-of-the-art results."
]
} |
1510.05198 | 1770543141 | Inferring latent attributes of people online is an important social computing task, but requires integrating the many heterogeneous sources of information available on the web. We propose learning individual representations of people using neural nets to integrate rich linguistic and network evidence gathered from social media. The algorithm is able to combine diverse cues, such as the text a person writes, their attributes (e.g. gender, employer, education, location) and social relations to other people. We show that by integrating both textual and network evidence, these representations offer improved performance at four important tasks in social media inference on Twitter: predicting (1) gender, (2) occupation, (3) location, and (4) friendships for users. Our approach scales to large datasets and the learned representations can be used as general features in and have the potential to benefit a large number of downstream tasks including link prediction, community detection, or probabilistic reasoning over social networks. | The proposed framework also focuses on attribute inference, which can be reframed as a relation identification problem, i.e., whether a relation holds between a user and an entity. This work is thus related to a great deal of recent research on relation inference (e.g., @cite_8 @cite_20 @cite_7 ). | {
"cite_N": [
"@cite_30",
"@cite_32",
"@cite_52",
"@cite_44",
"@cite_19",
"@cite_45",
"@cite_63",
"@cite_31",
"@cite_51"
],
"mid": [
"1954715867",
"2098688754",
"2132679783",
"2019496358",
"2107598941",
"197270748",
"1514969207",
"2103931177",
"2107322005"
],
"abstract": [
"Recently, there has been much effort in making databases for molecular biology more accessible and interoperable. However, information in text form, such as MEDLINE records, remains a greatly underutilized source of biological information. We have begun a research effort aimed at automatically mapping information from text sources into structured representations, such as knowledge bases. Our approach to this task is to use machine-learning methods to induce routines for extracting facts from text. We describe two learning methods that we have applied to this task - a statistical text classification method, and a relational learning method - and our initial experiments in learning such information-extraction routines. We also present an approach to decreasing the cost of learning information-extraction routines by learning from \"weakly\" labeled training data.",
"We present a web mining method for discovering and enhancing relationships in which a specified concept (word class) participates. We discover a whole range of relationships focused on the given concept, rather than generic known relationships as in most previous work. Our method is based on clustering patterns that contain concept words and other words related to them. We evaluate the method on three different rich concepts and find that in each case the method generates a broad variety of relationships with good precision.",
"Information extraction (IE) holds the promise of generating a large-scale knowledge base from the Web's natural language text. Knowledge-based weak supervision, using structured data to heuristically label a training corpus, works towards this goal by enabling the automated learning of a potentially unbounded number of relation extractors. Recently, researchers have developed multi-instance learning algorithms to combat the noisy training data that can come from heuristic labeling, but their models assume relations are disjoint --- for example they cannot extract the pair Founded(Jobs, Apple) and CEO-of(Jobs, Apple). This paper presents a novel approach for multi-instance learning with overlapping relations that combines a sentence-level extraction model with a simple, corpus-level component for aggregating the individual facts. We apply our model to learn extractors for NY Times text using weak supervision from Freebase. Experiments show that the approach runs quickly and yields surprising gains in accuracy, at both the aggregate and sentence level.",
"Various techniques have been developed to automatically induce semantic dictionaries from text corpora and from the Web. Our research combines corpus-based semantic lexicon induction with statistics acquired from the Web to improve the accuracy of automatically acquired domain-specific dictionaries. We use a weakly supervised bootstrapping algorithm to induce a semantic lexicon from a text corpus, and then issue Web queries to generate co-occurrence statistics between each lexicon entry and semantically related terms. The Web statistics provide a source of independent evidence to confirm, or disconfirm, that a word belongs to the intended semantic category. We evaluate this approach on 7 semantic categories representing two domains. Our results show that the Web statistics dramatically improve the ranking of lexicon entries, and can also be used to filter incorrect entries.",
"Modern models of relation extraction for tasks like ACE are based on supervised learning of relations from small hand-labeled corpora. We investigate an alternative paradigm that does not require labeled corpora, avoiding the domain dependence of ACE-style algorithms, and allowing the use of corpora of any size. Our experiments use Freebase, a large semantic database of several thousand relations, to provide distant supervision. For each pair of entities that appears in some Freebase relation, we find all sentences containing those entities in a large unlabeled corpus and extract textual features to train a relation classifier. Our algorithm combines the advantages of supervised IE (combining 400,000 noisy pattern features in a probabilistic classifier) and unsupervised IE (extracting large numbers of relations from large corpora of any domain). Our model is able to extract 10,000 instances of 102 relations at a precision of 67.6%. We also analyze feature performance, showing that syntactic parse features are particularly helpful for relations that are ambiguous or lexically distant in their expression.",
"Information extraction systems usually require two dictionaries: a semantic lexicon and a dictionary of extraction patterns for the domain. We present a multilevel bootstrapping algorithm that generates both the semantic lexicon and extraction patterns simultaneously. As input, our technique requires only unannotated training texts and a handful of seed words for a category. We use a mutual bootstrapping technique to alternately select the best extraction pattern for the category and bootstrap its extractions into the semantic lexicon, which is the basis for selecting the next extraction pattern. To make this approach more robust, we add a second level of bootstrapping (metabootstrapping) that retains only the most reliable lexicon entries produced by mutual bootstrapping and then restarts the process. We evaluated this multilevel bootstrapping technique on a collection of corporate web pages and a corpus of terrorism news articles. The algorithm produced high-quality dictionaries for several semantic categories.",
"Open-class semantic lexicon induction is of great interest for current knowledge harvesting algorithms. We propose a general framework that uses patterns in bootstrapping fashion to learn open-class semantic lexicons for different kinds of relations. These patterns require seeds. To estimate the goodness (the potential yield) of new seeds, we introduce a regression model that considers the connectivity behavior of the seed during bootstrapping. The generalized regression model is evaluated on six different kinds of relations with over 10000 different seeds for English and Spanish patterns. Our approach reaches robust performance of 90% correlation coefficient with 15% error rate for any of the patterns when predicting the goodness of seeds.",
"Text documents often contain valuable structured data that is hidden in regular English sentences. This data is best exploited if available as a relational table that we could use for answering precise queries or running data mining tasks. We explore a technique for extracting such tables from document collections that requires only a handful of training examples from users. These examples are used to generate extraction patterns, that in turn result in new tuples being extracted from the document collection. We build on this idea and present our Snowball system. Snowball introduces novel strategies for generating patterns and extracting tuples from plain-text documents. At each iteration of the extraction process, Snowball evaluates the quality of these patterns and tuples without human intervention, and keeps only the most reliable ones for the next iteration. In this paper we also develop a scalable evaluation methodology and metrics for our task, and present a thorough experimental evaluation of Snowball and comparable techniques over a collection of more than 300,000 newspaper documents.",
"A challenging problem in open information extraction and text mining is the learning of the selectional restrictions of semantic relations. We propose a minimally supervised bootstrapping algorithm that uses a single seed and a recursive lexico-syntactic pattern to learn the arguments and the supertypes of a diverse set of semantic relations from the Web. We evaluate the performance of our algorithm on multiple semantic relations expressed using \"verb\", \"noun\", and \"verb prep\" lexico-syntactic patterns. Human-based evaluation shows that the accuracy of the harvested information is about 90%. We also compare our results with existing knowledge base to outline the similarities and differences of the granularity and diversity of the harvested knowledge."
]
} |
1510.05198 | 1770543141 | Inferring latent attributes of people online is an important social computing task, but requires integrating the many heterogeneous sources of information available on the web. We propose learning individual representations of people using neural nets to integrate rich linguistic and network evidence gathered from social media. The algorithm is able to combine diverse cues, such as the text a person writes, their attributes (e.g. gender, employer, education, location) and social relations to other people. We show that by integrating both textual and network evidence, these representations offer improved performance at four important tasks in social media inference on Twitter: predicting (1) gender, (2) occupation, (3) location, and (4) friendships for users. Our approach scales to large datasets and the learned representations can be used as general features in and have the potential to benefit a large number of downstream tasks including link prediction, community detection, or probabilistic reasoning over social networks. | Our approach is also related to work on Multi-task learning @cite_21 @cite_9 @cite_46 . We learn latent representations that are jointly optimized for a diverse set of predictive tasks (in our case: user relations, text language models and their social relations). By jointly learning from a diverse set of social signals, we are able to demonstrate consistent performance improvements across a range of predictive tasks. | {
"cite_N": [
"@cite_46",
"@cite_9",
"@cite_21"
],
"mid": [
"2117130368",
"",
"2133013156"
],
"abstract": [
"We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel form of semi-supervised learning for the shared tasks. We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in state-of-the-art performance.",
"",
"This paper investigates learning in a lifelong context. Lifelong learning addresses situations in which a learner faces a whole stream of learning tasks. Such scenarios provide the opportunity to transfer knowledge across multiple learning tasks, in order to generalize more accurately from less training data. In this paper, several different approaches to lifelong learning are described, and applied in an object recognition domain. It is shown that across the board, lifelong learning approaches generalize consistently more accurately from less training data, by their ability to transfer knowledge across learning tasks."
]
} |
1510.05198 | 1770543141 | Inferring latent attributes of people online is an important social computing task, but requires integrating the many heterogeneous sources of information available on the web. We propose learning individual representations of people using neural nets to integrate rich linguistic and network evidence gathered from social media. The algorithm is able to combine diverse cues, such as the text a person writes, their attributes (e.g. gender, employer, education, location) and social relations to other people. We show that by integrating both textual and network evidence, these representations offer improved performance at four important tasks in social media inference on Twitter: predicting (1) gender, (2) occupation, (3) location, and (4) friendships for users. Our approach scales to large datasets and the learned representations can be used as general features in and have the potential to benefit a large number of downstream tasks including link prediction, community detection, or probabilistic reasoning over social networks. | Our work is inspired by classic work on spectral learning for graphs (e.g., @cite_68 @cite_5 ) and on recent research @cite_4 @cite_72 that learns embedded representations for a graph's vertices. Our model extends this work by modeling not only user-user network graphs, but also incorporating diverse social signals including unstructured text, user attributes, and relations, enabling more sophisticated inferences and offering an integrated model of homophily in social relations. | {
"cite_N": [
"@cite_72",
"@cite_5",
"@cite_68",
"@cite_4"
],
"mid": [
"1888005072",
"2055334398",
"2161902954",
"2154851992"
],
"abstract": [
"This paper studies the problem of embedding very large information networks into low-dimensional vector spaces, which is useful in many tasks such as visualization, node classification, and link prediction. Most existing graph embedding methods do not scale for real world information networks which usually contain millions of nodes. In this paper, we propose a novel network embedding method called the "LINE", which is suitable for arbitrary types of information networks: undirected, directed, and/or weighted. The method optimizes a carefully designed objective function that preserves both the local and global network structures. An edge-sampling algorithm is proposed that addresses the limitation of the classical stochastic gradient descent and improves both the effectiveness and the efficiency of the inference. Empirical experiments prove the effectiveness of the LINE on a variety of real-world information networks, including language networks, social networks, and citation networks. The algorithm is very efficient, which is able to learn the embedding of a network with millions of vertices and billions of edges in a few hours on a typical single machine. The source code of the LINE is available online at https://github.com/tangjianpku/LINE.",
"A generalized (molecular) graph-theoretical matrix is introduced in such a way that the adjacency and distance matrices of (molecular) graphs are particular cases of it. By using this matrix, two new molecular graph-theoretical vectors and a vector–matrix–vector multiplication procedure, a generalized invariant is introduced. This invariant permits a generalization of some of the classical topological indices (TIs). Thus, Wiener W index, Zagreb M1 and M2 indices, Balaban J index, Harary H number, Randic χ index and valence connectivity index χv resulted as particular cases of an infinite set of molecular descriptors that can be derived from the same invariant.",
"We present a unified framework for learning link prediction and edge weight prediction functions in large networks, based on the transformation of a graph's algebraic spectrum. Our approach generalizes several graph kernels and dimensionality reduction methods and provides a method to estimate their parameters efficiently. We show how the parameters of these prediction functions can be learned by reducing the problem to a one-dimensional regression problem whose runtime only depends on the method's reduced rank and that can be inspected visually. We derive variants that apply to undirected, weighted, unweighted, unipartite and bipartite graphs. We evaluate our method experimentally using examples from social networks, collaborative filtering, trust networks, citation networks, authorship graphs and hyperlink networks.",
"We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10% higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60% less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection."
]
} |
1510.04347 | 2951937026 | Regular Path Queries (RPQs) are a type of graph query where answers are pairs of nodes connected by a sequence of edges matching a regular expression. We study the techniques to process such queries on a distributed graph of data. While many techniques assume the location of each data element (node or edge) is known, when the components of the distributed system are autonomous, the data will be arbitrarily distributed. As the different query processing strategies are equivalently costly in the worst case, we isolate query-dependent cost factors and present a method to choose between strategies, using new query cost estimation techniques. We evaluate our techniques using meaningful queries on biomedical data. | Several data structures have also been proposed to summarize" graph data, which allows a user to view the structure of the data and the existing paths to be queried, and can also be used to optimize query execution by pruning the search graph @cite_23 @cite_8 @cite_7 . These techniques mainly give indications of whether paths exist, with no direct applications to cost estimation. | {
"cite_N": [
"@cite_7",
"@cite_23",
"@cite_8"
],
"mid": [
"1983604014",
"2134356404",
"263916289"
],
"abstract": [
"To facilitate queries over semi-structured data, various structural summaries have been proposed. Structural summaries are derived directly from the data and serve as indices for evaluating path expressions on semi-structured or XML data. We introduce the D(k) index, an adaptive structural summary for general graph structured documents. Building on previous work, 1-index and A(k) index, the D(k)-index is also based on the concept of bisimilarity. However, as a generalization of the 1-index and A(k)-index, the D(k) index possesses the adaptive ability to adjust its structure according to the current query load. This dynamism also facilitates efficient update algorithms, which are crucial to practical applications of structural indices, but have not been adequately addressed in previous index proposals. Our experiments show that the D(k) index is a more effective structural summary than previous static ones, as a result of its query load sensitivity. In addition, update operations on the D(k) index can be performed more efficiently than on its predecessors.",
"In semistructured databases there is no schema fixed in advance. To provide the benefits of a schema in such environments, we introduce DataGuides: concise and accurate structural summaries of semistructured databases. DataGuides serve as dynamic schemas, generated from the database; they are useful for browsing database structure, formulating queries, storing information such as statistics and sample values, and enabling query optimization. This paper presents the theoretical foundations of DataGuides along with an algorithm for their creation and an overview of incremental maintenance. We provide performance results based on our implementation of DataGuides in the Lore DBMS for semistructured data. We also describe the use of DataGuides in Lore, both in the user interface to enable structure browsing and query formulation, and as a means of guiding the query processor and optimizing query execution.",
""
]
} |
1510.04422 | 2241674424 | Data are essential for the experiments of relevant scientific publication recommendation methods but it is difficult to build ground truth data. A naturally promising solution is using publications that are referenced by researchers to build their ground truth data. Unfortunately, this approach has not been explored in the literature, so its applicability is still a gap in our knowledge. In this research, we systematically study this approach by theoretical and empirical analyses. In general, the results show that this approach is reasonable and has many advantages. However, the empirical analysis shows both positive and negative results. We conclude that, in some situations, this is a useful alternative approach toward overcoming data limitation. Based on this approach, we build and publish a dataset in computer science domain to help advancing other researches. | The emerging trend in relevant scientific publication recommendation research has raised an increasing attention to ground truth data @cite_8 @cite_0 @cite_4 . Many approaches have been proposed but all of them have flaws. Some researchers manually build ground truth data by surveying a group of researchers. With a high cost, these datasets are usually small, e.g., with only 28 researchers @cite_8 . Moreover, due to privacy issues, these datasets are not fully shared. Some researchers use datasets adapted from reference management systems, e.g., Mendeley and Docear. However, ground truth data in these datasets are not guaranteed to represent researchers' real research interests. Another approach to build dataset is crowdsourcing but it has not been used in relevant scientific publication recommendation area and requires complicated systems. A promising approach is building ground truth data based on reference data. However, to the best of our knowledge, no research has been done on assessing its applicability. In this paper, we systematically explore this approach. | {
"cite_N": [
"@cite_0",
"@cite_4",
"@cite_8"
],
"mid": [
"1511181855",
"2043778286",
"2062340319"
],
"abstract": [
"In this work, we propose a new approach for discovering various relationships among keywords over the scientific publications based on a Markov Chain model. It is an important problem since keywords are the basic elements for representing abstract objects such as documents, user profiles, topics and many other things. Our model is very effective since it combines four important factors in scientific publications: content, publicity, impact and randomness. Particularly, a recommendation system (called SciRecSys) has been presented to support users to efficiently find out relevant articles.",
"An online-browsing support system for research papers has been developed that extracts technical terms from a paper and presents links to useful pages such as those explaining the terms. A method to further use the extracted technical terms is proposed to recommend papers to a user that are related to the paper he or she is browsing. Specifically, the proposed method generates a bipartite graph consisting of papers retrieved by the extracted technical terms, which are called related papers, and technical terms appearing in these related papers. It then ranks the related papers using the HITS algorithm for analyzing the bipartite graph and recommends top-ranked papers to the user. The proposed method was compared with other recommendation methods in terms of effectiveness in an experiment.",
"We examine the effect of modeling a researcher's past works in recommending scholarly papers to the researcher. Our hypothesis is that an author's published works constitute a clean signal of the latent interests of a researcher. A key part of our model is to enhance the profile derived directly from past works with information coming from the past works' referenced papers as well as papers that cite the work. In our experiments, we differentiate between junior researchers that have only published one paper and senior researchers that have multiple publications. We show that filtering these sources of information is advantageous -- when we additionally prune noisy citations, referenced papers and publication history, we achieve statistically significant higher levels of recommendation accuracy."
]
} |
1510.04565 | 2285251378 | We address the problem of generating video features for action recognition. The spatial pyramid and its variants have been very popular feature models due to their success in balancing spatial location encoding and spatial invariance. Although it seems straightforward to extend spatial pyramid to the temporal domain (spatio-temporal pyramid), the large spatio-temporal diversity of unconstrained videos and the resulting significantly higher dimensional representations make it less appealing. This paper introduces the space-time extended descriptor, a simple but efficient alternative way to include the spatio-temporal location into the video features. Instead of only coding motion information and leaving the spatio-temporal location to be represented at the pooling stage, location information is used as part of the encoding step. This method is a much more effective and efficient location encoding method as compared to the fixed grid model because it avoids the danger of over committing to artificial boundaries and its dimension is relatively low. Experimental results on several benchmark datasets show that, despite its simplicity, this method achieves comparable or better results than spatio-temporal pyramid. | Video retrieval research is conducted in a diverse setting where emphasis includes low-level and high-level feature design @cite_17 @cite_12 , multi-modality fusion @cite_27 @cite_13 and multi-modality retrieval models @cite_19 . Here we focus our review on low-level feature design and encoding, which is related to our later experimental comparison. There has been a large amount of work in trying to build representations that keep spatial information of image patterns. Among them, spatial pyramid matching @cite_4 is the most popular one. 
However, building spatial pyramids requires dimensions that are orders of magnitude higher than the original spatial invariant representations and hence make it less suitable for high dimensional encoding methods such as Fisher vector @cite_6 and VLAD @cite_8 . Spatial Fisher vector @cite_2 and spatial augmentation @cite_0 @cite_15 provide more compact representations to encode spatial information and show similar performance as spatial pyramid methods. Few approaches consider encoding global temporal information into video representations. @cite_3 show that better action recognition performance can be achieved by dividing videos into two parts and encoding each one separately. @cite_29 try to use temporal pyramid for event detection. They use @math temporal segments, where @math incrementally increases from 1 to 10. | {
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_29",
"@cite_6",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_27",
"@cite_2",
"@cite_15",
"@cite_13",
"@cite_12",
"@cite_17"
],
"mid": [
"2162915993",
"2103924867",
"2032135266",
"",
"",
"1522233202",
"2013075750",
"2917011293",
"2032982774",
"",
"",
"",
""
],
"abstract": [
"This paper presents a method for recognizing scene categories based on approximate global geometric correspondence. This technique works by partitioning the image into increasingly fine sub-regions and computing histograms of local features found inside each sub-region. The resulting \"spatial pyramid\" is a simple and computationally efficient extension of an orderless bag-of-features image representation, and it shows significantly improved performance on challenging scene categorization tasks. Specifically, our proposed method exceeds the state of the art on the Caltech-101 database and achieves high accuracy on a large database of fifteen natural scene categories. The spatial pyramid framework also offers insights into the success of several recently proposed image descriptions, including Torralba's \"gist\" and Lowe's SIFT descriptors.",
"The objective of this paper is large scale object instance retrieval, given a query image. A starting point of such systems is feature detection and description, for example using SIFT. The focus of this paper, however, is towards very large scale retrieval where, due to storage requirements, very compact image descriptors are required and no information about the original SIFT descriptors can be accessed directly at run time. We start from VLAD, the state-of-the art compact descriptor introduced by for this purpose, and make three novel contributions: first, we show that a simple change to the normalization method significantly improves retrieval performance, second, we show that vocabulary adaptation can substantially alleviate problems caused when images are added to the dataset after initial vocabulary learning. These two methods set a new state-of-the-art over all benchmarks investigated here for both mid-dimensional (20k-D to 30k-D) and small (128-D) descriptors. Our third contribution is a multiple spatial VLAD representation, MultiVLAD, that allows the retrieval and localization of objects that only extend over a small part of an image (again without requiring use of the original image SIFT descriptors).",
"In this study, we present a system for video event classification that generates a temporal pyramid of static visual semantics using minimum-value, maximum-value, and average-value aggregation techniques. Kernel optimization and model subspace boosting are then applied to customize the pyramid for each event. SVM models are independently trained for each level in the pyramid using kernel selection according to 3-fold cross-validation. Kernels that both enforce static temporal order and permit temporal alignment are evaluated. Model subspace boosting is used to select the best combination of pyramid levels and aggregation techniques for each event. The NIST TRECVID Multimedia Event Detection (MED) 2011 dataset was used for evaluation. Results demonstrate that kernel optimizations using both temporally static and dynamic kernels together achieves better performance than any one particular method alone. In addition, model sub-space boosting reduces the size of the model by 80 , while maintaining 96 of the performance gain.",
"",
"",
"The spatial pyramid and its variants have been among the most popular and successful models for object recognition. In these models, local visual features are coded across elements of a visual vocabulary, and then these codes are pooled into histograms at several spatial granularities. We introduce spatially local coding, an alternative way to include spatial information in the image model. Instead of only coding visual appearance and leaving the spatial coherence to be represented by the pooling stage, we include location as part of the coding step. This is a more flexible spatial representation as compared to the fixed grids used in the spatial pyramid models and we can use a simple, whole-image region during the pooling stage. We demonstrate that combining features with multiple levels of spatial locality performs better than using just a single level. Our model performs better than all previous single-feature methods when tested on the Caltech 101 and 256 object recognition datasets.",
"We propose a novel method MultiModal Pseudo Relevance Feedback (MMPRF) for event search in video, which requires no search examples from the user. Pseudo Relevance Feedback has shown great potential in retrieval tasks, but previous works are limited to unimodal tasks with only a single ranked list. To tackle the event search task which is inherently multimodal, our proposed MMPRF takes advantage of multiple modalities and multiple ranked lists to enhance event search performance in a principled way. The approach is unique in that it leverages not only semantic features, but also non-semantic low-level features for event search in the absence of training data. Evaluated on the TRECVID MEDTest dataset, the approach improves the baseline by up to 158% in terms of the mean average precision. It also significantly contributes to CMU Team's final submission in TRECVID-13 Multimedia Event Detection.",
"In this paper we describe our participation in the NIST TRECVID-2003 evaluation. We participated in four tasks of the benchmark including shot boundary detection, high-level feature detection, story segmentation, and search. We describe the different runs we submitted for each track and discuss our performance.",
"Several state-of-the-art image representations consist in averaging local statistics computed from patch-level descriptors. It has been shown by that such average statistics suffer from two sources of variance. The first one comes from the fact that a finite set of local statistics are averaged. The second one is due to the variation in the proportion of object-dependent information between different images of the same class. For the problem of object classification, these sources of variance affect negatively the accuracy since they increase the overlap between class-conditional probabilities. Our goal is to include information about the spatial layout of images in image signatures based on average statistics. We show that the traditional approach to including the spatial layout - the spatial pyramid (SP) - increases the first source of variance while only weakly reducing the second one. We therefore propose two complementary approaches to account for the spatial layout which are compatible with our goal of variance reduction. The first one models the spatial layout in an image-independent manner (as is the case of the SP) while the second one adapts to the image content. A significant benefit of these approaches with respect to the SP is that they do not incur an increase of the image signature dimensionality. We show on PASCAL VOC 2007, 2008 and 2009 the benefits of our approach.",
"",
"",
"",
""
]
} |
1510.04445 | 2950275949 | In this paper we evaluate the quality of the activation layers of a convolutional neural network (CNN) for the generation of object proposals. We generate hypotheses in a sliding-window fashion over different activation layers and show that the final convolutional layers can find the object of interest with high recall but poor localization due to the coarseness of the feature maps. Instead, the first layers of the network can better localize the object of interest but with a reduced recall. Based on this observation we design a method for proposing object locations that is based on CNN features and that combines the best of both worlds. We build an inverse cascade that, going from the final to the initial convolutional layers of the CNN, selects the most promising object locations and refines their boxes in a coarse-to-fine manner. The method is efficient, because i) it uses the same features extracted for detection, ii) it aggregates features using integral images, and iii) it avoids a dense evaluation of the proposals due to the inverse coarse-to-fine cascade. The method is also accurate; it outperforms most of the previously proposed object proposal approaches and when plugged into a CNN-based detector produces state-of-the-art detection performance. | An alternative approach to sliding-window methods is segmentation-based algorithms. This approach applies multiple levels of segmentation and then merges the generated segments in order to generate object proposals @cite_18 @cite_5 @cite_19 @cite_8 . More specifically, selective search @cite_8 hierarchically aggregates multiple segmentations in a bottom-up greedy manner without involving any learning procedure, but based on low level cues, such as color and texture. Multiscale Combinatorial Grouping (MCG) @cite_18 extracts multiscale segmentations and merges them by using the edge strength in order to generate object hypotheses. 
Carreira et al. @cite_5 propose to segment the object of interest based on graphcut. It produces segments from randomly generated seeds. As in selective search, each segment represents a proposal bounding box. Randomized Prim's @cite_19 uses the same segmentation strategy as selective search. However, instead of merging the segments in a greedy manner, it learns the probabilities for merging, and uses those to speed up the procedure. Geodesic object proposals @cite_10 are based on classifiers that place seeds for a geodesic distance transform on an over-segmented image. | {
"cite_N": [
"@cite_18",
"@cite_8",
"@cite_19",
"@cite_5",
"@cite_10"
],
"mid": [
"1991367009",
"1577168949",
"2121660792",
"2046382188",
"261873710"
],
"abstract": [
"We propose a unified approach for bottom-up hierarchical image segmentation and object candidate generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object candidates by exploring efficiently their combinatorial space. We conduct extensive experiments on both the BSDS500 and on the PASCAL 2012 segmentation datasets, showing that MCG produces state-of-the-art contours, hierarchical regions and object candidates.",
"",
"Generic object detection is the challenging task of proposing windows that localize all the objects in an image, regardless of their classes. Such detectors have recently been shown to benefit many applications such as speeding-up class-specific object detection, weakly supervised learning of object detectors and object discovery. In this paper, we introduce a novel and very efficient method for generic object detection based on a randomized version of Prim's algorithm. Using the connectivity graph of an image's super pixels, with weights modelling the probability that neighbouring super pixels belong to the same object, the algorithm generates random partial spanning trees with large expected sum of edge weights. Object localizations are proposed as bounding-boxes of those partial trees. Our method has several benefits compared to the state-of-the-art. Thanks to the efficiency of Prim's algorithm, it samples proposals very quickly: 1000 proposals are obtained in about 0.7s. With proposals bound to super pixel boundaries yet diversified by randomization, it yields very high detection rates and windows that tightly fit objects. In extensive experiments on the challenging PASCAL VOC 2007 and 2012 and SUN2012 benchmark datasets, we show that our method improves over state-of-the-art competitors for a wide range of evaluation scenarios.",
"We present a novel framework to generate and rank plausible hypotheses for the spatial extent of objects in images using bottom-up computational processes and mid-level selection cues. The object hypotheses are represented as figure-ground segmentations, and are extracted automatically, without prior knowledge of the properties of individual object classes, by solving a sequence of Constrained Parametric Min-Cut problems (CPMC) on a regular image grid. In a subsequent step, we learn to rank the corresponding segments by training a continuous model to predict how likely they are to exhibit real-world regularities (expressed as putative overlap with ground truth) based on their mid-level region properties, then diversify the estimated overlap score using maximum marginal relevance measures. We show that this algorithm significantly outperforms the state of the art for low-level segmentation in the VOC 2009 and 2010 data sets. In our companion papers [1], [2], we show that the algorithm can be used, successfully, in a segmentation-based visual object category recognition pipeline. This architecture ranked first in the VOC2009 and VOC2010 image segmentation and labeling challenges.",
"We present an approach for identifying a set of candidate objects in a given image. This set of candidates can be used for object recognition, segmentation, and other object-based image parsing tasks. To generate the proposals, we identify critical level sets in geodesic distance transforms computed for seeds placed in the image. The seeds are placed by specially trained classifiers that are optimized to discover objects. Experiments demonstrate that the presented approach achieves significantly higher accuracy than alternative approaches, at a fraction of the computational cost."
]
} |
1510.04501 | 2949175204 | This paper presents an approach for metadata reconciliation, curation and linking for Open Governmental Data Portals (ODPs). ODPs have been lately the standard solution for governments willing to put their public data available for the society. Portal managers use several types of metadata to organize the datasets, one of the most important ones being the tags. However, the tagging process is subject to many problems, such as synonyms, ambiguity or incoherence, among others. As our empirical analysis of ODPs shows, these issues are currently prevalent in most ODPs and effectively hinder the reuse of Open Data. In order to address these problems, we develop and implement an approach for tag reconciliation in Open Data Portals, encompassing local actions related to individual portals, and global actions for adding a semantic metadata layer above individual portals. The local part aims to enhance the quality of tags in a single portal, and the global part is meant to interlink ODPs by establishing relations between tags. | In relation to the metrics for tagging environments, some related ideas could be found in the literature. For example, @cite_17 presents a framework to evaluate the quality of ODPs. Among the applied quality metrics, three of them -- , and -- are related to metadata keys, which tags are part of. establishes which metadata keys are actually used in a portal; evaluates the presence of non empty values; and checks if metadata adequately describes the data. However, this metric is not applied for tags. | {
"cite_N": [
"@cite_17"
],
"mid": [
"1937352621"
],
"abstract": [
"Despite the enthusiasm caused by the availability of a steadily increasing amount of openly available, structured data, first critical voices appear addressing the emerging issue of low quality in the metadata and data sources of Open Data portals which is a serious risk that could disrupt the Open Data project. However, there exist no comprehensive reports about the actual quality of Open Data portals. In this work, we present our efforts to monitor and assess the quality of 82 active Open Data portals, powered by organisations across 35 different countries. We discuss our quality metrics and report comprehensive findings by analysing the data and the evolution of the portals since September 2014. Our results include findings about a steady growth of information, a high heterogeneity across the portals for various aspects and also insights on openness, contactability and the availability of metadata."
]
} |
1510.04501 | 2949175204 | This paper presents an approach for metadata reconciliation, curation and linking for Open Governmental Data Portals (ODPs). ODPs have been lately the standard solution for governments willing to put their public data available for the society. Portal managers use several types of metadata to organize the datasets, one of the most important ones being the tags. However, the tagging process is subject to many problems, such as synonyms, ambiguity or incoherence, among others. As our empirical analysis of ODPs shows, these issues are currently prevalent in most ODPs and effectively hinder the reuse of Open Data. In order to address these problems, we develop and implement an approach for tag reconciliation in Open Data Portals, encompassing local actions related to individual portals, and global actions for adding a semantic metadata layer above individual portals. The local part aims to enhance the quality of tags in a single portal, and the global part is meant to interlink ODPs by establishing relations between tags. | Laniado and Mika did a similar analysis over hashtags on Twitter @cite_2 . Their work focuses on answering if Twitter hashtags constitute for the semantic web. To achieve this, four metrics are used: frequency of hashtags; specificity, which is the deviation from the use of them without being a hashtag; consistency; and stability over time. | {
"cite_N": [
"@cite_2"
],
"mid": [
"1569089799"
],
"abstract": [
"Twitter enjoys enormous popularity as a micro-blogging service largely due to its simplicity. On the downside, there is little organization to the Twitterverse and making sense of the stream of messages passing through the system has become a significant challenge for everyone involved. As a solution, Twitter users have adopted the convention of adding a hash at the beginning of a word to turn it into a hashtag. Hashtags have become the means in Twitter to create threads of conversation and to build communities around particular interests. In this paper, we take a first look at whether hashtags behave as strong identifiers, and thus whether they could serve as identifiers for the Semantic Web. We introduce some metrics that can help identify hashtags that show the desirable characteristics of strong identifiers. We look at the various ways in which hashtags are used, and show through evaluation that our metrics can be applied to detect hashtags that represent real world entities."
]
} |
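The first two hashtag metrics above can be illustrated with a short sketch. Note that the specificity below is a simplified proxy (the fraction of a term's occurrences in which it appears as a hashtag rather than as a plain word), not the exact definition used in @cite_2 :

```python
from collections import Counter

def hashtag_metrics(tweets):
    """Frequency of each hashtag, plus a simple specificity proxy:
    the fraction of a term's occurrences used as a hashtag."""
    tag_counts, word_counts = Counter(), Counter()
    for tweet in tweets:
        for token in tweet.lower().split():
            if token.startswith('#') and len(token) > 1:
                tag_counts[token[1:]] += 1   # count '#word' as the tag 'word'
            else:
                word_counts[token] += 1      # count plain-text usage
    return {
        tag: {'frequency': n, 'specificity': n / (n + word_counts[tag])}
        for tag, n in tag_counts.items()
    }
```

A term that is almost always written with the hash (specificity close to 1) behaves more like a strong identifier than one that is mostly used as an ordinary word.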
1510.04501 | 2949175204 | This paper presents an approach for metadata reconciliation, curation and linking for Open Governamental Data Portals (ODPs). ODPs have been lately the standard solution for governments willing to put their public data available for the society. Portal managers use several types of metadata to organize the datasets, one of the most important ones being the tags. However, the tagging process is subject to many problems, such as synonyms, ambiguity or incoherence, among others. As our empiric analysis of ODPs shows, these issues are currently prevalent in most ODPs and effectively hinders the reuse of Open Data. In order to address these problems, we develop and implement an approach for tag reconciliation in Open Data Portals, encompassing local actions related to individual portals, and global actions for adding a semantic metadata layer above individual portals. The local part aims to enhance the quality of tags in a single portal, and the global part is meant to interlink ODPs by establishing relations between tags. | The problem of semantic lifting in ODPs was tackled by @cite_10 @cite_7 . In @cite_3 , a strategy for lifting datasets in ODPs to the Linked Data cloud is presented. In all these works, however, the semantic lifting refers to the datasets themselves, and not to their metadata. | {
"cite_N": [
"@cite_3",
"@cite_10",
"@cite_7"
],
"mid": [
"30797004",
"2029007784",
""
],
"abstract": [
"Recently, a large number of open data repositories, catalogs and portals have been emerging in the scientific and government realms. In this chapter, we characterise this newly emerging class of information systems. We describe the key functionality of open data portals, present a conceptual model and showcase the pan-European data portal PublicData.eu as a prominent example. Using examples from Serbia and Poland, we present an approach for lifting the often semantically shallow datasets registered at such data portals to Linked Data in order to make data portals the backbone of a distributed global data warehouse for our information society on the Web.",
"Governments and public administrations started recently to publish large amounts of structured data on the Web, mostly in the form of tabular data such as CSV files or Excel sheets. Various tools and projects have been launched aiming at facilitating the lifting of tabular data to reach semantically structured and linked data. However, none of these tools supported a truly incremental, pay-as-you-go data publication and mapping strategy, which enables effort sharing between data owners, community experts and consumers. In this article, we present an approach for enabling the user-driven semantic mapping of large amounts tabular data. We devise a simple mapping language for tabular data, which is easy to understand even for casual users, but expressive enough to cover the vast majority of potential tabular mappings use cases. We outline a formal approach for mapping tabular data to RDF. Default mappings are automatically created and can be revised by the community using a semantic wiki. The mappings are executed using a sophisticated streaming RDB2RDF conversion. We report about the deployment of our approach at the Pan-European data portal PublicData.eu, where we transformed and enriched almost 10,000 datasets accounting for 7.3 billion triples.",
""
]
} |
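The "default mapping" idea described in the cited abstracts, where each row of a table becomes a resource and each remaining column a property, can be sketched as follows. The URI scheme and column names here are hypothetical illustrations; the cited works use richer, user-revisable mapping languages:

```python
def rows_to_triples(rows, base_uri, subject_col):
    """Default tabular-to-RDF mapping: each row becomes a resource and
    each remaining column a predicate, with the cell value as a literal."""
    triples = []
    for row in rows:
        subject = f"{base_uri}/resource/{row[subject_col]}"
        for col, value in row.items():
            if col == subject_col or value in (None, ""):
                continue  # skip the key column and empty cells
            triples.append((subject, f"{base_uri}/property/{col}", str(value)))
    return triples
```

Such automatically generated default mappings are only a starting point: in the pay-as-you-go approach described above, they are later refined by the community.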
1510.04074 | 2952462748 | Assistive solutions for a better shopping experience can improve the quality of life of people, in particular also of visually impaired shoppers. We present a system that visually recognizes the fine-grained product classes of items on a shopping list, in shelves images taken with a smartphone in a grocery store. Our system consists of three components: (a) We automatically recognize useful text on product packaging, e.g., product name and brand, and build a mapping of words to product classes based on the large-scale GroceryProducts dataset. When the user populates the shopping list, we automatically infer the product class of each entered word. (b) We perform fine-grained product class recognition when the user is facing a shelf. We discover discriminative patches on product packaging to differentiate between visually similar product classes and to increase the robustness against continuous changes in product design. (c) We continuously improve the recognition accuracy through active learning. Our experiments show the robustness of the proposed method against cross-domain challenges, and the scalability to an increasing number of products with minimal re-training. | Closely related to our system are approaches that focus on grocery product recognition @cite_22 @cite_29 @cite_15 . A grocery product dataset of 120 product instances is proposed in @cite_17 . Each product is represented by an average of 5.6 training images downloaded from the web and test images are manually segmented from video streams of supermarket shelves. Each test image contains a single segmented product. A baseline approach of SIFT @cite_16 , color histogram, and boosted Haar-like @cite_23 feature matching is performed. | {
"cite_N": [
"@cite_22",
"@cite_29",
"@cite_23",
"@cite_15",
"@cite_16",
"@cite_17"
],
"mid": [
"",
"",
"2164598857",
"2144556792",
"2151103935",
"2127095579"
],
"abstract": [
"",
"",
"This paper describes a machine learning approach for visual object detection which is capable of processing images extremely rapidly and achieving high detection rates. This work is distinguished by three key contributions. The first is the introduction of a new image representation called the \"integral image\" which allows the features used by our detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features from a larger set and yields extremely efficient classifiers. The third contribution is a method for combining increasingly more complex classifiers in a \"cascade\" which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions. The cascade can be viewed as an object specific focus-of-attention mechanism which unlike previous approaches provides statistical guarantees that discarded regions are unlikely to contain the object of interest. In the domain of face detection the system yields detection rates comparable to the best previous systems. Used in real-time applications, the detector runs at 15 frames per second without resorting to image differencing or skin color detection.",
"We present a study on grocery detection using our object detection system, ShelfScanner, which seeks to allow a visually impaired user to shop at a grocery store without additional human assistance. ShelfScanner allows online detection of items on a shopping list, in video streams in which some or all items could appear simultaneously. To deal with the scale of the object detection task, the system leverages the approximate planarity of grocery store shelves to build a mosaic in real time using an optical flow algorithm. The system is then free to use any object detection algorithm without incurring a loss of data due to processing time. For purposes of speed we use a multiclass naive-Bayes classifier inspired by NIMBLE, which is trained on enhanced SURF descriptors extracted from images in the GroZi-120 dataset. It is then used to compute per-class probability distributions on video keypoints for final classification. Our results suggest ShelfScanner could be useful in cases where high-quality training data is available.",
"This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.",
"The problem of using pictures of objects captured under ideal imaging conditions (here referred to as in vitro) to recognize objects in natural environments (in situ) is an emerging area of interest in computer vision and pattern recognition. Examples of tasks in this vein include assistive vision systems for the blind and object recognition for mobile robots; the proliferation of image databases on the web is bound to lead to more examples in the near future. Despite its importance, there is still a need for a freely available database to facilitate study of this kind of training testing dichotomy. In this work one of our contributions is a new multimedia database of 120 grocery products, GroZi-120. For every product, two different recordings are available: in vitro images extracted from the web, and in situ images extracted from camcorder video collected inside a grocery store. As an additional contribution, we present the results of applying three commonly used object recognition detection algorithms (color histogram matching, SIFT matching, and boosted Haar-like features) to the dataset. Finally, we analyze the successes and failures of these algorithms against product type and imaging conditions, both in terms of recognition rate and localization accuracy, in order to suggest ways forward for further research in this domain."
]
} |
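Of the three baseline matchers used on the GroZi-120 dataset, the color-histogram one is simple enough to sketch. This is an illustrative toy version (per-channel histograms compared by histogram intersection), not the implementation of @cite_17 :

```python
import numpy as np

def color_histogram(img, bins=8):
    """Concatenated per-channel histogram of an HxWx3 uint8 image, L1-normalized."""
    h = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0] for c in range(3)]
    h = np.concatenate(h).astype(float)
    return h / h.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical normalized histograms."""
    return float(np.minimum(h1, h2).sum())
```

A test crop would then be assigned the product class whose training histogram gives the largest intersection; SIFT and Haar-like features add the spatial and texture cues that plain color matching lacks.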
1510.04210 | 2253227230 | This work proposes the use of Information Theory for the characterization of vehicles behavior through their velocities. Three public data sets were used: (i) Mobile Century data set collected on Highway I-880, near Union City, California; (ii) Borlange GPS data set collected in the Swedish city of Borlange; and (iii) Beijing taxicabs data set collected in Beijing, China, where each vehicle speed is stored as a time series. The Bandt-Pompe methodology combined with the Complexity-Entropy plane were used to identify different regimes and behaviors. The global velocity is compatible with a correlated noise with f − k Power Spectrum with k ≥ 0. With this we identify traffic behaviors as, for instance, random velocities (k ≃ 0) when there is congestion, and more correlated velocities (k ≃ 3) in the presence of free traffic flow. Copyright EDP Sciences, SIF, Springer-Verlag Berlin Heidelberg 2015 | Many works have employed the Complexity-Entropy plane for the characterization of time series in a diversity of applications: stock market evolution @cite_10 , brain behavior @cite_10 , climate change @cite_16 , and the variation of vehicle velocities @cite_18 . | {
"cite_N": [
"@cite_18",
"@cite_16",
"@cite_10"
],
"mid": [
"2159718545",
"1989998612",
"2157881370"
],
"abstract": [
"In this paper, the complexity-entropy causality plane approach is applied to analyze traffic data. The R S analysis and detrended fluctuation analysis (DFA) methods are also used to compare with this approach. Moreover, based on the concept of entropy, we propose to use permutation to calculate the probability distribution of the time series when applying the representation plane. The empirical results indicate that traffic dynamics exhibit different levels of traffic congestion and demonstrate that this statistical method can give a more refined classification of traffic states than the R S analysis and DFA.",
"A new methodology based on information theory is used to explore the evolution of the surface air temperature climate network over the Tropical Pacific region. Topological changes over the period 1948–2009 are investigated using windows of one year duration. Alternating states of lower higher efficiency in information transfer are consistently captured during the opposing phases of ENSO (i.e., El Nino and La Nina years). This cyclic information transfer behavior reflects a higher climatic stability for La Nina years which is in good agreement with current observations. In addition, after the 1976 77 climate shift, a change towards more frequent conditions of decreased information transfer efficiency is detected. Copyright EDP Sciences, SIF, Springer-Verlag Berlin Heidelberg 2012",
"Entropy is a powerful tool for the analysis of time series, as it allows describing the probability distributions of the possible state of a system, and therefore the information encoded in it. Nevertheless, important information may be codified also in the temporal dynamics, an aspect which is not usually taken into account. The idea of calculating entropy based on permutation patterns (that is, permutations defined by the order relations among values of a time series) has received a lot of attention in the last years, especially for the understanding of complex and chaotic systems. Permutation entropy directly accounts for the temporal information contained in the time series; furthermore, it has the quality of simplicity, robustness and very low computational cost. To celebrate the tenth anniversary of the original work, here we analyze the theoretical foundations of the permutation entropy, as well as the main recent applications to the analysis of economical markets and to the understanding of biomedical systems."
]
} |
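The two coordinates of the Complexity-Entropy plane referenced above can be sketched in a few lines. This is an illustrative sketch, not any of the cited implementations: it estimates the Bandt-Pompe ordinal-pattern distribution, its normalized permutation entropy, and a Jensen-Shannon statistical complexity, where the normalization constant is assumed to take the usual form for the divergence from the uniform distribution:

```python
from itertools import permutations
from math import log

def ordinal_pattern_distribution(series, d=3):
    """Bandt-Pompe: slide a window of length d and count the rank-order
    (ordinal) pattern of each window; ties are broken by position."""
    counts = {p: 0 for p in permutations(range(d))}
    for i in range(len(series) - d + 1):
        window = series[i:i + d]
        pattern = tuple(sorted(range(d), key=lambda k: window[k]))
        counts[pattern] += 1
    total = sum(counts.values())
    return [c / total for c in counts.values()]

def normalized_entropy(p):
    """Shannon entropy of the pattern distribution, normalized by log(d!)."""
    h = -sum(q * log(q) for q in p if q > 0)
    return h / log(len(p))

def statistical_complexity(p):
    """Jensen-Shannon complexity C = Q_J * H, the vertical coordinate
    of the Complexity-Entropy plane."""
    n = len(p)
    u = [1.0 / n] * n                              # uniform reference
    m = [(a + b) / 2 for a, b in zip(p, u)]
    entropy = lambda q: -sum(x * log(x) for x in q if x > 0)
    jsd = entropy(m) - (entropy(p) + entropy(u)) / 2
    # assumed normalization so that 0 <= Q_J <= 1
    q0 = -2 / (((n + 1) / n) * log(n + 1) - 2 * log(2 * n) + log(n))
    return q0 * jsd * normalized_entropy(p)
```

A monotonic series lands at the (0, 0) corner of the plane, while white noise lands near (1, 0); correlated noises occupy the interior, which is what allows the traffic regimes above to be told apart.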
1510.04210 | 2253227230 | This work proposes the use of Information Theory for the characterization of vehicles behavior through their velocities. Three public data sets were used: (i) Mobile Century data set collected on Highway I-880, near Union City, California; (ii) Borlange GPS data set collected in the Swedish city of Borlange; and (iii) Beijing taxicabs data set collected in Beijing, China, where each vehicle speed is stored as a time series. The Bandt-Pompe methodology combined with the Complexity-Entropy plane were used to identify different regimes and behaviors. The global velocity is compatible with a correlated noise with f − k Power Spectrum with k ≥ 0. With this we identify traffic behaviors as, for instance, random velocities (k ≃ 0) when there is congestion, and more correlated velocities (k ≃ 3) in the presence of free traffic flow. Copyright EDP Sciences, SIF, Springer-Verlag Berlin Heidelberg 2015 | @cite_18 analyzed the traffic congestion index (TCI) of Beijing, which is defined by the Beijing Municipal Commission. The TCI is computed using information gathered by specialized devices (in vehicles, traffic lights etc.) all around the city, and indicates whether the traffic state is congested or not. Using the Bandt-Pompe methodology, the authors showed that the Complexity-Entropy plane classifies the levels of traffic congestion, with the TCI as reference, better than two other methods: the @math analysis @cite_1 and the detrended fluctuation analysis (DFA) method @cite_29 . The former is a statistical method used to evaluate the nature and magnitude of the variability of the data over time, while the latter determines the scaling behavior of the data in the presence of trends, regardless of their origin or shape. Differently from our approach, Ref. @cite_18 does not attempt to characterize the underlying dynamics of the process. | {
"cite_N": [
"@cite_18",
"@cite_29",
"@cite_1"
],
"mid": [
"2159718545",
"2221831393",
""
],
"abstract": [
"In this paper, the complexity-entropy causality plane approach is applied to analyze traffic data. The R S analysis and detrended fluctuation analysis (DFA) methods are also used to compare with this approach. Moreover, based on the concept of entropy, we propose to use permutation to calculate the probability distribution of the time series when applying the representation plane. The empirical results indicate that traffic dynamics exhibit different levels of traffic congestion and demonstrate that this statistical method can give a more refined classification of traffic states than the R S analysis and DFA.",
"The healthy heartbeat is traditionally thought to be regulated according to the classical principle of homeostasis whereby physiologic systems operate to reduce variability and achieve an equilibrium‐like state [Physiol. Rev. 9, 399–431 (1929)]. However, recent studies [Phys. Rev. Lett. 70, 1343–1346 (1993); Fractals in Biology and Medicine (Birkhauser‐Verlag, Basel, 1994), pp. 55–65] reveal that under normal conditions, beat‐to‐beat fluctuations in heart rate display the kind of long‐range correlations typically exhibited by dynamical systems far from equilibrium [Phys. Rev. Lett. 59, 381–384 (1987)]. In contrast, heart rate time series from patients with severe congestive heart failure show a breakdown of this long‐range correlation behavior. We describe a new method—detrended fluctuation analysis (DFA)—for quantifying this correlation property in non‐stationary physiological time series. Application of this technique shows evidence for a crossover phenomenon associated with a change in short and long‐range scaling exponents. This method may be of use in distinguishing healthy from pathologic data sets based on differences in these scaling properties.",
""
]
} |
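For reference, the DFA method that @cite_18 compares against can be sketched as follows. This is a minimal first-order DFA illustration (linear detrending in each box), not the implementation used in the cited work:

```python
import numpy as np

def dfa_exponent(series, scales=(4, 8, 16, 32, 64)):
    """Detrended fluctuation analysis: slope of log F(n) versus log n."""
    x = np.asarray(series, dtype=float)
    y = np.cumsum(x - x.mean())                       # integrated profile
    flucts = []
    for n in scales:
        boxes = len(y) // n
        f2 = []
        for b in range(boxes):
            seg = y[b * n:(b + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear trend
            f2.append(np.mean((seg - trend) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))            # RMS fluctuation F(n)
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]
```

The exponent separates uncorrelated data (alpha near 0.5) from long-range correlated or integrated signals (alpha near 1 or above), which is the crossover behavior DFA was introduced to quantify.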
1510.04210 | 2253227230 | This work proposes the use of Information Theory for the characterization of vehicles behavior through their velocities. Three public data sets were used: (i) Mobile Century data set collected on Highway I-880, near Union City, California; (ii) Borlange GPS data set collected in the Swedish city of Borlange; and (iii) Beijing taxicabs data set collected in Beijing, China, where each vehicle speed is stored as a time series. The Bandt-Pompe methodology combined with the Complexity-Entropy plane were used to identify different regimes and behaviors. The global velocity is compatible with a correlated noise with f − k Power Spectrum with k ≥ 0. With this we identify traffic behaviors as, for instance, random velocities (k ≃ 0) when there is congestion, and more correlated velocities (k ≃ 3) in the presence of free traffic flow. Copyright EDP Sciences, SIF, Springer-Verlag Berlin Heidelberg 2015 | Other approaches have also been used to characterize vehicle velocities: @cite_5 use the fractal dimension to analyze vehicle velocities in the data set collected by the Highway Performance Measurement Project in Beijing, China. The data were collected using road sensors which sampled the velocities in intervals of 20 . Results showed that the traffic, in the scenarios considered, has multifractal characteristics and that the degree of fractality increases with traffic congestion. Moreover, the local Hölder exponent (or roughness) @cite_36 was used to predict heavy traffic. This exponent measures the local rate of fractality. | {
"cite_N": [
"@cite_36",
"@cite_5"
],
"mid": [
"2029512392",
"1995899660"
],
"abstract": [
"In this paper we investigate from both a theoretical and a practical point of view the following problem: Let s be a function from [0;1] to [0;1] . Under which conditions does there exist a continuous function f from [0;1] to R such that the regularity of f at x , measured in terms of Holder exponent, is exactly s(x) , for all x ∈ [0;1] ?",
"In this paper, we applied multifractal modeling techniques to analyze the traffic data collected from the Beijing Yuquanying. The results indicated that multifractal characteristics obviously exist in the traffic system; the degree of fractality of these traffic data tends to increase as the traffic system becomes congested; the Holder exponent that measures the local rate of fractality may be used as indicators to predict the presence of the traffic congestion."
]
} |
1510.04210 | 2253227230 | This work proposes the use of Information Theory for the characterization of vehicles behavior through their velocities. Three public data sets were used: (i) Mobile Century data set collected on Highway I-880, near Union City, California; (ii) Borlange GPS data set collected in the Swedish city of Borlange; and (iii) Beijing taxicabs data set collected in Beijing, China, where each vehicle speed is stored as a time series. The Bandt-Pompe methodology combined with the Complexity-Entropy plane were used to identify different regimes and behaviors. The global velocity is compatible with a correlated noise with f − k Power Spectrum with k ≥ 0. With this we identify traffic behaviors as, for instance, random velocities (k ≃ 0) when there is congestion, and more correlated velocities (k ≃ 3) in the presence of free traffic flow. Copyright EDP Sciences, SIF, Springer-Verlag Berlin Heidelberg 2015 | @cite_14 compared human and vehicle mobility patterns, trying to apply models of the former to the latter. The results were twofold: (i) known models of human mobility can be refined to treat vehicle mobility; and (ii) GPS data are an adequate representation of vehicle mobility. The data set used was collected in Italy with information about @math vehicles and approximately @math million trajectories. | {
"cite_N": [
"@cite_14"
],
"mid": [
"1990914926"
],
"abstract": [
"Are the patterns of car travel different from those of general human mobility? Based on a unique dataset consisting of the GPS trajectories of 10 million travels accomplished by 150,000 cars in Italy, we investigate how known mobility models apply to car travels, and illustrate novel analytical findings. We also assess to what extent the sample in our dataset is representative of the overall car mobility, and discover how to build an extremely accurate model that, given our GPS data, estimates the real traffic values as measured by road sensors."
]
} |
1510.04210 | 2253227230 | This work proposes the use of Information Theory for the characterization of vehicles behavior through their velocities. Three public data sets were used: (i) Mobile Century data set collected on Highway I-880, near Union City, California; (ii) Borlange GPS data set collected in the Swedish city of Borlange; and (iii) Beijing taxicabs data set collected in Beijing, China, where each vehicle speed is stored as a time series. The Bandt-Pompe methodology combined with the Complexity-Entropy plane were used to identify different regimes and behaviors. The global velocity is compatible with a correlated noise with f − k Power Spectrum with k ≥ 0. With this we identify traffic behaviors as, for instance, random velocities (k ≃ 0) when there is congestion, and more correlated velocities (k ≃ 3) in the presence of free traffic flow. Copyright EDP Sciences, SIF, Springer-Verlag Berlin Heidelberg 2015 | @cite_35 characterized time series of vehicle velocities using Complex Networks. They transformed the time series into a new series based on the reconstruction of the phase space. Then, complex networks of traffic flow were built and analyzed in terms of degree distribution, density, and clustering coefficient. The results showed that the networks present community structures, i.e., the nodes are grouped such that each group is densely connected internally. The data set used was collected in the city of Harbin, China, during one complete day with sampling at 2 , resulting in @math samples. | {
"cite_N": [
"@cite_35"
],
"mid": [
"2058532765"
],
"abstract": [
"A complex network is a powerful tool to research complex systems, traffic flow being one of the most complex systems. In this paper, we use complex network theory to study traffic time series, which provide a new insight into traffic flow analysis. Firstly, the phase space, which describes the evolution of the behavior of a nonlinear system, is reconstructed using the delay embedding theorem. Secondly, in order to convert the new time series into a complex network, the critical threshold is estimated by the characteristics of a complex network, which include degree distribution, cumulative degree distribution, and density and clustering coefficients. We find that the degree distribution of associated complex network can be fitted with a Gaussian function, and the cumulative degree distribution can be fitted with an exponential function. Density and clustering coefficients are then researched to reflect the change of connections between nodes in complex network, and the results are in accordance with the observation of the plot of an adjacent matrix. Consequently, based on complex network analysis, the proper range of the critical threshold is determined. Finally, to mine the nodes with the closest relations in a complex network, the modularity is calculated with the increase of critical threshold and the community structure is detected according to the optimal modularity. The work in our paper provides a new way to understand the dynamics of traffic time series."
]
} |
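The phase-space reconstruction step described above can be illustrated with a minimal sketch: a delay embedding followed by an epsilon-threshold (recurrence-style) network, from which degree distribution, density and clustering coefficient can then be read off. The embedding dimension, delay and threshold below are arbitrary illustrative choices, not those of @cite_35 :

```python
import numpy as np

def delay_embed(series, dim=3, tau=2):
    """Phase-space reconstruction: x_t = (s_t, s_{t+tau}, ..., s_{t+(dim-1)tau})."""
    s = np.asarray(series, dtype=float)
    n = len(s) - (dim - 1) * tau
    return np.column_stack([s[i * tau:i * tau + n] for i in range(dim)])

def recurrence_network(series, dim=3, tau=2, eps=0.3):
    """Adjacency matrix linking two reconstructed states whenever their
    Euclidean distance falls below the critical threshold eps."""
    states = delay_embed(series, dim, tau)
    dist = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    adj = (dist < eps).astype(int)
    np.fill_diagonal(adj, 0)       # no self-loops
    return adj
```

Choosing the threshold is the delicate part: the cited work selects it by monitoring how degree distribution, density and clustering change as eps varies.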
1510.03903 | 2798776840 | We study the fair division of a continuous resource, such as a land-estate or a time-interval, among pre-specified groups of agents, such as families. Each family is given a piece of the resource and this piece is used simultaneously by all family members, while different members may have different value functions. Three ways to assess the fairness of such a division are examined. (a) *Average fairness* means that each family's share is fair according to the "family value function", defined as the arithmetic mean of the value functions of the family members. (b) *Unanimous fairness* means that all members in all families feel that their family received a fair share according to their personal value function. (c) *Democratic fairness* means that in each family, at least half the members feel that their family's share is fair. We compare these criteria based on the number of connected components in the resulting division, and based on their compatibility with Pareto-efficiency. | The notion of fairness between groups has been studied empirically in the context of the well-known Ultimatum Game. In the standard version of this game, an individual agent (the proposer) suggests a division of a sum of money to another individual (the responder), who can either approve or reject it. In the group version, either the proposer or the responder or both are groups of agents. The groups have to decide together what division to propose and whether to accept a proposed division. Experiments by @cite_15 @cite_6 show that, in general, groups tend to act more rationally by proposing and accepting divisions which are less fair. @cite_8 studies the effect of different group-decision rules while @cite_9 uses a threshold decision rule which is a generalized version of our majority rule (an allocation is accepted if at least @math agents in the responder group vote to accept it). 
These studies are only tangentially relevant to the present paper, since they deal with a much simpler division problem in which the divided good is homogeneous (money) rather than heterogeneous (cake or land). | {
"cite_N": [
"@cite_9",
"@cite_15",
"@cite_6",
"@cite_8"
],
"mid": [
"2099790472",
"",
"1521758236",
"2078607273"
],
"abstract": [
"Couples looking for jobs in the same labor market may cause instabilities. We deter- mine a natural preference domain, the domain of weakly responsive preferences, that guarantees stability. Under a restricted unemployment aversion condition we show that this domain is maximal for the existence of stable matchings. We illustrate how small deviations from (weak) responsiveness, that model the wish of couples to be closer together, cause instability, even when we use a weaker stability notion that excludes myopic blocking. Our remaining results deal with various properties of the set of stable matchings for \"responsive couples markets,\" viz., optimal- ity, filled positions, and manipulation. Journal of Economic Literature Classification Numbers: C78; J41.",
"",
"The allocation of resources within a system of autonomous agents, that not only havepreferences over alternative allocations of resources but also actively participate in com-puting an allocation, is an exciting area of research at the interface of Computer Scienceand Economics. This paper is a survey of some of the most salient issues in MultiagentResource Allocation. In particular, we review various languages to represent the pref-erences of agents over alternative allocations of resources as well as different measuresof social welfare to assess the overall quality of an allocation. We also discuss pertinentissues regarding allocation procedures and present important complexity results. Ourpresentation of theoretical issues is complemented by a discussion of software packagesfor the simulation of agent-based market places. We also introduce four major applica-tion areas for Multiagent Resource Allocation, namely industrial procurement, sharingof satellite resources, manufacturing control, and grid computing",
"Abstract We introduce an extension of Kaneko′s ratio equilibrium to the case of economies with many public and private goods. Specifically, we employ a general production model where public goods may be inputs in the production of public goods. Using the accounting concept of transfer prices, we propose a definition of ratio equilibrium that generalizes Kaneko′s. We prove that, under standard assumptions, ratio equilibria exist and are free-access core allocations and therefore Pareto efficient. Journal of Economic Literature Classification Numbers: D50, H41, D62."
]
} |
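The threshold decision rule described above can be illustrated with a minimal sketch; the per-member acceptance thresholds are hypothetical and only serve to show how the majority rule arises as a special case:

```python
def group_accepts(offer, thresholds, c):
    """Threshold decision rule: the responder group accepts the proposed
    share `offer` if at least c members individually find it acceptable."""
    votes = sum(1 for t in thresholds if offer >= t)
    return votes >= c

def majority_accepts(offer, thresholds):
    """Majority rule as the special case c = floor(m/2) + 1."""
    return group_accepts(offer, thresholds, len(thresholds) // 2 + 1)

# Hypothetical per-member acceptance thresholds (fractions of the pie):
members = [0.2, 0.3, 0.5]
assert majority_accepts(0.4, members)          # two of the three members accept
assert not group_accepts(0.4, members, c=3)    # unanimity would reject the same offer
```

Varying c from 1 to the group size interpolates between the most permissive and the unanimous responder group, which is exactly the knob the cited threshold-rule experiments turn.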
1510.03814 | 2272983229 | In many applications, we need to measure similarity between nodes in a large network based on features of their neighborhoods. Although in-network node similarity based on proximity has been well investigated, surprisingly, measuring in-network node similarity based on neighborhoods remains a largely untouched problem in literature. One challenge is that in different applications we may need different measurements that manifest different meanings of similarity. Furthermore, we often want to make trade-offs between specificity of neighborhood matching and efficiency. In this paper, we investigate the problem in a principled and systematic manner. We develop a unified parametric model and a series of four instance measures. Those instance similarity measures not only address a spectrum of various meanings of similarity, but also present a series of trade-offs between computational cost and strictness of matching between neighborhoods of nodes being compared. By extensive experiments and case studies, we demonstrate the effectiveness of the proposed model and its instances. | Node proximity is widely adopted as a similarity measure in many data mining and machine learning tasks. Koutra @cite_18 proposed a unified framework of Guilt-by-Association methods. A wide range of popular proximity measures, such as Personalized PageRank @cite_9 , Random Walk with Restart @cite_10 and Generalized Belief Propagation @cite_28 , are special cases of, or can be approximated by, Guilt-by-Association. The basic assumption behind these proximity-based measures is that nodes in a network influence each other through edges and paths. Thus, nodes within the same community are often more similar than those from different communities. Clearly, proximity-based measures cannot be used to solve the problems demonstrated in Example . | {
"cite_N": [
"@cite_28",
"@cite_9",
"@cite_18",
"@cite_10"
],
"mid": [
"2169415915",
"2069153192",
"1906783320",
"2133299088"
],
"abstract": [
"Important inference problems in statistical physics, computer vision, error-correcting coding theory, and artificial intelligence can all be reformulated as the computation of marginal probabilities on factor graphs. The belief propagation (BP) algorithm is an efficient way to solve these problems that is exact when the factor graph is a tree, but only approximate when the factor graph has cycles. We show that BP fixed points correspond to the stationary points of the Bethe approximation of the free energy for a factor graph. We explain how to obtain region-based free energy approximations that improve the Bethe approximation, and corresponding generalized belief propagation (GBP) algorithms. We emphasize the conditions a free energy approximation must satisfy in order to be a \"valid\" or \"maxent-normal\" approximation. We describe the relationship between four different methods that can be used to generate valid approximations: the \"Bethe method\", the \"junction graph method\", the \"cluster variation method\", and the \"region graph method\". Finally, we explain how to tell whether a region-based approximation, and its corresponding GBP algorithm, is likely to be accurate, and describe empirical results showing that GBP can significantly outperform BP.",
"Recent web search techniques augment traditional text matching with a global notion of \"importance\" based on the linkage structure of the web, such as in Google's PageRank algorithm. For more refined searches, this global notion of importance can be specialized to create personalized views of importance--for example, importance scores can be biased according to a user-specified set of initially-interesting pages. Computing and storing all possible personalized views in advance is impractical, as is computing personalized views at query time, since the computation of each view requires an iterative computation over the web graph. We present new graph-theoretical results, and a new technique based on these results, that encode personalized views as partial vectors. Partial vectors are shared across multiple personalized views, and their computation and storage costs scale well with the number of views. Our approach enables incremental computation, so that the construction of personalized views from partial vectors is practical at query time. We present efficient dynamic programming algorithms for computing partial vectors, an algorithm for constructing personalized views from partial vectors, and experimental results demonstrating the effectiveness and scalability of our techniques.",
"If several friends of Smith have committed petty thefts, what would you say about Smith? Most people would not be surprised if Smith is a hardened criminal. Guilt-by-association methods combine weak signals to derive stronger ones, and have been extensively used for anomaly detection and classification in numerous settings (e.g., accounting fraud, cyber-security, calling-card fraud). The focus of this paper is to compare and contrast several very successful, guilt-by-association methods: Random Walk with Restarts, Semi-Supervised Learning, and Belief Propagation (BP). Our main contributions are two-fold: (a) theoretically, we prove that all the methods result in a similar matrix inversion problem; (b) for practical applications, we developed FaBP, a fast algorithm that yields 2× speedup, equal or higher accuracy than BP, and is guaranteed to converge. We demonstrate these benefits using synthetic and real datasets, including YahooWeb, one of the largest graphs ever studied with BP.",
"How closely related are two nodes in a graph? How to compute this score quickly, on huge, disk-resident, real graphs? Random walk with restart (RWR) provides a good relevance score between two nodes in a weighted graph, and it has been successfully used in numerous settings, like automatic captioning of images, generalizations to the \"connection subgraphs\", personalized PageRank, and many more. However, the straightforward implementations of RWR do not scale for large graphs, requiring either quadratic space and cubic pre-computation time, or slow response time on queries. We propose fast solutions to this problem. The heart of our approach is to exploit two important properties shared by many real graphs: (a) linear correlations and (b) block- wise, community-like structure. We exploit the linearity by using low-rank matrix approximation, and the community structure by graph partitioning, followed by the Sherman- Morrison lemma for matrix inversion. Experimental results on the Corel image and the DBLP dabasets demonstrate that our proposed methods achieve significant savings over the straightforward implementations: they can save several orders of magnitude in pre-computation and storage cost, and they achieve up to 150x speed up with 90 + quality preservation."
]
} |
1510.03814 | 2272983229 | In many applications, we need to measure similarity between nodes in a large network based on features of their neighborhoods. Although in-network node similarity based on proximity has been well investigated, surprisingly, measuring in-network node similarity based on neighborhoods remains a largely untouched problem in literature. One challenge is that in different applications we may need different measurements that manifest different meanings of similarity. Furthermore, we often want to make trade-offs between specificity of neighborhood matching and efficiency. In this paper, we investigate the problem in a principled and systematic manner. We develop a unified parametric model and a series of four instance measures. Those instance similarity measures not only address a spectrum of various meanings of similarity, but also present a series of trade-offs between computational cost and strictness of matching between neighborhoods of nodes being compared. By extensive experiments and case studies, we demonstrate the effectiveness of the proposed model and its instances. | A major group of in-network similarity measures are based on how the nodes in question are connected with the other nodes. SimRank @cite_11 is one of the most popularly used in-network similarity measures. It is based on the principle that two nodes are similar if they are linked to similar nodes. Specifically, to compute the similarity between nodes @math and @math , SimRank aggregates paths between other nodes @math and @math and between @math and @math , respectively. SimRank has an undesirable property: if the lengths of paths between @math and @math are all odd, then @math , that is, @math and @math cannot be perfectly similar to each other. To fix this issue, some variations of SimRank were proposed @cite_6 @cite_19 . Those variations are still based on aggregation of paths between nodes. 
PathSim @cite_21 , a recently proposed similarity measure, employed a similar idea of counting paths following predefined patterns between nodes to derive similarity. In SimRank and its variations, as well as PathSim, two nodes may not be similar even if their neighborhoods are isomorphic. Those methods cannot be used to measure the similarity in the applications in Example . | {
"cite_N": [
"@cite_19",
"@cite_21",
"@cite_6",
"@cite_11"
],
"mid": [
"1545879303",
"103340358",
"2140445565",
"2117831564"
],
"abstract": [
"Similarity assessment is one of the core tasks in hyperlink analysis. Recently, with the proliferation of applications, e.g., web search and collaborative filtering, SimRank has been a well-studied measure of similarity between two nodes in a graph. It recursively follows the philosophy that \"two nodes are similar if they are referenced (have incoming edges) from similar nodes\", which can be viewed as an aggregation of similarities based on incoming paths. Despite its popularity, SimRank has an undesirable property, i.e., \"zero-similarity\": It only accommodates paths with equal length from a common \"center\" node. Thus, a large portion of other paths are fully ignored. This paper attempts to remedy this issue. (1) We propose and rigorously justify SimRank*, a revised version of SimRank, which resolves such counter-intuitive \"zero-similarity\" issues while inheriting merits of the basic SimRank philosophy. (2) We show that the series form of SimRank* can be reduced to a fairly succinct and elegant closed form, which looks even simpler than SimRank, yet enriches semantics without suffering from increased computational cost. This leads to a fixed-point iterative paradigm of SimRank* in O(Knm) time on a graph of n nodes and m edges for K iterations, which is comparable to SimRank. (3) To further optimize SimRank* computation, we leverage a novel clustering strategy via edge concentration. Due to its NP-hardness, we devise an efficient and effective heuristic to speed up SimRank* computation to O(Knm) time, where m is generally much smaller than m. (4) Using real and synthetic data, we empirically verify the rich semantics of SimRank*, and demonstrate its high computation efficiency.",
"Similarity search is a primitive operation in database and Web search engines. With the advent of large-scale heterogeneous information networks that consist of multi-typed, interconnected objects, such as the bibliographic networks and social media networks, it is important to study similarity search in such networks. Intuitively, two objects are similar if they are linked by many paths in the network. However, most existing similarity measures are defined for homogeneous networks. Different semantic meanings behind paths are not taken into consideration. Thus they cannot be directly applied to heterogeneous networks. In this paper, we study similarity search that is defined among the same type of objects in heterogeneous networks. Moreover, by considering different linkage paths in a network, one could derive various similarity semantics. Therefore, we introduce the concept of meta path-based similarity, where a meta path is a path consisting of asequence of relations defined between different object types (i.e., structural paths at the meta level). No matter whether a user would like to explicitly specify a path combination given sufficient domain knowledge, or choose the best path by experimental trials, or simply provide training examples to learn it, meta path forms a common base for a network-based similarity search engine. In particular, under the meta path framework we define a novel similarity measure called PathSim that is able to find peer objects in the network (e.g., find authors in the similar field and with similar reputation), which turns out to be more meaningful in many scenarios compared with random-walk based similarity measures. In order to support fast online query processing for PathSim queries, we develop an efficient solution that partially materializes short meta paths and then concatenates them online to compute top-k results. Experiments on real data sets demonstrate the effectiveness and efficiency of our proposed paradigm.",
"A key task in analyzing social networks and other complex networks is role analysis: describing and categorizing nodes by how they interact with other nodes. Two nodes have the same role if they interact with equivalent sets of neighbors. The most fundamental role equivalence is automorphic equivalence. Unfortunately, the fastest algorithm known for graph automorphism is nonpolynomial. Moreover, since exact equivalence is rare, a more meaningful task is measuring the role similarity between any two nodes. This task is closely related to the link-based similarity problem that SimRank addresses. However, SimRank and other existing simliarity measures are not sufficient because they do not guarantee to recognize automorphically or structurally equivalent nodes. This paper makes two contributions. First, we present and justify several axiomatic properties necessary for a role similarity measure or metric. Second, we present RoleSim, a role similarity metric which satisfies these axioms and which can be computed with a simple iterative algorithm. We rigorously prove that RoleSim satisfies all the axiomatic properties and demonstrate its superior interpretative power on both synthetic and real datasets.",
"The problem of measuring \"similarity\" of objects arises in many applications, and many domain-specific measures have been developed, e.g., matching text across documents or computing overlap among item-sets. We propose a complementary approach, applicable in any domain with object-to-object relationships, that measures similarity of the structural context in which objects occur, based on their relationships with other objects. Effectively, we compute a measure that says \"two objects are similar if they are related to similar objects:\" This general similarity measure, called SimRank, is based on a simple and intuitive graph-theoretic model. For a given domain, SimRank can be combined with other domain-specific similarity measures. We suggest techniques for efficient computation of SimRank scores, and provide experimental results on two application domains showing the computational feasibility and effectiveness of our approach."
]
} |
1510.03814 | 2272983229 | In many applications, we need to measure similarity between nodes in a large network based on features of their neighborhoods. Although in-network node similarity based on proximity has been well investigated, surprisingly, measuring in-network node similarity based on neighborhoods remains a largely untouched problem in literature. One challenge is that in different applications we may need different measurements that manifest different meanings of similarity. Furthermore, we often want to make trade-offs between specificity of neighborhood matching and efficiency. In this paper, we investigate the problem in a principled and systematic manner. We develop a unified parametric model and a series of four instance measures. Those instance similarity measures not only address a spectrum of various meanings of similarity, but also present a series of trade-offs between computational cost and strictness of matching between neighborhoods of nodes being compared. By extensive experiments and case studies, we demonstrate the effectiveness of the proposed model and its instances. | Recently, Henderson @cite_5 proposed RolX, which applied Non-Negative Matrix Factorization to softly cluster nodes in a network. It assumes a node-feature matrix, which is obtained by recursive feature extraction @cite_22 . RolX factorizes the node-feature matrix into a node-role matrix and a role-feature matrix. Then, the node-role matrix can be exploited to compute similarities between nodes. Gilpin @cite_13 introduced supervision to RolX. Since these methods assume a node-feature matrix, they are not specific for in-network neighborhood-based similarity measurement. | {
"cite_N": [
"@cite_5",
"@cite_13",
"@cite_22"
],
"mid": [
"2057685268",
"2134280081",
"2001325956"
],
"abstract": [
"Given a network, intuitively two nodes belong to the same role if they have similar structural behavior. Roles should be automatically determined from the data, and could be, for example, \"clique-members,\" \"periphery-nodes,\" etc. Roles enable numerous novel and useful network-mining tasks, such as sense-making, searching for similar nodes, and node classification. This paper addresses the question: Given a graph, how can we automatically discover roles for nodes? We propose RolX (Role eXtraction), a scalable (linear in the number of edges), unsupervised learning approach for automatically extracting structural roles from general network data. We demonstrate the effectiveness of RolX on several network-mining tasks: from exploratory data analysis to network transfer learning. Moreover, we compare network role discovery with network community discovery. We highlight fundamental differences between the two (e.g., roles generalize across disconnected networks, communities do not); and show that the two approaches are complimentary in nature.",
"Role discovery in graphs is an emerging area that allows analysis of complex graphs in an intuitive way. In contrast to community discovery, which finds groups of highly connected nodes, role discovery finds groups of nodes that share similar topological structure in the graph, and hence a common role (or function) such as being a broker or a periphery node. However, existing work so far is completely unsupervised, which is undesirable for a number of reasons. We provide an alternating least squares framework that allows convex constraints to be placed on the role discovery problem, which can provide useful supervision. In particular we explore supervision to enforce i) sparsity, ii) diversity, and iii) alternativeness in the roles. We illustrate the usefulness of this supervision on various data sets and applications.",
"Given a graph, how can we extract good features for the nodes? For example, given two large graphs from the same domain, how can we use information in one to do classification in the other (i.e., perform across-network classification or transfer learning on graphs)? Also, if one of the graphs is anonymized, how can we use information in one to de-anonymize the other? The key step in all such graph mining tasks is to find effective node features. We propose ReFeX (Recursive Feature eXtraction), a novel algorithm, that recursively combines local (node-based) features with neighborhood (egonet-based) features; and outputs regional features -- capturing \"behavioral\" information. We demonstrate how these powerful regional features can be used in within-network and across-network classification and de-anonymization tasks -- without relying on homophily, or the availability of class labels. The contributions of our work are as follows: (a) ReFeX is scalable and (b) it is effective, capturing regional (\"behavioral\") information in large graphs. We report experiments on real graphs from various domains with over 1M edges, where ReFeX outperforms its competitors on typical graph mining tasks like network classification and de-anonymization."
]
} |
1510.03814 | 2272983229 | In many applications, we need to measure similarity between nodes in a large network based on features of their neighborhoods. Although in-network node similarity based on proximity has been well investigated, surprisingly, measuring in-network node similarity based on neighborhoods remains a largely untouched problem in literature. One challenge is that in different applications we may need different measurements that manifest different meanings of similarity. Furthermore, we often want to make trade-offs between specificity of neighborhood matching and efficiency. In this paper, we investigate the problem in a principled and systematic manner. We develop a unified parametric model and a series of four instance measures. Those instance similarity measures not only address a spectrum of various meanings of similarity, but also present a series of trade-offs between computational cost and strictness of matching between neighborhoods of nodes being compared. By extensive experiments and case studies, we demonstrate the effectiveness of the proposed model and its instances. | In a broader scope, the social science community studied the role of node problem @cite_2 . The approaches are mainly based on finding an equivalence relations on nodes so that the nodes can be grouped into equivalent classes. Sparrow @cite_20 and Borgatti and Everett @cite_1 proposed algorithms for finding structural equivalence and regular equivalence classes in which automorphic equivalent nodes are assigned to the same class. However, an equivalence relation on nodes cannot provide node-pair similarities. Thus, those methods cannot be applied in many data mining and machine learning tasks. | {
"cite_N": [
"@cite_1",
"@cite_20",
"@cite_2"
],
"mid": [
"2163308020",
"2073773946",
"2017099446"
],
"abstract": [
"Abstract In this paper we present two algorithms for computing the extent of regular equivalence among pairs of nodes in a network. The first algorithm, REGE, is well known, but has not previously been described in the literature. The second algorithm, CATREGE, is new. Whereas REGE is applicable to quantitative data, CATREGE is used for categorical data. For binary data, either algorithm may be used, though the CATREGE algorithm is significantly faster and its output similarity coefficients have better metric properties. The CATREGE algorithm is also useful pedagogically, because it is easier to grasp.",
"Abstract An efficient method for computing role (automorphic) equivalences in large networks is described. Numerical signatures (real numbers) are generated for each node. Role-identical nodes share common numerical signatures. The decomposition of the node set into classes by reference to the numerical signatures helps determine the automorphic equivalence classes of a network. The technique is applicable to symmetric and directed networks, and to graphs with multiple link types. The algorithm is linear with respect to the number of links in the network. Its theoretical foundation exploits properties of transcendental numbers.",
"The aim of this paper is to understand the interrelations among relations within concrete social groups. Social structure is sought, not ideal types, although the latter are relevant to interrelations among relations. From a detailed social network, patterns of global relations can be extracted, within which classes of equivalently positioned individuals are delineated. The global patterns are derived algebraically through a ‘functorial’ mapping of the original pattern. Such a mapping (essentially a generalized homomorphism) allows systematically for concatenation of effects through the network. The notion of functorial mapping is of central importance in the ‘theory of categories,’ a branch of modern algebra with numerous applications to algebra, topology, logic. The paper contains analyses of two social networks, exemplifying this approach."
]
} |
1510.03814 | 2272983229 | In many applications, we need to measure similarity between nodes in a large network based on features of their neighborhoods. Although in-network node similarity based on proximity has been well investigated, surprisingly, measuring in-network node similarity based on neighborhoods remains a largely untouched problem in literature. One challenge is that in different applications we may need different measurements that manifest different meanings of similarity. Furthermore, we often want to make trade-offs between specificity of neighborhood matching and efficiency. In this paper, we investigate the problem in a principled and systematic manner. We develop a unified parametric model and a series of four instance measures. Those instance similarity measures not only address a spectrum of various meanings of similarity, but also present a series of trade-offs between computational cost and strictness of matching between neighborhoods of nodes being compared. By extensive experiments and case studies, we demonstrate the effectiveness of the proposed model and its instances. | Another line of research related to our study is graph similarity measures. Comparing graphs directly is computationally costly. A major idea of comparing graphs is to convert graphs to feature vectors. For example, Fei and Huan @cite_4 used frequent subgraphs as features of graphs. The most widely studied graph similar measures are graph kernels @cite_8 @cite_3 @cite_23 @cite_7 , which project graphs into a feature space of finite or infinite dimensionality, and use the inner product of the feature vectors of the two graphs as the similarity score. The features in graph kernels are often substructures of graphs, e.g., walks, subtrees or graphlets. In , we will illustrate why such graph similarity measures cannot be used directly to solve the in-network node similarity problem. | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_3",
"@cite_23"
],
"mid": [
"2121727356",
"1483941173",
"1816257748",
"1835509607",
"2159156271"
],
"abstract": [
"With the development of highly efficient graph data collection technology in many application fields, classification of graph data emerges as an important topic in the data mining and machine learning community. Towards building highly accurate classification models for graph data, here we present an efficient graph feature selection method. In our method, we use frequent subgraphs as features for graph classification. Different from existing methods, we consider the spatial distribution of the subgraph features in the graph data and select those ones that have consistent spatial location. We have applied our feature selection methods to several cheminformatics benchmarks. Our method demonstrates a significant improvement of prediction as compared to the state-of-the-art methods.",
"This book addresses the question of how far it is possible to go in knowledge representation and reasoning by representing knowledge with graphs (in the graph theory sense) and reasoning with graph operations. The authors have carefully structured the book with the first part covering basic conceptual graphs, the second developing the computational aspects, and the final section pooling the kernel extensions. An appendix summarizes the basic mathematical notions. This is the first book to provide a comprehensive view on the computational facets of conceptual graphs. The mathematical prerequisites are minimal and the material presented can be used in artificial intelligence courses at graduate level upwards.",
"As most ‘real-world’ data is structured, research in kernel methods has begun investigating kernels for various kinds of structured data. One of the most widely used tools for modeling structured data are graphs. An interesting and important challenge is thus to investigate kernels on instances that are represented by graphs. So far, only very specific graphs such as trees and strings have been considered.",
"A new kernel function between two labeled graphs is presented. Feature vectors are defined as the counts of label paths produced by random walks on graphs. The kernel computation finally boils down to obtaining the stationary state of a discrete-time linear system, thus is efficiently performed by solving simultaneous linear equations. Our kernel is based on an infinite dimensional feature space, so it is fundamentally different from other string or tree kernels based on dynamic programming. We will present promising empirical results in classification of chemical compounds.",
"State-of-the-art graph kernels do not scale to large graphs with hundreds of nodes and thousands of edges. In this article we propose to compare graphs by counting graphlets, i.e., subgraphs with k nodes where k ∈ 3, 4, 5 . Exhaustive enumeration of all graphlets being prohibitively expensive, we introduce two theoretically grounded speedup schemes, one based on sampling and the second one specifically designed for bounded degree graphs. In our experimental evaluation, our novel kernels allow us to efficiently compare large graphs that cannot be tackled by existing graph kernels."
]
} |